Similar Literature
1.
This paper deals with the identification of treatment effects using difference-in-differences estimators when several pretreatment periods are available. We define a family of identifying nonnested assumptions that lead to alternative difference-in-differences estimators. We show that the most commonly used difference-in-differences estimators imply equivalence conditions for the identifying nonnested assumptions. We further propose a model that can be used to test multiple equivalence conditions without imposing any of them. We conduct a Monte Carlo analysis and apply our approach to several recent papers to show its practical relevance.
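A minimal sketch (not the paper's estimators): the canonical two-period, two-group difference-in-differences regression on simulated data, with hypothetical variable names. The multi-pretreatment-period estimators and equivalence tests discussed above build on this basic specification.

```python
# Canonical 2x2 difference-in-differences via OLS on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)      # group indicator
post = rng.integers(0, 2, n)         # period indicator
true_effect = 1.5
y = 0.5 + 1.0 * treated + 0.8 * post + true_effect * treated * post + rng.normal(size=n)
df = pd.DataFrame({"y": y, "treated": treated, "post": post})

# The coefficient on treated:post is the difference-in-differences estimate.
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```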

2.
The generalized estimating equations (GEE) approach has attracted considerable interest for the analysis of correlated response data. This paper considers a model selection criterion based on the multivariate quasi-likelihood (MQL) in the GEE framework; the GEE approach is closely related to the MQL. We derive a necessary and sufficient condition for the uniqueness of the risk function based on the MQL by using properties of differential geometry. Under this condition, we establish a formal derivation of the model selection criterion as an asymptotically unbiased estimator of the prediction risk, explicitly taking into account the effect of estimating the correlation matrix used in the GEE procedure.
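A minimal sketch, not the MQL-based criterion derived in the paper: fitting a GEE with an exchangeable working correlation in statsmodels on simulated clustered count data, and computing QIC, a commonly used quasi-likelihood information criterion (the qic() call assumes a recent statsmodels release).

```python
# GEE with an exchangeable working correlation on simulated clustered counts.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_visits = 200, 4
subject = np.repeat(np.arange(n_subjects), n_visits)
x = rng.normal(size=n_subjects * n_visits)
b = np.repeat(rng.normal(scale=0.5, size=n_subjects), n_visits)  # within-subject effect
y = rng.poisson(np.exp(0.2 + 0.3 * x + b))
df = pd.DataFrame({"y": y, "x": x, "subject": subject})

model = smf.gee("y ~ x", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.summary())
print("QIC, QICu:", res.qic())   # quasi-likelihood information criteria
```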

3.
Patients with different characteristics (e.g., biomarkers, risk factors) may respond differently to the same medicine. Personalized medicine clinical studies designed to identify subgroup treatment efficacies can benefit patients and save medical resources. However, subgroup treatment effect identification complicates the study design when the desired operating characteristics must be maintained. We investigate three Bayesian adaptive models for subgroup treatment effect identification: pairwise independent, hierarchical, and cluster hierarchical achieved via a Dirichlet process (DP). The impact of interim analyses and longitudinal data modeling on the personalized medicine study design is also explored. Interim analyses are considered because they can accelerate personalized medicine studies when early stopping rules for success or futility are met. We apply the integrated two-component prediction (ITP) method for longitudinal data simulation, and simple linear regression for longitudinal data imputation, to optimize the study design. The designs' performance in terms of power for the subgroup treatment effects and the overall treatment effect, sample size, and study duration is investigated via simulation. We find that the hierarchical model is an optimal approach to identifying subgroup treatment effects, and that the cluster hierarchical model is an excellent alternative when insufficient information is available for specifying the priors. Introducing interim analyses leads to a trade-off between power and expected sample size via adjustment of the early stopping criteria. Introducing longitudinal modeling slightly improves power. These findings can be applied to future personalized medicine studies with discrete or time-to-event endpoints.

4.
A known result in the econometrics literature is that when the covariates of an underlying data generating process are jointly normally distributed, estimates from a nonlinear model that is misspecified as linear can be interpreted as average marginal effects. This has been shown for models with exogenous covariates and separability between covariates and errors. In this paper, we extend this identification result to a variety of more general cases, in particular to combinations of separable and nonseparable models under both exogeneity and endogeneity. So long as the underlying model belongs to one of these large classes of data generating processes, our results show that nothing else need be known about the true DGP (beyond normality of the observable data, a testable assumption) for linear estimators to be interpretable as average marginal effects. We use simulation to explore the performance of these estimators under a misspecified linear model and show that they perform well when the data are normal but can perform poorly when this is not the case.
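A minimal simulation sketch of the result described above, under assumed settings: with jointly normal covariates and a probit data generating process, the OLS slopes from the misspecified linear model approximate the average marginal effects.

```python
# OLS on a probit DGP with jointly normal covariates approximates the AME.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200_000
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, 0.4], [0.4, 1.0]], size=n)
beta = np.array([0.8, -0.5])
y = (X @ beta + rng.normal(size=n) > 0).astype(float)   # probit DGP

# Average marginal effects of the true probit model: mean of phi(X'beta) times beta
ame = norm.pdf(X @ beta).mean() * beta

# OLS of y on X (misspecified linear probability model)
Xc = np.column_stack([np.ones(n), X])
ols = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]

print("AME:", ame)
print("OLS:", ols)   # close to the AME when the covariates are jointly normal
```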

5.
Even in randomized experiments, the identification of causal effects is often threatened by the presence of missing outcome values, with missingness possibly being nonignorable. We provide sufficient conditions under which the availability of a binary instrument for nonresponse allows us to nonparametrically point identify average causal effects in some latent subgroups of units, named principal strata, defined by their nonresponse behavior in all possible combinations of treatment and instrument. Examples are provided as possible scenarios where our assumptions may be plausible.

6.
Claims that the parameters of an econometric model are invariant under changes in either policy rules or expectations processes entail super exogeneity and encompassing implications. Super exogeneity is always potentially refutable, and when both implications are involved, the Lucas critique is also refutable. We review the methodological background; the applicability of the Lucas critique; super exogeneity tests; the encompassing implications of feedback and feedforward models; and the role of incomplete information. The approach is applied to money demand in the U.S.A. to examine constancy, exogeneity, and encompassing, and reveals the Lucas critique to be inapplicable to the model under analysis.

7.
This article develops limit theory for likelihood analysis of weak exogeneity in I(2) cointegrated vector autoregressive (VAR) models incorporating deterministic terms. Conditions for weak exogeneity in I(2) VAR models are reviewed, and the asymptotic properties of conditional maximum likelihood estimators and a likelihood-based weak exogeneity test are then investigated. It is demonstrated that weak exogeneity in I(2) VAR models allows us to conduct asymptotic conditional inference based on mixed Gaussian distributions. It is then proved that a log-likelihood ratio test statistic for weak exogeneity in I(2) VAR models is asymptotically χ2-distributed. The article also presents an empirical illustration of the proposed test for weak exogeneity using Japanese macroeconomic data.
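For reference, a generic statement of the likelihood-ratio statistic referred to above, in notation not taken from the article (the restricted model imposes the weak-exogeneity restrictions in the I(2) VAR):

```latex
\[
  \mathrm{LR} \;=\; -2\left(\ell(\hat{\theta}_{\text{restricted}}) - \ell(\hat{\theta}_{\text{unrestricted}})\right)
  \;\xrightarrow{d}\; \chi^2_{q},
\]
where $\ell(\cdot)$ denotes the log-likelihood and $q$ is the number of
weak-exogeneity restrictions being tested.
```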

8.
The concepts associated with the identification problem are directly related to the probability distribution generating the data. This approach suggests simple examples that can be used for illustrative purposes and avoids the need to introduce simultaneous-equation models before discussing identification.

9.
This paper addresses the problem of identifying groups whose feature-variable means satisfy specified conditions. In this study, we refer to the identified groups as “target clusters” (TCs). To identify TCs, we propose a method based on the normal mixture model (NMM) restricted by a linear combination of means. We provide an expectation–maximization (EM) algorithm to fit the restricted NMM by the maximum-likelihood method. The convergence property of the EM algorithm and a reasonable set of initial estimates are presented. We demonstrate the method's usefulness and validity through a simulation study and two well-known data sets. The proposed method provides several types of useful clusters that would be difficult to obtain with conventional clustering or exploratory data analysis methods based on the ordinary NMM. A simple comparison with another target clustering approach shows that the proposed method is promising for this identification task.
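A minimal sketch of a standard, unrestricted EM algorithm for a two-component univariate normal mixture on simulated data; the paper's method additionally imposes a linear restriction on the component means, which would modify the M-step below.

```python
# Unrestricted EM for a two-component univariate normal mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 200)])

# initial estimates
pi_, mu = 0.5, np.array([x.min(), x.max()])
sigma = np.array([x.std(), x.std()])

for _ in range(200):
    # E-step: posterior probability that each point belongs to component 2
    d1 = (1 - pi_) * norm.pdf(x, mu[0], sigma[0])
    d2 = pi_ * norm.pdf(x, mu[1], sigma[1])
    r = d2 / (d1 + d2)
    # M-step: update the mixing weight, means, and standard deviations
    pi_ = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sigma = np.array([
        np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
        np.sqrt(np.average((x - mu[1]) ** 2, weights=r)),
    ])

print("weights:", 1 - pi_, pi_, "means:", mu, "sds:", sigma)
```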

11.
We develop point identification for the local average treatment effect when the binary treatment is subject to measurement error. The standard instrumental variable estimator is inconsistent for the parameter because the measurement error is nonclassical by construction. We correct the problem by identifying the distribution of the measurement error based on the use of an exogenous variable that can even be a binary covariate. The moment conditions derived from the identification lead to generalized method of moments estimation with asymptotically valid inferences. Monte Carlo simulations and an empirical illustration demonstrate the usefulness of the proposed procedure.
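A minimal simulation sketch of the problem described above (not the paper's correction): with a misclassified binary treatment, the standard Wald/IV estimator of the local average treatment effect is inconsistent. Variable names are illustrative.

```python
# Wald/IV estimator with a correctly measured vs. a misclassified treatment.
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
z = rng.integers(0, 2, n)                                   # binary instrument
d = ((0.2 + 0.5 * z) > rng.uniform(size=n)).astype(int)     # true treatment
true_effect = 2.0
y = 1.0 + true_effect * d + rng.normal(size=n)

# nonclassical measurement error: flip 15% of the observed treatment indicators
flip = rng.uniform(size=n) < 0.15
d_obs = np.where(flip, 1 - d, d)

def wald(y, d, z):
    return (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

print("Wald with true treatment:       ", wald(y, d, z))      # approx 2.0
print("Wald with mismeasured treatment:", wald(y, d_obs, z))  # inflated by about 1/(1 - 2*0.15)
```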

12.
Logistic regression modeling has received a great deal of attention in the literature in recent years, including the identification of outliers. A variety of methods for identifying outliers, such as the standardized Pearson residuals, are now available. These methods, however, are successful only if the data contain a single outlier. In the presence of multiple outliers, which is often the case in practice, these methods fail to detect the outliers because of the well-known masking (false negative) and swamping (false positive) effects. In this article, we propose a new method for the identification of multiple outliers in logistic regression. We develop a generalized version of the standardized Pearson residuals based on group deletion and then propose a technique for identifying multiple outliers. The performance of the proposed method is investigated through several examples.
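A minimal sketch of the classical single-case standardized Pearson residuals that the article generalizes via group deletion, computed for a logistic regression on simulated data with a few contaminated responses.

```python
# Standardized Pearson residuals for a logistic regression fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
X = sm.add_constant(rng.normal(size=(n, 2)))
beta = np.array([-0.5, 1.0, -1.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))
y[:3] = 1 - y[:3]   # contaminate a few cases

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
p = fit.predict(X)
v = p * (1 - p)

# leverages: diagonal of H = W^{1/2} X (X'WX)^{-1} X' W^{1/2}
W_half = np.sqrt(v)[:, None] * X
XtWX_inv = np.linalg.inv((X * v[:, None]).T @ X)
h = np.einsum("ij,ij->i", W_half @ XtWX_inv, W_half)

pearson = (y - p) / np.sqrt(v)
standardized = pearson / np.sqrt(1 - h)
print(np.argsort(-np.abs(standardized))[:5])   # indices of the most extreme cases
```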

13.
We study the influence of a single data case on the results of a statistical analysis. This problem has been addressed in several articles for linear discriminant analysis (LDA). Kernel Fisher discriminant analysis (KFDA) is a kernel-based extension of LDA. In this article, we study the effect of atypical data points on KFDA and develop criteria for identifying cases that have a detrimental effect on the classification performance of the KFDA classifier. We find that the criteria are successful in identifying cases whose omission from the training data, prior to obtaining the KFDA classifier, results in reduced error rates.

14.
The need to evaluate the performance of active labour market policies is no longer questioned. Even though OECD countries spend significant shares of national resources on these measures, unemployment rates remain high or even increase. We focus on microeconometric evaluation, which has to solve the fundamental evaluation problem and overcome the possible occurrence of selection bias. When using non-experimental data, different evaluation approaches can be considered. The aim of this paper is to review the most relevant estimators and to discuss their identifying assumptions and their advantages and disadvantages. We present estimators based on some form of exogeneity (selection on observables) as well as estimators where selection may also occur on unobservable characteristics. Since the possible occurrence of effect heterogeneity has become a major topic in evaluation research in recent years, we also assess the ability of each estimator to deal with it. Additionally, we discuss some recent extensions of the static evaluation framework that allow for dynamic treatment evaluation.

15.
This article provides a strategy for identifying the existence and direction of a causal effect in a generalized nonparametric and nonseparable model identified by instrumental variables. The causal effect concerns how the outcome depends on the endogenous treatment variable. The outcome variable, treatment variable, other explanatory variables, and the instrumental variable can be essentially any combination of continuous, discrete, or “other” variables. In particular, it is not necessary to have any continuous variables, none of the variables need to have large support, and the instrument can be binary even if the corresponding endogenous treatment variable and/or outcome is continuous. The outcome can be mismeasured or interval-measured, and the endogenous treatment variable need not even be observed. The identification results are constructive and can be empirically implemented using standard estimation results.

16.
Several Issues to Note When Applying the Johansen and Juselius Cointegration Test
The Johansen and Juselius likelihood-ratio method for testing multivariate cointegration relations has been widely applied in empirical analysis. Building on a summary of the method, and in view of the rather inconsistent ways it has been applied in domestic studies, this paper points out several issues that deserve attention, such as choosing the deterministic components according to the data generating process of the economic time series, the use of the test's critical values, and the non-uniqueness of the number of cointegration relations. It also briefly discusses the determination of the lag order and tests of exogeneity and causality, and finally points out some shortcomings of the test. Through the discussion of these issues, the paper attempts to provide simple and effective guidance for empirical researchers applying the method.
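A minimal sketch, not taken from the article: running the Johansen test with statsmodels on simulated cointegrated series; det_order selects the deterministic components and k_ar_diff the lag order, two of the choices discussed above.

```python
# Johansen cointegration test on two simulated series sharing a stochastic trend.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
T = 500
common_trend = np.cumsum(rng.normal(size=T))          # shared I(1) trend
y1 = common_trend + rng.normal(scale=0.5, size=T)
y2 = 0.7 * common_trend + rng.normal(scale=0.5, size=T)
data = np.column_stack([y1, y2])

# det_order: -1 no deterministic terms, 0 constant, 1 constant and trend
res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)
print("critical values (90/95/99%):\n", res.cvt)
```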

17.
In this article, we characterize admissible linear estimators in a multivariate linear model with an inequality constraint, under a matrix loss function. In the homogeneous class, we present several equivalent necessary and sufficient conditions for a linear estimator of estimable functions to be admissible. In the inhomogeneous class, we find that the necessary and sufficient conditions depend on the rank of the matrix in the constraint. When the rank is greater than one, necessary and sufficient conditions are obtained; when the rank is equal to one, we give necessary conditions and sufficient conditions separately. We also obtain necessary and sufficient conditions for a linear estimator of an inestimable function to be admissible in both classes.
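For reference, the standard decision-theoretic notion underlying the article, stated in generic notation that may differ from the article's setup: an estimator $\delta$ of an estimable function $K\beta$ is admissible in a class $\mathcal{D}$ under matrix loss if no other estimator in $\mathcal{D}$ dominates it in the Löwner order.

```latex
% Matrix risk of a linear estimator \delta of K\beta:
\[
  R(\delta,\beta) \;=\; \mathrm{E}_{\beta}\!\left[(\delta - K\beta)(\delta - K\beta)^{\top}\right].
\]
% \delta is admissible in \mathcal{D} if there is no \delta' \in \mathcal{D} with
\[
  R(\delta',\beta) \preceq R(\delta,\beta) \quad \text{for all } \beta
  \text{ in the constrained parameter set, with strict Löwner inequality for some } \beta,
\]
% where \preceq denotes the Löwner partial order on symmetric matrices.
```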

18.
Non-likelihood-based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare the performance of multiple imputation with generalized estimating equations and restricted pseudo-likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates and dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores.

19.
Drug-induced organ toxicity (DIOT) that leads to the removal of marketed drugs or the termination of candidate drugs has been a leading concern for regulatory agencies and pharmaceutical companies. In safety studies, genomic assays are conducted after treatment, once drug-induced adverse effects may have occurred. Two types of biomarkers are observed: biomarkers of susceptibility and biomarkers of response. This paper presents a statistical model to distinguish the two types of biomarkers and procedures to identify susceptible subpopulations. The identified biomarkers are used to develop a classification model that identifies the susceptible subpopulation. Two methods for identifying susceptibility biomarkers were evaluated in terms of predictive performance in subpopulation identification, including sensitivity, specificity, and accuracy. Method 1 considered the traditional linear model with a variable-by-treatment interaction term, and Method 2 considered fitting a single-predictor-variable model using only treatment-arm data. Monte Carlo simulation studies were conducted to evaluate the performance of the two methods and the impact of subpopulation prevalence, probability of DIOT, and sample size on predictive performance. Method 2 appeared to outperform Method 1, owing to the lack of power for testing the interaction effect. Important statistical issues and challenges regarding the identification of preclinical DIOT biomarkers are discussed. In summary, identification of predictive biomarkers for treatment determination depends heavily on the subpopulation prevalence. When the proportion of the susceptible subpopulation is 1% or less, a very large sample size is needed to ensure that a sufficient number of DIOT responses is observed for biomarker and/or subpopulation identification.
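A minimal sketch of the two screening approaches described above on simulated data, with illustrative variable names: Method 1 tests a biomarker-by-treatment interaction in the full data, while Method 2 fits a single-predictor model to the treatment arm only.

```python
# Method 1 (interaction test) vs. Method 2 (treatment-arm-only test).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
treatment = np.repeat([0, 1], n // 2)
biomarker = rng.normal(size=n)
# toxicity response occurs only in treated subjects with high biomarker values
response = 0.5 * treatment * (biomarker > 1) + rng.normal(scale=0.5, size=n)

# Method 1: linear model with a biomarker-by-treatment interaction term
X1 = sm.add_constant(np.column_stack([treatment, biomarker, treatment * biomarker]))
p_interaction = sm.OLS(response, X1).fit().pvalues[3]

# Method 2: single-predictor model fit to the treatment arm only
mask = treatment == 1
X2 = sm.add_constant(biomarker[mask])
p_treated_only = sm.OLS(response[mask], X2).fit().pvalues[1]

print("Method 1 interaction p-value:", p_interaction)
print("Method 2 treated-arm p-value:", p_treated_only)
```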

20.
In fuzzy regression discontinuity (FRD) designs, the treatment effect is identified through a discontinuity in the conditional probability of treatment assignment. We show that when identification is weak (i.e., when the discontinuity is of small magnitude), the usual t-test based on the FRD estimator and its standard error suffers from asymptotic size distortions, as in a standard instrumental variables setting. This problem can be especially severe in the FRD setting, since only observations close to the discontinuity are useful for estimating the treatment effect. To eliminate these size distortions, we propose a modified t-statistic that uses a null-restricted version of the standard error of the FRD estimator. Simple and asymptotically valid confidence sets for the treatment effect can also be constructed using this null-restricted standard error. An extension to testing for constancy of the regression discontinuity effect across covariates is also discussed. Supplementary materials for this article are available online.
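For reference, the objects discussed above in generic notation (not taken from the article): the fuzzy RD estimand is the ratio of the outcome and treatment-probability discontinuities at the cutoff $c$, and the modified test evaluates the standard error at the null value $\tau_0$ rather than at the estimate.

```latex
\[
  \tau \;=\;
  \frac{\displaystyle \lim_{x \downarrow c} \mathrm{E}[Y \mid X = x] \;-\; \lim_{x \uparrow c} \mathrm{E}[Y \mid X = x]}
       {\displaystyle \lim_{x \downarrow c} \mathrm{E}[D \mid X = x] \;-\; \lim_{x \uparrow c} \mathrm{E}[D \mid X = x]},
  \qquad
  t(\tau_0) \;=\; \frac{\hat{\tau} - \tau_0}{\widehat{\mathrm{se}}(\tau_0)}.
\]
```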
