Similar Literature
20 matching records found.
1.
We propose a Bayesian nonparametric instrumental variable approach under additive separability that allows us to correct for endogeneity bias in regression models where the covariate effects enter with unknown functional form. Bias correction relies on a simultaneous equations specification with flexible modeling of the joint error distribution, implemented via a Dirichlet process mixture prior. Both the structural and the instrumental variable equation are specified in terms of additive predictors comprising penalized splines for nonlinear effects of continuous covariates. Inference is fully Bayesian, employing efficient Markov chain Monte Carlo simulation techniques. The resulting posterior samples not only provide point estimates but also allow us to construct simultaneous credible bands for the nonparametric effects, with data-driven smoothing parameter selection. In addition, the flexible error distribution specification yields improved robustness. Both of these features are challenging to obtain in the classical framework, making the Bayesian approach advantageous. We investigate small sample properties in simulations, and an investigation of the effect of class size on student performance in Israel illustrates the proposed approach, which is implemented in the R package bayesIV. Supplementary materials for this article are available online.
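The endogeneity bias this abstract targets can be illustrated with a much simpler classical two-stage least squares (2SLS) correction on simulated data. This is not the authors' Bayesian nonparametric method, only a minimal sketch, with made-up coefficients, of why a valid instrument removes the bias that naive regression suffers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Endogeneity: the regression error u also drives x, so cov(x, u) != 0,
# while z is a valid instrument (relevant for x, independent of u).
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)   # endogenous covariate
y = 2.0 * x + u                              # true slope = 2

# Naive least squares is biased upward by cov(x, u) / var(x).
beta_ols = (x @ y) / (x @ x)

# 2SLS: project x on the instrument, then regress y on the projection.
x_hat = z * ((z @ x) / (z @ z))
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
```

The 2SLS estimate recovers the true slope, while the naive fit absorbs the correlation between x and u into the slope.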

2.
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for them. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and on the change of the hazard function. New insights into measurement error effects are revealed, as opposed to the well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
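The regression calibration idea mentioned above can be sketched in the simplest setting, a linear regression with classical additive error and a known error variance (an assumption made here purely for illustration; the paper works in the additive hazards setting):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
sigma_x2, sigma_u2 = 1.0, 0.5            # error variance assumed known

x = rng.normal(0.0, np.sqrt(sigma_x2), n)       # true covariate
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)   # surrogate with additive error
y = 1.5 * x + rng.normal(size=n)                # true slope = 1.5

# Naive slope is attenuated by lambda = var(x) / var(w).
beta_naive = (w @ y) / (w @ w)

# Regression calibration: replace w by E[X | W] = lambda * w (mean-zero case).
lam = sigma_x2 / (sigma_x2 + sigma_u2)
x_rc = lam * w
beta_rc = (x_rc @ y) / (x_rc @ x_rc)
```

The naive slope converges to 1.5 * lambda = 1.0, while the calibrated fit recovers the true slope.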

3.
The mixed linear model is a popular method for analysing unbalanced repeated measurement data. The classical statistical tests for parameters in this model are based on asymptotic theory that is unreliable in the small samples that are often encountered in practice. For testing a given fixed effect parameter with a small sample, we develop and investigate refined likelihood ratio (LR) tests. The refinements considered are the Bartlett correction and use of the Cox–Reid adjusted likelihood; these are examined separately and in combination. We illustrate the various LR tests on an actual data set and compare them in two simulation studies. The conventional LR test yields type I error rates that are higher than nominal. The adjusted LR test yields rates that are lower than nominal, with absolute accuracy similar to that of the conventional LR test in the first simulation study and better in the second. The Bartlett correction substantially improves the accuracy of the type I error rates with either the conventional or the adjusted LR test. In many cases, error rates that are very close to nominal are achieved with the refined methods.
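The Bartlett correction rescales the LR statistic so that its null mean matches that of its limiting chi-squared distribution. A simulation-based version of that idea can be sketched with a one-sample normal-mean LR test as a stand-in (not the mixed-model setting of the paper; sample size and replication counts are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
m, reps, df = 8, 20000, 1
crit = stats.chi2.ppf(0.95, df)

def lr_stat(x):
    # LR statistic for H0: mean = 0 in a normal sample with unknown variance.
    return len(x) * np.log(np.mean(x**2) / np.var(x))

# Simulate the null distribution at a small sample size.
sims = np.array([lr_stat(rng.normal(size=m)) for _ in range(reps)])

# Empirical Bartlett factor: rescale so the null mean matches E[chi2_df] = df.
bartlett = df / sims.mean()

raw_size = np.mean(sims > crit)                    # inflated above nominal 0.05
corrected_size = np.mean(bartlett * sims > crit)   # much closer to nominal
```

As in the paper's findings, the uncorrected test over-rejects in small samples and the mean rescaling pulls the type I error rate close to nominal.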

4.
Abstract

Analogs of the classical one-way MANOVA model have recently been suggested that do not assume that population covariance matrices are equal or that the error vector distribution is known. These tests are based on the sample mean and sample covariance matrix corresponding to each of the p populations. We show how to extend these tests using other measures of location, such as the trimmed mean or the coordinatewise median. These new bootstrap tests can have some outlier resistance and can perform better than the tests based on the sample mean if the error vector distribution is heavy tailed.
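The ingredients named above, a coordinatewise trimmed mean and a bootstrap location comparison, can be sketched for two heavy-tailed groups. This is a simplified two-sample illustration with made-up data, not the paper's full multi-population procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p, B = 60, 3, 2000

def ctrim(x):
    # Coordinatewise 20% trimmed mean of an (n, p) sample.
    return stats.trim_mean(x, 0.2, axis=0)

g1 = rng.standard_t(3, size=(n, p))          # heavy-tailed errors
g2 = rng.standard_t(3, size=(n, p)) + 1.0    # location-shifted population

# Bootstrap the difference of coordinatewise trimmed means.
boot = np.empty((B, p))
for b in range(B):
    boot[b] = (ctrim(g1[rng.integers(0, n, n)]) -
               ctrim(g2[rng.integers(0, n, n)]))

# Reject equal locations if 0 falls outside the bootstrap confidence region
# (squared Mahalanobis distance of the origin vs. the 95% bootstrap quantile).
center = boot.mean(axis=0)
S = np.cov(boot, rowvar=False)
d0 = center @ np.linalg.solve(S, center)
db = np.sum((boot - center) * np.linalg.solve(S, (boot - center).T).T, axis=1)
reject = d0 > np.quantile(db, 0.95)
```

With a true location shift and t_3 errors, the trimmed-mean bootstrap detects the difference despite occasional extreme observations.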

5.
Time-series data are often subject to measurement error, usually as a result of having to estimate the variable of interest. In general, however, the relationship between the surrogate variables and the true variables can be rather complicated compared with the classical additive error structure usually assumed. In this article, we address the estimation of the parameters in autoregressive models in the presence of functional measurement errors. We first develop a parameter estimation method with the help of validation data; this estimation method does not depend on the functional form or the distribution of the measurement error. The proposed estimator is proved to be consistent. Moreover, the asymptotic representation and the asymptotic normality of the estimator are also derived. Simulation results indicate that the proposed method works well in practical situations.

6.
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; that is, all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including the case where the measurement error distribution is estimated nonparametrically, as well as generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and the analysis of a nutrition data set.

7.
We consider permutation tests based on a likelihood ratio-like statistic for the one-way or k-sample design used in an example in Kolassa and Robinson [(2011), 'Saddlepoint Approximations for Likelihood Ratio Like Statistics with Applications to Permutation Tests', Annals of Statistics, 39, 3357–3368]. We give explicitly the region in which the statistic exists, obtaining results that permit calculation of the statistic on the boundary of this region. Numerical examples illustrate the improvement in the power of the tests compared with the classical statistics for long-tailed error distributions, with no loss of power for normal error distributions.
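The permutation machinery underlying such tests is generic: compute a statistic on the observed grouping, then recompute it over random relabelings to build the null distribution. The sketch below uses a classical F statistic rather than the saddlepoint-based likelihood-ratio-like statistic of the paper, on simulated long-tailed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
k, n = 3, 25

# Three groups with long-tailed t_3 errors; the third group is shifted.
pooled = np.concatenate([rng.standard_t(3, n), rng.standard_t(3, n),
                         rng.standard_t(3, n) + 2.5])
labels = np.repeat(np.arange(k), n)

def f_stat(y, g):
    return stats.f_oneway(*(y[g == j] for j in range(k))).statistic

obs = f_stat(pooled, labels)

# Permutation null: shuffle the observations across groups and recompute.
B = 2000
perm = np.array([f_stat(rng.permutation(pooled), labels) for _ in range(B)])
pval = (1 + np.sum(perm >= obs)) / (B + 1)
```

The permutation p-value is exact in distribution under the null regardless of the error law, which is why such tests remain valid for long-tailed errors where the F reference distribution does not.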

8.
This article examines structural change tests based on generalized empirical likelihood (GEL) methods in the time series context, allowing for dependent data. Standard structural change tests for the generalized method of moments (GMM) are adapted to the GEL context. We show that when moment conditions are properly smoothed, these test statistics converge to the same asymptotic distributions as in the GMM case, with both known and unknown breakpoints. New test statistics specific to GEL methods, which are robust to weak identification, are also introduced. A simulation study examines the small sample properties of the tests and reveals that the GEL-based robust tests perform well with respect to both the presence and location of a structural change and the nature of identification.

9.
In many practical applications, high-dimensional regression analyses have to take into account measurement error in the covariates. It is thus necessary to extend regularization methods, which can handle situations where the number of covariates p largely exceeds the sample size n, to the case in which covariates are also mismeasured. A variety of methods are available in this context, but many of them rely on knowledge about the measurement error and the structure of its covariance matrix. In this paper, our goal is to compare some of these methods, focusing on situations relevant for practical applications. In particular, we evaluate these methods in setups in which the measurement error distribution and dependence structure are not known and have to be estimated from data. Our focus is on variable selection, and the evaluation is based on extensive simulations.

10.
We investigate the effect of measurement error on principal component analysis in the high-dimensional setting. The effects of random, additive errors are characterized by the expectation and variance of the changes in the eigenvalues and eigenvectors. The results show that the impact of uncorrelated measurement error on the principal component scores is mainly in terms of increased variability and not bias. In practice, the error-induced increase in variability is small compared with the original variability for the components corresponding to the largest eigenvalues. This suggests that the impact will be negligible when these component scores are used in classification and regression or for visualizing data. However, the measurement error will contribute to a large variability in component loadings, relative to the loading values, such that interpretation based on the loadings can be difficult. The results are illustrated by simulating additive Gaussian measurement error in microarray expression data from cancer tumours and control tissues.
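The claim about the leading components being robust to additive noise can be checked numerically. The sketch below, on made-up rank-2 data rather than microarray data, compares the leading principal component loading vector before and after adding Gaussian measurement error:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 50

# Rank-2 signal plus additive Gaussian measurement error.
scores = rng.normal(size=(n, 2)) * np.array([10.0, 5.0])
loadings = np.linalg.qr(rng.normal(size=(p, 2)))[0]
X = scores @ loadings.T
X_err = X + rng.normal(scale=0.5, size=(n, p))

def top_pc(A):
    # Leading principal component loading vector of a centered matrix.
    return np.linalg.svd(A - A.mean(0), full_matrices=False)[2][0]

# The leading component is barely rotated by uncorrelated noise
# (up to an arbitrary sign flip, hence the absolute value).
cos_sim = abs(top_pc(X) @ top_pc(X_err))
```

When the leading eigenvalue dominates the noise level, the cosine similarity stays near 1, consistent with the abstract's conclusion for the largest-eigenvalue components.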

11.
We consider the polynomial regression model in the presence of multiplicative measurement error in the predictor. Two general methods are considered, with the methods differing in their assumptions about the distributions of the predictor and the measurement errors. Consistent parameter estimates and asymptotic standard errors are derived by using estimating equation theory. Diagnostics are presented for distinguishing additive and multiplicative measurement error. Data from a nutrition study are analysed by using the methods. The results from a simulation study are presented and the performances of the methods are compared.
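For the linear special case, the flavor of a moment-based correction for multiplicative error can be sketched as follows. The error moments are assumed known here, a simplification for illustration; the paper instead derives estimating-equation methods for the polynomial model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50000

x = rng.lognormal(0.0, 0.3, n)          # positive true predictor
u = rng.lognormal(-0.02, 0.2, n)        # multiplicative error with E[u] = 1
w = x * u                               # observed surrogate
y = 1.0 + 2.0 * x + rng.normal(size=n)  # true slope = 2

# Naive slope from regressing y on w is biased (var(w) > var(x)).
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Moment correction: since u is independent of (x, y) and E[u] = 1,
# cov(w, y) = cov(x, y), E[w] = E[x], and E[w^2] = E[u^2] E[x^2],
# so var(x) is recoverable from the observed moments of w.
Eu2 = np.exp(2 * (-0.02) + 2 * 0.2**2)  # second moment of the lognormal error
var_x = np.mean(w**2) / Eu2 - np.mean(w)**2
beta_corr = np.cov(w, y)[0, 1] / var_x
```

Unlike the additive case, the bias factor here depends on the mean of x as well as its variance, which is one reason multiplicative error needs its own treatment.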

12.
Communications in Statistics - Theory and Methods, 2012, 41(13-14): 2545-2569
We study the general linear model (GLM) with doubly exchangeable distributed errors for m observed random variables. The doubly exchangeable general linear model (DEGLM) arises when the m-dimensional error vectors are doubly exchangeable and jointly normally distributed, a much weaker assumption than the independent and identically distributed error vectors of the GLM or classical GLM (CGLM). We estimate the parameters in the model and also derive their distributions. We show that tests of intercept and slope are possible in the DEGLM as a particular case, using a parametric bootstrap as well as a multivariate Satterthwaite approximation.

13.
It is well known that more powerful variants of Dickey–Fuller unit root tests are available. We apply two of these modifications, based on simple maximum statistics and weighted symmetric estimation, to Perron tests allowing for structural change in trend of the additive outlier type. Local alternative asymptotic distributions of the modified test statistics are derived, and it is shown that their implementation can lead to appreciable finite sample and asymptotic gains in power over the standard tests. These gains are also largely comparable with those from GLS-based modifications of Perron tests, though some interesting differences do arise. This is the case for both exogenously and endogenously chosen break dates. For the latter choice, the new tests are applied to the Nelson–Plosser data.

14.
A structural regression model is considered in which some of the variables are measured with error. Instead of additive measurement errors, systematic biases are allowed by relating true and observed values via simple linear regressions. Additional data are available, based on standards, which allow for "calibration" of the measuring methods involved. Using only moment assumptions, some simple estimators are proposed and their asymptotic properties are developed. The results parallel and extend those given by Fuller (1987), in which the errors are additive and the error covariance is estimated. Maximum likelihood estimation is also discussed, and the problem is illustrated using data from an acid rain study in which the relationship between pH and alkalinity is of interest but neither variable is observed exactly.
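The calibration-by-standards idea can be sketched directly: estimate the measuring method's systematic bias from standards with known true values, invert that line, then fit the structural regression on the calibrated covariate. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
a_true, b_true = 0.5, 1.2     # systematic bias of the measuring method

# Calibration data: standards with known true values.
std_true = np.linspace(0.0, 10.0, 30)
std_obs = a_true + b_true * std_true + rng.normal(0, 0.1, 30)
b_hat, a_hat = np.polyfit(std_true, std_obs, 1)   # fitted calibration line

# Study data: only biased observations w of the true covariate x are seen.
x = rng.uniform(0, 10, 500)
w = a_true + b_true * x + rng.normal(0, 0.1, 500)
y = 3.0 + 0.7 * x + rng.normal(0, 0.2, 500)       # true slope 0.7, intercept 3

# Invert the calibration line, then fit the structural regression.
x_cal = (w - a_hat) / b_hat
slope, intercept = np.polyfit(x_cal, y, 1)
```

Regressing y on the raw w would attenuate the slope toward 0.7 / 1.2; inverting the estimated calibration line restores it.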

15.
This paper develops a likelihood-based method for fitting additive models in the presence of measurement error. It formulates the additive model using the linear mixed model representation of penalized splines. In the presence of a structural measurement error model, the resulting likelihood involves intractable integrals, and a Monte Carlo expectation maximization strategy is developed for obtaining estimates. The method's performance is illustrated with a simulation study.

16.
Abstract

The Nakagami distribution is one of the most common distributions used to model positive-valued and right-skewed data. In this study, we are interested in the goodness-of-fit problem for the Nakagami distribution. We therefore propose smooth tests for the Nakagami distribution based on orthonormal functions. We also compare these tests with some classical goodness-of-fit tests, such as the Cramer–von Mises, Anderson–Darling, and Kolmogorov–Smirnov tests, with respect to type I error rates and power. The simulation study indicates that the smooth tests give better results than these classical tests in almost all cases considered.
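A baseline for the classical comparison tests mentioned above is easy to set up with SciPy's built-in Nakagami distribution: fit by maximum likelihood, then run a Kolmogorov-Smirnov test against the fitted law. This sketches only the classical side, not the proposed smooth tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Positive, right-skewed data; here generated from a Nakagami law on purpose.
data = stats.nakagami.rvs(1.5, scale=2.0, size=300, random_state=rng)

# Fit by maximum likelihood with the location pinned at zero.
nu, loc, scale = stats.nakagami.fit(data, floc=0)

# Classical Kolmogorov-Smirnov test against the fitted distribution.
# Caveat: estimating parameters from the same data makes this p-value
# conservative; a parametric bootstrap would give proper critical values.
ks = stats.kstest(data, lambda t: stats.nakagami.cdf(t, nu, loc, scale))
```

The fitted-parameter caveat is one reason specialized tests, such as the smooth tests proposed here, can outperform off-the-shelf Kolmogorov-Smirnov comparisons.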

17.
This article deals with parameter estimation in the Cox proportional hazards model when covariates are measured with error. We consider both the classical additive measurement error model and a more general model which represents the mis-measured version of the covariate as an arbitrary linear function of the true covariate plus random noise. Only moment conditions are imposed on the distributions of the covariates and measurement error. Under the assumption that the covariates are measured precisely for a validation set, we develop a class of estimating equations for the vector-valued regression parameter by correcting the partial likelihood score function. The resultant estimators are proven to be consistent and asymptotically normal with easily estimated variances. Furthermore, a corrected version of the Breslow estimator for the cumulative hazard function is developed, which is shown to be uniformly consistent and, upon proper normalization, converges weakly to a zero-mean Gaussian process. Simulation studies indicate that the asymptotic approximations work well for practical sample sizes. The situation in which replicate measurements (instead of a validation set) are available is also studied.

18.
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and observed survivor functions and the true and observed hazard functions as functions of the exposure, one can assess the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error situation in the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see a substantial bias in the presence of small measurement error and in the rare disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. However, this effect is more pronounced for multiplicative measurement error.

19.
We propose a thresholding generalized method of moments (GMM) estimator for misspecified time series moment condition models. This estimator has the following oracle property: its asymptotic behavior is the same as that of an efficient GMM estimator obtained under a priori knowledge of the true model. We propose data-adaptive selection methods for the thresholding parameter using multiple testing procedures. We determine the limiting null distributions of classical parameter tests and show the consistency of the corresponding block-bootstrap tests used in conjunction with thresholding GMM inference. We present the results of a simulation study for a misspecified instrumental variable regression model and for a vector autoregressive model with measurement error, and we illustrate the proposed methodology with an application to a real-world dataset.

20.
Summary. In many biomedical studies, covariates are subject to measurement error. Although it is well known that regression coefficient estimators can be substantially biased if the measurement error is not accommodated, there has been little study of the effect of covariate measurement error on the estimation of the dependence between bivariate failure times. We show that the dependence parameter estimator in the Clayton–Oakes model can be considerably biased if the measurement error in the covariate is not accommodated. In contrast with the typical bias towards the null for marginal regression coefficients, the dependence parameter can be biased in either direction. We introduce a bias reduction technique for the bivariate survival function in copula models, assuming an additive measurement error model and replicated measurements for the covariates, and we study the large and small sample properties of the proposed dependence parameter estimator.
