Similar documents (20 results)
1.
Nested error linear regression models using survey weights have been studied in small area estimation to obtain efficient model‐based and design‐consistent estimators of small area means. The covariates in these nested error linear regression models are not subject to measurement errors. In practical applications, however, there are many situations in which the covariates are subject to measurement errors. In this paper, we develop a nested error linear regression model with an area‐level covariate subject to functional measurement error. In particular, we propose a pseudo‐empirical Bayes (PEB) predictor to estimate small area means. This predictor borrows strength across areas through the model and makes use of the survey weights to preserve the design consistency as the area sample size increases. We also employ a jackknife method to estimate the mean squared prediction error (MSPE) of the PEB predictor. Finally, we report the results of a simulation study on the performance of our PEB predictor and associated jackknife MSPE estimator.

2.
We propose a class of Bayesian semiparametric mixed-effects models; its distinctive feature is the randomness of the grouping of observations, which can be inferred from the data. The model can be viewed under a more natural perspective, as a Bayesian semiparametric regression model on the log-scale; hence, in the original scale, the error is a mixture of Weibull densities mixed on both parameters by a normalized generalized gamma random measure, encompassing the Dirichlet process. As an estimate of the posterior distribution of the clustering of the random-effects parameters, we consider the partition minimizing the posterior expectation of a suitable class of loss functions. As an illustrative application of our model, we consider a Kevlar fibre lifetime dataset (with censoring). We implement an MCMC scheme, obtaining posterior credibility intervals for the predictive distributions and for the quantiles of the failure times under different stress levels. Compared to a previous parametric Bayesian analysis, we obtain narrower credibility intervals and a better fit to the data. We find three main clusters among the random-effects parameters, in accordance with a previous frequentist analysis.
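The normalized generalized gamma random measure above generalizes the Dirichlet process; in the Dirichlet-process special case, the mixing weights are commonly generated by a truncated stick-breaking construction. A minimal sketch of that construction (NumPy; the function name and truncation level are illustrative, and this covers only the DP special case, not the full model):

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking weights for a Dirichlet process
    with concentration parameter alpha."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    # Length of stick remaining before each break.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining
    # Fold the leftover mass into the last atom so the weights sum to one.
    weights[-1] += 1.0 - weights.sum()
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=50, rng=np.random.default_rng(0))
```

Larger `alpha` spreads mass over more atoms; smaller `alpha` concentrates it on a few, which is what drives the data-inferred clustering of random effects.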

3.
Semiparametric Bayesian models are nowadays a popular tool in event history analysis. An important area of research concerns the investigation of frequentist properties of posterior inference. In this paper, we propose novel semiparametric Bayesian models for the analysis of competing risks data and investigate the Bernstein–von Mises theorem for differentiable functionals of model parameters. The model is specified by expressing the cause-specific hazard as the product of the conditional probability of a failure type and the overall hazard rate. We take the conditional probability as a smooth function of time and leave the cumulative overall hazard unspecified. A prior distribution is defined on the joint parameter space, which includes a beta process prior for the cumulative overall hazard. We first develop the large-sample properties of maximum likelihood estimators by giving simple sufficient conditions for them to hold. Then, we show that, under the chosen priors, the posterior distribution for any differentiable functional of interest is asymptotically equivalent to the sampling distribution derived from maximum likelihood estimation. A simulation study is provided to illustrate the coverage properties of credible intervals on cumulative incidence functions.

4.
This paper studies Cox's proportional hazards model under covariate measurement error. Nakamura's [Biometrika 77 (1990) 127] methodology of corrected log-likelihood will be applied to the so-called Breslow likelihood, which is, in the absence of measurement error, equivalent to partial likelihood. For a general error model with possibly heteroscedastic and non-normal additive measurement error, corrected estimators of the regression parameter as well as of the baseline hazard rate are obtained. The estimators proposed by Nakamura [Biometrics 48 (1992) 829], Kong et al. [Scand. J. Statist. 25 (1998) 573] and Kong & Gu [Statistica Sinica 9 (1999) 953] are re-established in the special cases considered there. This sheds new light on these estimators and justifies them as exact corrected score estimators. Finally, the method will be extended to some variants of the Cox model.

5.
Bayesian inclusion probabilities have become a popular tool for variable assessment. From a frequentist perspective, these probabilities are often difficult to evaluate because Type I error rates are typically not considered, nor is the power of the methods explored. This paper considers how a frequentist may evaluate Bayesian inclusion probabilities for screening predictors. This evaluation looks at both unrestricted and restricted model spaces and develops a framework in which a frequentist can use inclusion probabilities while preserving Type I error rates. Furthermore, this framework is applied to an analysis of Arabidopsis thaliana data to determine quantitative trait loci associated with cotyledon opening angle.

6.
In randomized clinical trials, we are often concerned with comparing two-sample survival data. Although the log-rank test is usually suitable for this purpose, it may result in substantial power loss when the two groups have nonproportional hazards. In a more general class of survival models of Yang and Prentice (Biometrika 92:1–17, 2005), which includes the log-rank test as a special case, we improve model efficiency by incorporating auxiliary covariates that are correlated with the survival times. In a model-free form, we augment the estimating equation with auxiliary covariates, and establish the efficiency improvement using the semiparametric theories in Zhang et al. (Biometrics 64:707–715, 2008) and Lu and Tsiatis (Biometrics 95:674–679, 2008). Under minimal assumptions, our approach produces an unbiased, asymptotically normal estimator with additional efficiency gain. Simulation studies and an application to a leukemia study show the satisfactory performance of the proposed method.
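For reference, the two-sample log-rank statistic that serves as the baseline here can be computed directly from the risk sets at each event time. A self-contained sketch (NumPy; the function name is illustrative, and no stratification or covariate augmentation is included):

```python
import numpy as np

def logrank_statistic(time, event, group):
    """Two-sample log-rank statistic; approximately chi-squared with
    1 degree of freedom under the null of equal hazards.
    time: observed times; event: 1 = event, 0 = censored; group: 0/1 labels."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp = 0.0
    variance = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                          # at risk just before t
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            variance += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp ** 2 / variance

# Two clearly separated groups: all group-0 failures precede group-1 failures.
stat = logrank_statistic(time=list(range(1, 11)), event=[1] * 10,
                         group=[0] * 5 + [1] * 5)
```

The statistic is symmetric in the group labels, and values above 3.84 are significant at the 5% level against the chi-squared(1) reference.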

7.
In this note the problem of nonparametric regression function estimation in a random design regression model with Gaussian errors is considered from the Bayesian perspective. It is assumed that the regression function belongs to a class of functions with a known degree of smoothness. A prior distribution on the given class can be induced by a prior on the coefficients in a series expansion of the regression function through an orthonormal system. The rate of convergence of the resulting posterior distribution is employed to provide a measure of the accuracy of the Bayesian estimation procedure defined by the posterior expected regression function. We show that the Bayes’ estimator achieves the optimal minimax rate of convergence under mean integrated squared error over the involved class of regression functions, thus being comparable to other popular frequentist regression estimators.

8.
Covariate adjusted regression (CAR) is a recently proposed adjustment method for regression analysis where both the response and predictors are not directly observed [Şentürk, D., Müller, H.G., 2005. Covariate adjusted regression. Biometrika 92, 75–89]. The available data have been distorted by unknown functions of an observable confounding covariate. CAR provides consistent estimators for the coefficients of the regression between the variables of interest, adjusted for the confounder. We develop a broader class of partial covariate adjusted regression (PCAR) models to accommodate both distorted and undistorted (adjusted/unadjusted) predictors. The PCAR model allows for unadjusted predictors, such as age, gender and demographic variables, which are common in the analysis of biomedical and epidemiological data. The available estimation and inference procedures for CAR are shown to be invalid for the proposed PCAR model. We propose new estimators and develop new inference tools for the more general PCAR setting. In particular, we establish the asymptotic normality of the proposed estimators and propose consistent estimators of their asymptotic variances. Finite sample properties of the proposed estimators are investigated using simulation studies and the method is also illustrated with a Pima Indians diabetes data set.

9.
This article addresses the problem of testing whether the vectors of regression coefficients are equal for two independent normal regression models when the error variances are unknown. This problem poses severe difficulties both to the frequentist and Bayesian approaches to statistical inference. In the former approach, normal hypothesis testing theory does not apply because of the unrelated variances. In the latter, the prior distributions typically used for the parameters are improper and hence the Bayes factor-based solution cannot be used. We propose a Bayesian solution to this problem in which no subjective input is considered. We first generate “objective” proper prior distributions (intrinsic priors) for which the Bayes factor and model posterior probabilities are well defined. The posterior probability of each model is used as a model selection tool. This consistent procedure of testing hypotheses is compared with some of the frequentist approximate tests proposed in the literature.

10.
A flexible Bayesian semiparametric accelerated failure time (AFT) model is proposed for analyzing arbitrarily censored survival data with covariates subject to measurement error. Specifically, the baseline error distribution in the AFT model is nonparametrically modeled as a Dirichlet process mixture of normals. Classical measurement error models are imposed for covariates subject to measurement error. An efficient and easy-to-implement Gibbs sampler, based on the stick-breaking formulation of the Dirichlet process combined with the techniques of retrospective and slice sampling, is developed for the posterior calculation. An extensive simulation study is conducted to illustrate the advantages of our approach.

11.
The Jeffreys-rule prior and the marginal independence Jeffreys prior were recently proposed in Fonseca et al. [Objective Bayesian analysis for the Student-t regression model, Biometrika 95 (2008), pp. 325–333] as objective priors for the Student-t regression model. The authors showed that the priors provide proper posterior distributions and perform favourably in parameter estimation. Motivated by a practical financial risk management application, we compare the performance of the two Jeffreys priors with other priors proposed in the literature in a problem of estimating high quantiles for the Student-t model with unknown degrees of freedom. Through an asymptotic analysis and a simulation study, we show that both Jeffreys priors perform better when a specific quantile of the Bayesian predictive distribution is used to approximate the true quantile.

12.
In this work, a simulation study is conducted to evaluate the performance of Bayesian estimators for the log-linear exponential regression model under different levels of censoring and degrees of collinearity between two covariates. The diffuse normal, independent Student-t and multivariate Student-t distributions are considered as prior distributions, and the Metropolis algorithm is implemented to draw from the posterior distributions. The results are compared with maximum likelihood estimators in terms of mean squared error and the coverage and length of the credible and confidence intervals.
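A minimal illustration of the sampling step, assuming the simplest intercept-only case of a log-linear exponential model (lifetime rate exp(beta)) with a diffuse normal prior; the tuning constants and names below are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated uncensored exponential lifetimes with log-rate beta_true.
beta_true = 0.5
y = rng.exponential(scale=np.exp(-beta_true), size=200)

def log_posterior(beta):
    # Exponential log-likelihood with rate exp(beta), plus a diffuse
    # N(0, 10^2) prior on beta.
    return len(y) * beta - np.exp(beta) * y.sum() - 0.5 * beta ** 2 / 100.0

# Random-walk Metropolis.
beta, draws = 0.0, []
for _ in range(5000):
    proposal = beta + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(beta):
        beta = proposal            # accept the move
    draws.append(beta)
draws = np.array(draws[1000:])     # discard burn-in
```

With covariates, censoring, and Student-t priors the accept/reject step is identical; only `log_posterior` changes.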

13.
A new class of Bayesian estimators for a proportion in multistage binomial designs is considered. Priors belong to the beta-J distribution family, which is derived from the Fisher information associated with the design. Transposing the beta parameters of the Haldane and uniform priors in fixed binomial experiments into the beta-J distribution yields bias-corrected versions of these priors in multistage designs. We show that the estimator of the posterior mean based on the corrected Haldane prior and the estimator of the posterior mode based on the corrected uniform prior have good frequentist properties. An easy-to-use approximation of the estimator of the posterior mode is provided. The new Bayesian estimators are compared to Whitehead's and the uniformly minimum variance estimators through several multistage designs. Finally, the bias of the estimator of the posterior mode is derived for a particular case.
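For orientation only: in a fixed (single-stage) binomial experiment, a Beta(a, b) prior and x successes in n trials give a Beta(a + x, b + n − x) posterior, so the posterior mean and mode are available in closed form. A sketch of that elementary special case (the multistage beta-J bias corrections of the paper are not reproduced here):

```python
def beta_posterior_mean(x, n, a, b):
    # Posterior is Beta(a + x, b + n - x); this is its mean.
    return (a + x) / (a + b + n)

def beta_posterior_mode(x, n, a, b):
    # Posterior mode; requires a + x > 1 and b + n - x > 1.
    return (a + x - 1) / (a + b + n - 2)

mean_uniform = beta_posterior_mean(3, 10, 1, 1)   # uniform prior: 4/12
mode_uniform = beta_posterior_mode(3, 10, 1, 1)   # uniform prior: 3/10, the MLE
```

Under the uniform prior the posterior mode recovers the MLE x/n; the Haldane limit a = b → 0 makes the posterior mean do the same.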

14.
Principal component regression uses principal components (PCs) as regressors. It is particularly useful in prediction settings with high-dimensional covariates. The existing literature on Bayesian approaches is relatively sparse. We introduce a Bayesian approach that is robust to outliers in both the dependent variable and the covariates. Outliers can be thought of as observations that are not in line with the general trend. The proposed approach automatically penalises these observations so that their impact on the posterior gradually vanishes as they move further and further away from the general trend, corresponding to a concept in Bayesian statistics called whole robustness. The predictions produced are thus consistent with the bulk of the data. The approach also exploits the geometry of PCs to efficiently identify those that are significant. Individual predictions obtained from the resulting models are consolidated according to model-averaging mechanisms to account for model uncertainty. The approach is evaluated on real data and compared to its nonrobust Bayesian counterpart, the traditional frequentist approach and a commonly employed robust frequentist method. Detailed guidelines to automate the entire statistical procedure are provided. All required code is made available; see arXiv:1711.06341.
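The non-robust core of principal component regression (without the robustness machinery or model averaging described above) fits in a few lines of NumPy. A sketch with illustrative names, regressing the response on the first k PC scores and mapping the coefficients back to the original covariates:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k PCs of X
    and return (intercept, coefficients) on the original covariate scale."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    Vt = np.linalg.svd(Xc, full_matrices=False)[2]  # principal directions
    scores = Xc @ Vt[:k].T                          # n x k PC scores
    gamma = np.linalg.lstsq(scores, yc, rcond=None)[0]
    beta = Vt[:k].T @ gamma                         # back-transform to X scale
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)
intercept, beta = pcr_fit(X, y, k=5)  # keeping all components reduces to OLS
```

Choosing k < p is where the dimension reduction happens; the paper's contribution is making the choice of significant PCs and the fit itself robust to outliers.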

15.
In this paper, we propose a Bayesian two-stage design with a changing hypothesis test that bridges a single-arm study and a two-arm randomized trial within one phase II clinical trial, based on continuous rather than binary endpoints. The design is calibrated with respect to both frequentist and Bayesian error rates. The proposed design minimizes the Bayesian expected sample size when the new candidate has low or high efficacy, subject to constraints on error rates from both frequentist and Bayesian perspectives. Tables of designs for various combinations of design parameters are also provided.

16.
We propose a new class of semiparametric estimators for proportional hazards models in the presence of measurement error in the covariates, where the baseline hazard function, the hazard function for the censoring time, and the distribution of the true covariates are considered as unknown infinite dimensional parameters. We estimate the model components by solving estimating equations based on the semiparametric efficient scores under a sequence of restricted models where the logarithms of the hazard functions are approximated by reduced rank regression splines. The proposed estimators are locally efficient in the sense that the estimators are semiparametrically efficient if the distribution of the error‐prone covariates is specified correctly and are still consistent and asymptotically normal if the distribution is misspecified. Our simulation studies show that the proposed estimators have smaller biases and variances than competing methods. We further illustrate the new method with a real application in an HIV clinical trial.

17.
We formulate closed-form Bayesian estimators for two complementary Poisson rate parameters using double sampling with data subject to misclassification and error-free data. We also derive closed-form Bayesian estimators for two misclassification parameters in the modified Poisson model we assume. We use our results to determine credible sets for the rate and misclassification parameters. Additionally, we use MCMC methods to determine Bayesian estimators for three or more rate parameters and the misclassification parameters. We also perform a limited Monte Carlo simulation to examine the characteristics of these estimators. We demonstrate the efficacy of the new Bayesian estimators and highest posterior density regions with examples using two real data sets.
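The closed forms in the error-free case rest on gamma-Poisson conjugacy: a Gamma(a0, b0) prior (shape-rate parameterization) combined with Poisson counts over a total exposure t yields a Gamma(a0 + Σy, b0 + t) posterior. A sketch of that conjugate update (illustrative names; the misclassification layer of the paper is omitted):

```python
def gamma_poisson_posterior(counts, exposure, a0, b0):
    """Conjugate update for a Poisson rate: Gamma(a0, b0) prior (shape, rate)
    and observed counts over a total exposure. Returns the posterior shape,
    posterior rate, and posterior mean of the rate parameter."""
    a_post = a0 + sum(counts)
    b_post = b0 + exposure
    return a_post, b_post, a_post / b_post

# Three periods of unit exposure with a Jeffreys-type Gamma(0.5, 0.5) prior.
a_post, b_post, rate_hat = gamma_poisson_posterior([3, 5, 4], exposure=3.0,
                                                   a0=0.5, b0=0.5)
```

Credible sets for the rate then come from quantiles of the Gamma(a_post, b_post) posterior.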

18.
There has been extensive interest in discussing inference methods for survival data when some covariates are subject to measurement error. It is known that standard inferential procedures produce biased estimation if measurement error is not taken into account. With the Cox proportional hazards model a number of methods have been proposed to correct bias induced by measurement error, where the attention centers on utilizing the partial likelihood function. It is also of interest to understand the impact on estimation of the baseline hazard function in settings with mismeasured covariates. In this paper we employ a weakly parametric form for the baseline hazard function and propose simple unbiased estimating functions for estimation of parameters. The proposed method is easy to implement and it reveals the connection between the naive method ignoring measurement error and the corrected method with measurement error accounted for. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error in covariates. As an illustration we apply the proposed methods to analyze a data set arising from the Busselton Health Study [Knuiman, M.W., Cullen, K.J., Bulsara, M.K., Welborn, T.A., Hobbs, M.S.T., 1994. Mortality trends, 1965 to 1989, in Busselton, the site of repeated health surveys and interventions. Austral. J. Public Health 18, 129–135].

19.
Testing of a composite null hypothesis versus a composite alternative is considered when both have a related invariance structure. The goal is to develop conditional frequentist tests that allow the reporting of data-dependent error probabilities, error probabilities that have a strict frequentist interpretation and that reflect the actual amount of evidence in the data. The resulting tests are also seen to be Bayesian tests, in the strong sense that the reported frequentist error probabilities are also the posterior probabilities of the hypotheses under default choices of the prior distribution. The new procedures are illustrated in a variety of applications to model selection and multivariate hypothesis testing.

20.
This paper focuses on Bayesian shrinkage methods for covariance matrix estimation. We examine posterior properties and frequentist risks of Bayesian estimators based on new hierarchical inverse-Wishart priors. More precisely, we give the conditions for the existence of the posterior distributions. Advantages in terms of numerical simulations of posteriors are shown. A simulation study illustrates the performance of the estimation procedures under three loss functions for relevant sample sizes and various covariance structures.
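As a simple frequentist analogue of the shrinkage idea (not the hierarchical inverse-Wishart estimators studied in the paper; names illustrative), one can combine the sample covariance with a scaled-identity target:

```python
import numpy as np

def shrink_cov(X, lam):
    """Convex combination of the sample covariance S with the target
    (tr(S)/p) * I, using shrinkage weight lam in [0, 1]."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    return (1.0 - lam) * S + lam * target

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
S_shrunk = shrink_cov(X, lam=0.3)  # symmetric, positive definite
```

Pulling the eigenvalues toward their average in this way stabilizes estimation when the sample size is modest relative to the dimension, which is the regime the Bayesian hierarchical priors target.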
