Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, we study a working sub-model of a partially linear model determined by variable selection. Such a sub-model is more feasible and practical in application, but usually biased. As a result, the common parameter estimators are inconsistent and the corresponding confidence regions are invalid. To deal with the problems caused by the model bias, a nonparametric adjustment procedure is provided to construct a partially unbiased sub-model. It is proved that both the adjusted restricted-model estimator and the adjusted preliminary test estimator are partially consistent, which means that they are consistent whenever the sample falls into certain given subspaces. Fortunately, such subspaces are large enough in a certain sense, so this partial consistency is close to global consistency. Furthermore, we build a valid confidence region for the parameters in the sub-model via the corresponding empirical likelihood.

2.
It is known that the profile empirical likelihood method based on estimating equations is computationally intensive when the number of nuisance parameters is large. Recently, Li, Peng, & Qi (2011) proposed a jackknife empirical likelihood method for constructing confidence regions for the parameters of interest by estimating the nuisance parameters separately. However, when the estimators for the nuisance parameters have no explicit formula, the computation of the jackknife empirical likelihood method is still intensive. In this paper, an approximate jackknife empirical likelihood method is proposed to reduce the computation in the jackknife empirical likelihood method when the nuisance parameters cannot be estimated explicitly. A simulation study confirms the advantage of the new method. The Canadian Journal of Statistics 40: 110–123; 2012 © 2012 Statistical Society of Canada
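For readers unfamiliar with the basic construction, the following is a minimal hedged sketch (my own Python illustration, not the authors' code) of jackknife empirical likelihood for a single parameter: jackknife pseudo-values of a plug-in estimator (here the sample variance) are formed, and ordinary empirical likelihood for a mean is then applied to those pseudo-values. The helper names jackknife_pseudo_values and el_log_ratio are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def jackknife_pseudo_values(x, stat):
    """Jackknife pseudo-values V_i = n*T(x) - (n-1)*T(x without the i-th point)."""
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * loo

def el_log_ratio(z):
    """-2 log empirical likelihood ratio for H0: E[z_i] = 0."""
    n = len(z)
    if z.min() >= 0 or z.max() <= 0:      # 0 outside the convex hull: no valid weights
        return np.inf
    # the Lagrange multiplier must keep every weight positive: 1 + lam*z_i > 1/n
    lo = (1.0 / n - 1.0) / z.max() + 1e-10
    hi = (1.0 / n - 1.0) / z.min() - 1e-10
    # -2 log R = 2 * max_lam sum(log(1 + lam*z_i)); maximize by minimizing the negative
    obj = lambda lam: -2.0 * np.sum(np.log1p(lam * z))
    res = minimize_scalar(obj, bounds=(lo, hi), method="bounded")
    return -res.fun

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=80)

# Jackknife EL test of H0: Var(X) = 4 (the true value here);
# the statistic is asymptotically chi-square with 1 degree of freedom.
pseudo = jackknife_pseudo_values(x, lambda a: a.var(ddof=1))
print("jackknife EL statistic:", el_log_ratio(pseudo - 4.0))
```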

3.
In this study, the adjustment of the profile likelihood function for the parameter of interest in the presence of many nuisance parameters is investigated for survival regression models. Our objective is to extend Barndorff-Nielsen's technique to Weibull regression models for estimating the shape parameter in the presence of many nuisance and regression parameters. We conducted Monte Carlo simulation studies and a real data analysis, all of which demonstrate that the modified profile likelihood estimators outperform the profile likelihood estimators in terms of three comparison criteria: mean squared error, bias, and standard error.

4.
This paper studies the representation and large-sample consistency of non-parametric maximum likelihood estimators (NPMLEs) of an unknown baseline continuous cumulative-hazard-type function and a parameter of group survival difference, based on right-censored two-sample survival data with the marginal survival function assumed to follow a transformation model, a slight generalization of the class of frailty survival regression models. The paper's main theoretical results are the existence and unique a.s. limit, characterized variationally, for large data samples of the NPMLE of the baseline nuisance function in an appropriately defined neighbourhood of the true function when the group difference parameter is fixed, leading to consistency of the NPMLE when the difference parameter is fixed at a consistent estimator of its true value. The joint NPMLE is also shown to be consistent. An algorithm for computing it numerically, based directly on likelihood equations in place of the expectation-maximization (EM) algorithm, is illustrated with real data.

5.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed population capture–recapture model which assumes that the capture probabilities vary only with the sampling occasions. In this model, the capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. In order to eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform and Jeffreys) for the nuisance parameters, resulting in an integrated likelihood function that depends only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained, and it is proved that they are always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set, and their frequentist properties are assessed by means of a simulation study.
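To illustrate the integration step, here is a hedged numeric sketch (my own, with toy data, not the paper's analysis) for the classical M_t closed-population model: each occasion-specific capture probability is integrated against a Beta(a, b) weight (uniform or Jeffreys), which turns the likelihood into a product of Beta functions times N!/(N − r)!, and the population size N is then estimated by maximizing over integers.

```python
import numpy as np
from scipy.special import gammaln, betaln

def integrated_loglik(N, n_j, r, a=1.0, b=1.0):
    """Log integrated likelihood of population size N in the M_t model, with each
    occasion capture probability integrated against a Beta(a, b) weight
    (a = b = 1: uniform; a = b = 0.5: Jeffreys).
    n_j: captures per occasion, r: number of distinct individuals seen."""
    if N < r:
        return -np.inf
    out = gammaln(N + 1) - gammaln(N - r + 1)           # log of N! / (N - r)!
    for n in n_j:
        out += betaln(n + a, N - n + b) - betaln(a, b)  # Beta-function factor per occasion
    return out

# toy data: 5 occasions, captures per occasion, 57 distinct individuals seen
n_j = np.array([30, 22, 29, 26, 31])
r = 57

Ns = np.arange(r, 500)
ll_uniform = np.array([integrated_loglik(N, n_j, r) for N in Ns])
ll_jeffreys = np.array([integrated_loglik(N, n_j, r, a=0.5, b=0.5) for N in Ns])
print("estimate of N (uniform weight):", Ns[np.argmax(ll_uniform)])
print("estimate of N (Jeffreys weight):", Ns[np.argmax(ll_jeffreys)])
```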

6.
Inference for a scalar interest parameter in the presence of nuisance parameters is considered in terms of the conditional maximum-likelihood estimator developed by Cox and Reid (1987). Parameter orthogonality is assumed throughout. The estimator is analyzed by means of stochastic asymptotic expansions in three cases: a scalar nuisance parameter, m nuisance parameters from m independent samples, and a vector nuisance parameter. In each case, the expansion for the conditional maximum-likelihood estimator is compared with that for the usual maximum-likelihood estimator. The means and variances are also compared. In each of the cases, the bias of the conditional maximum-likelihood estimator is unaffected by the nuisance parameter to first order. This is not so for the maximum-likelihood estimator. The assumption of parameter orthogonality is crucial in attaining this result. Regardless of parametrization, the difference between the two estimators is of first order and is deterministic to that order.
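For reference, the approximate conditional (adjusted profile) log-likelihood of Cox and Reid (1987), on which the conditional maximum-likelihood estimator is based, can be written in the orthogonal parametrization as follows (notation is mine: ψ is the scalar interest parameter, λ the nuisance parameter, λ̂_ψ its constrained MLE for fixed ψ, and j_{λλ} the nuisance block of the observed information):

```latex
\[
\ell_{\mathrm{CR}}(\psi)
  \;=\; \ell\bigl(\psi, \hat\lambda_{\psi}\bigr)
  \;-\; \tfrac{1}{2}\,\log \det j_{\lambda\lambda}\bigl(\psi, \hat\lambda_{\psi}\bigr),
\qquad
\hat\psi_{\mathrm{CR}} \;=\; \arg\max_{\psi}\ \ell_{\mathrm{CR}}(\psi).
\]
```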

7.
Effective implementation of likelihood inference in models for high-dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances, tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper, a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi-squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.

8.
The hybrid bootstrap uses resampling ideas to extend the duality approach to interval estimation for a parameter of interest when there are nuisance parameters. The confidence region constructed by the hybrid bootstrap may perform much better than the ordinary bootstrap region in a situation where the data provide substantial information about the nuisance parameter but limited information about the parameter of interest. We apply this method to estimate the post-change mean after a change is detected by a stopping procedure in a sequence of independent normal variables. Since distribution theory in change-point problems is generally a challenge, we use bootstrap simulation to find empirical distributions of test statistics and calculate critical thresholds. Both likelihood ratio and Bayesian test statistics are considered for setting confidence regions for post-change means in the normal model. In the simulation studies, the performance of the hybrid regions is compared with that of ordinary bootstrap regions in terms of the widths and coverage probabilities of confidence intervals.

9.
Assessing dose-response from flexible-dose clinical trials (e.g., titration or dose-escalation studies) is challenging and often problematic due to the selection bias caused by 'titration-to-response'. We investigate the performance of a dynamic linear mixed-effects (DLME) model and a marginal structural model (MSM) in evaluating dose-response from flexible-dose titration clinical trials via simulations. The simulation results demonstrated that DLME models with previous exposure as a time-varying covariate may provide an unbiased and efficient estimator for recovering the exposure-response relationship from flexible-dose clinical trials. Although the MSM models with independent and exchangeable working correlations appeared able to recover the right direction of the dose-response relationship, they tended to over-correct the selection bias and overestimated the underlying true dose-response. The MSM estimators were also associated with large variability in the parameter estimates. Therefore, DLME may be an appropriate modeling option for identifying dose-response when data from fixed-dose studies are absent or a fixed-dose design is unethical to implement.

10.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding the elicitation on the nuisance parameters and the computation of multidimensional integrals. Moreover, it is particularly useful when it is difficult, or even impractical, to write the full likelihood function.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present nontrivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models, and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.

11.
In this article, we consider variable selection and estimation for high-dimensional generalized linear models when the number of parameters diverges with the sample size. We propose a penalized quasi-likelihood function with the bridge penalty. The consistency and the oracle property of the quasi-likelihood bridge estimators are obtained. Some simulations and a real data analysis are given to illustrate the performance of the proposed method.
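Bridge penalization replaces the usual L1 or L2 penalty with the sum of |β_j|^γ for some 0 < γ < 1. The paper's own algorithm is not reproduced here; the snippet below is only a hedged illustration that fits a bridge-penalized logistic regression by direct numerical minimization of the penalized negative log-(quasi-)likelihood, using a small ε-smoothing of |β|^γ so that a generic optimizer can be applied. The data, tuning value, and helper names are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_logistic(beta, X, y):
    """Negative log-likelihood of a logistic regression (no intercept, for brevity)."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)   # log(1 + exp(eta)) computed stably

def bridge_objective(beta, X, y, lam, gamma=0.5, eps=1e-8):
    """Penalized objective: -loglik + lam * sum((beta_j^2 + eps)^(gamma/2)).
    The eps-smoothing avoids the non-differentiability of |beta|^gamma at zero."""
    penalty = lam * np.sum((beta ** 2 + eps) ** (gamma / 2.0))
    return neg_loglik_logistic(beta, X, y) + penalty

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.8]                 # only the first three predictors matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

res = minimize(bridge_objective, x0=np.zeros(p), args=(X, y, 5.0, 0.5),
               method="L-BFGS-B")
print(np.round(res.x, 3))   # estimates of the noise coefficients shrink toward zero
```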

12.
Fan J, Feng Y, Niu YS. Annals of Statistics 2010, 38(5): 2723–2750
Estimation of genewise variance arises from two important applications in microarray data analysis: selecting significantly differentially expressed genes and validation tests for normalization of microarray data. We approach the problem by introducing a two-way nonparametric model, which is an extension of the famous Neyman-Scott model and is applicable beyond microarray data. The problem itself poses interesting challenges because the number of nuisance parameters is proportional to the sample size and it is not obvious how the variance function can be estimated when measurements are correlated. In such a high-dimensional nonparametric problem, we propose two novel nonparametric estimators of the genewise variance function and semiparametric estimators of the measurement correlation, obtained via solving a system of nonlinear equations. Their asymptotic normality is established. The finite-sample properties are demonstrated by simulation studies. The estimators also improve the power of the tests for detecting statistically differentially expressed genes. The methodology is illustrated by data from the MicroArray Quality Control (MAQC) project.

13.
For the three-parameter gamma distribution, it is known that the method of moments as well as the maximum likelihood method have difficulties such as non-existence of estimates in some regions of the parameter space, convergence problems, and large variability. For this reason, in this article, we propose a method of estimation based on a transformation involving order statistics from the sample. In this method, the estimates always exist uniquely over the entire parameter space, and the estimators are also consistent over the entire parameter space. The bias and mean squared error of the estimators are examined by means of a Monte Carlo simulation study, and the empirical results show small-sample superiority in addition to the desirable large-sample properties.

14.
Approximate conditional inference is developed for the slope parameter of the linear functional model with two variables. It is shown that the model can be transformed so that the slope parameter becomes an angle and the nuisance parameters become radial distances. If the nuisance parameters are known, an exact confidence interval based on a location-type conditional distribution is available for the angle. More generally, confidence distributions are used to average the conditional distribution over the nuisance parameters, yielding an approximate conditional confidence interval that reflects the precision indicated by the data. An example is analyzed.
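To make the reparametrization concrete, here is a hedged sketch in my own notation, assuming the simplest two-variable functional model with a line through the origin (the paper's exact formulation may differ):

```latex
\[
x_i = \xi_i + \delta_i, \qquad y_i = \beta\,\xi_i + \varepsilon_i,
\qquad i = 1, \dots, n,
\]
% writing the slope as an angle, \beta = \tan\alpha, each true point becomes
\[
(\xi_i,\ \beta\xi_i) \;=\; r_i(\cos\alpha,\ \sin\alpha),
\qquad r_i = \xi_i/\cos\alpha,
\]
% so the interest parameter is the angle \alpha, and the incidental nuisance
% parameters \xi_1, \dots, \xi_n are replaced by the radial distances r_1, \dots, r_n.
```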

15.
In complete samples from a continuous cumulative distribution with unknown parameters, it is known that various pivotal functions can be constructed by appealing to the probability integral transform. A pivotal function (or simply pivot) is a function of the data and parameters that has the property that its distribution is free of any unknown parameters. Pivotal functions play a key role in constructing confidence intervals and hypothesis tests. If there are nuisance parameters in addition to a parameter of interest, and consistent estimators of the nuisance parameters are available, then substituting them into the pivot can preserve the pivot property while altering the pivot distribution, or may instead create a function that is approximately a pivot in the sense that its asymptotic distribution is free of unknown parameters. In this latter case, bootstrapping has been shown to be an effective way of estimating its distribution accurately and constructing confidence intervals that have more accurate coverage probability in finite samples than those based on the asymptotic pivot distribution. In this article, one particular pivotal function based on the probability integral transform is considered when nuisance parameters are estimated, and the estimation of its distribution using parametric bootstrapping is examined. Applications to finding confidence intervals are emphasized. This material should be of interest to instructors of upper division and beginning graduate courses in mathematical statistics who wish to integrate bootstrapping into their lessons on interval estimation and the use of pivotal functions.

[Received November 2014. Revised August 2015.]
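As a hedged classroom-style illustration (my own construction, not taken from the article): for a normal sample with unknown mean (nuisance) and standard deviation σ (interest), plugging the sample mean into the probability-integral-transform statistic T(σ) = −2 Σ_i log Φ((x_i − x̄)/σ) yields a quantity whose distribution is no longer χ² with 2n degrees of freedom but is still free of the unknown parameters; a parametric bootstrap estimates that distribution, and a confidence interval for σ is obtained by inverting the bootstrap quantiles over a grid. The function name pit_statistic is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def pit_statistic(x, sigma):
    """T(sigma) = -2 * sum_i log Phi((x_i - xbar) / sigma).
    The sample mean replaces the unknown nuisance mean."""
    z = (x - x.mean()) / sigma
    return -2.0 * np.sum(norm.logcdf(z))

rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=3.0, size=100)
n = len(x)
sigma_hat = x.std(ddof=1)

# Parametric bootstrap of the statistic's distribution: simulate from the fitted
# normal model and evaluate T at the bootstrap-world "true" sigma each time.
B = 5000
boot = np.array([pit_statistic(rng.normal(x.mean(), sigma_hat, size=n), sigma_hat)
                 for _ in range(B)])
q_lo, q_hi = np.quantile(boot, [0.025, 0.975])

# Invert: the 95% confidence set for sigma contains every value whose observed
# statistic falls between the bootstrap quantiles.
grid = np.linspace(0.5 * sigma_hat, 2.5 * sigma_hat, 400)
inside = [s for s in grid if q_lo <= pit_statistic(x, s) <= q_hi]
print("95%% CI for sigma: (%.2f, %.2f)" % (min(inside), max(inside)))
```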

16.
In high-dimensional data analysis, feature selection becomes one means of dimension reduction, which proceeds with parameter estimation. Concerning accuracy of selection and estimation, we study nonconvex constrained and regularized likelihoods in the presence of nuisance parameters. Theoretically, we show that the constrained L0-likelihood and its computational surrogate are optimal in that they achieve feature selection consistency and sharp parameter estimation, under one necessary condition required for any method to be selection consistent and to achieve sharp parameter estimation. It permits up to exponentially many candidate features. Computationally, we develop difference convex methods to implement the computational surrogate through primal and dual subproblems. These results establish a central role of L0-constrained and regularized likelihoods in feature selection and parameter estimation involving selection. As applications of the general method and theory, we perform feature selection in linear regression and logistic regression, and estimate a precision matrix in Gaussian graphical models. In these situations, we gain new theoretical insight and obtain favorable numerical results. Finally, we discuss an application to predicting the metastasis status of breast cancer patients from their gene expression profiles.

17.
A simulation study is conducted to determine the effects of varying correlation structures on two estimation procedures used to model clustered binary data: a parametric model, the beta-binomial, and a non-parametric model, the exchangeable binary. The simulations detected bias in the estimation of the mean response parameter and the correlation parameter when assuming a parametric model. In addition, it was found that variance parameters can be severely underestimated if the correlation structure is treated strictly as a nuisance parameter.
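To make the setting concrete, here is a hedged sketch (my own, not the study's simulation code) that generates clustered binary data from a beta-binomial model and recovers the mean response and intra-cluster correlation by simple moment calculations; comparing the cluster-level variance with the naive binomial variance shows why ignoring the correlation understates variability.

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta-binomial clusters: p_i ~ Beta(a, b), then y_i | p_i ~ Binomial(m, p_i).
a, b = 2.0, 6.0
n_clusters, m = 500, 12
p_i = rng.beta(a, b, size=n_clusters)
y = rng.binomial(m, p_i)

mu_true = a / (a + b)              # mean response
rho_true = 1.0 / (a + b + 1.0)     # intra-cluster (pairwise) correlation

# Moment estimates from the cluster totals:
# Var(Y_i) = m*pi*(1-pi)*(1 + (m-1)*rho) under the beta-binomial model.
pi_hat = y.mean() / m
s2 = y.var(ddof=1)
binom_var = m * pi_hat * (1.0 - pi_hat)             # variance if responses were independent
rho_hat = (s2 - binom_var) / (binom_var * (m - 1))  # moment estimate of the correlation

print("mean response:  true %.3f, estimate %.3f" % (mu_true, pi_hat))
print("correlation:    true %.3f, estimate %.3f" % (rho_true, rho_hat))
```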

18.
One problem with the skew normal model is the difficulty of estimating the shape parameter, for which the maximum likelihood estimate may be infinite when the sample size is moderate. The existing estimators suffer from large bias even for moderate-size samples. In this article, we propose five estimators of the shape parameter for a scalar skew normal model, obtained either by a bias correction method or by solving a modified score equation. Simulation studies show that, except for the bootstrap estimator, the proposed estimators have smaller bias than those in the literature for small and moderate samples.
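The infinite-MLE problem is easy to reproduce: with known location 0 and scale 1, the skew normal log-likelihood Σ_i log{2 φ(x_i) Φ(α x_i)} is monotone increasing in α whenever every observation is positive, so the MLE of the shape α is +∞. The snippet below is a small hedged demonstration of that fact (it is not one of the article's proposed estimators).

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(4)

# Draw a small skew normal sample (shape a = 5) that happens to be all positive.
x = skewnorm.rvs(a=5, size=8, random_state=rng)
while not np.all(x > 0):
    x = skewnorm.rvs(a=5, size=8, random_state=rng)

# Profile the log-likelihood over the shape parameter (location 0 and scale 1 known).
for alpha in [0.5, 1.0, 2.0, 5.0, 10.0, 50.0]:
    ll = np.sum(skewnorm.logpdf(x, a=alpha))
    print("alpha = %6.1f   log-likelihood = %.4f" % (alpha, ll))
# Because every x_i > 0, Phi(alpha * x_i) -- and hence the log-likelihood --
# increases with alpha toward a finite supremum, so no finite MLE exists.
```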

19.
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of the parameter estimates of selected covariates, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets on primary biliary cirrhosis and multiple myeloma from the Eastern Cooperative Oncology Group.

20.
Wilks's theorem is useful for constructing confidence regions. When applying the popular empirical likelihood to data with nonignorable nonresponses, Wilks's phenomenon does not hold. This paper shows that this failure is caused by the extra estimation of the nuisance parameter in the nonignorable nonresponse propensity. Motivated by this result, we propose an adjusted empirical likelihood for which Wilks's theorem holds. Asymptotic results are presented and supplemented by simulation results on the finite-sample performance of the point estimators and confidence regions. An analysis of a data set is included for illustration.
