Similar Literature
20 similar documents found.
1.
A penalized likelihood approach to the estimation of calibration factors in positron emission tomography (PET) is considered, in particular the problem of estimating the efficiency of PET detectors. Varying efficiencies among the detectors create non-uniform performance, and failure to account for the non-uniformities would lead to streaks in the image, so efficient estimation of the non-uniformities is desirable to reduce the propagation of noise to the final image. The relevant data set is provided by a blank scan, from which a model may be derived that depends only on the sources of non-uniformity: inherent variation among the detector crystals and geometric effects. Physical considerations suggest a novel mixed inverse model with random crystal effects and smooth geometric effects. Using appropriate penalty terms, the penalized maximum likelihood estimates are derived and an efficient computational algorithm utilizing the fast Fourier transform is developed. Data-driven shrinkage and smoothing parameters are chosen to minimize an estimate of the predictive loss function. Various examples indicate that the proposed approach works well computationally and compares well with the standard method.
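The role of the fast Fourier transform in such algorithms is easy to isolate: a quadratic roughness penalty on an effect with circular structure (such as a geometric effect indexed by detector angle) is diagonalized by the discrete Fourier transform, so the penalized estimate reduces to frequency-wise shrinkage of the Fourier coefficients. Below is a minimal Python sketch of that generic device, not the authors' algorithm; the circular second-difference penalty, the fixed smoothing parameter `lam`, and the toy data are all illustrative assumptions.

```python
import numpy as np

def fft_penalized_smooth(y, lam):
    """Penalized least squares on a circular grid via the FFT.

    Solves min_f ||y - f||^2 + lam * ||D f||^2, where D is the
    circular second-difference operator; because D is diagonalized
    by the DFT, the solution is frequency-wise shrinkage.
    """
    n = len(y)
    omega = 2 * np.pi * np.arange(n) / n
    # Eigenvalues of D^T D: (2 - 2 cos(omega_k))^2.
    penalty_eig = (2 - 2 * np.cos(omega)) ** 2
    shrink = 1.0 / (1.0 + lam * penalty_eig)
    return np.real(np.fft.ifft(shrink * np.fft.fft(y)))

# Toy example: smooth a noisy periodic "geometric effect".
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
noisy = np.sin(2 * theta) + 0.3 * rng.standard_normal(360)
smooth = fft_penalized_smooth(noisy, lam=50.0)
```

Because the solve costs only a pair of FFTs, O(n log n), re-solving for many candidate smoothing parameters is cheap, which is what makes the data-driven choice of shrinkage and smoothing parameters affordable.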

2.
In this article, we propose an efficient and robust estimation method for the semiparametric mixture model that is a mixture of unknown location-shifted symmetric distributions. Our estimator is obtained by minimizing the profile Hellinger distance (MPHD) between the model and a nonparametric density estimate. We propose a simple and efficient algorithm to compute the proposed MPHD estimator. A Monte Carlo simulation study is conducted to examine the finite-sample performance of the proposed procedure and to compare it with other existing methods. Based on our empirical studies, the newly proposed procedure is very competitive with existing methods when the components are normal and performs much better when they are not. More importantly, the proposed procedure is robust when the data are contaminated with outlying observations. A real-data application is also provided to illustrate the proposed estimation procedure.
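The MPHD recipe is compact: form a nonparametric density estimate, then minimize the Hellinger distance between it and the parametric mixture. The sketch below simplifies in one important respect: it fixes the symmetric component density at the standard normal, whereas the paper profiles over an unknown symmetric density; the grid, starting values, and optimizer are likewise illustrative choices.

```python
import numpy as np
from scipy import optimize, stats

def mphd_two_component(x, grid_size=400):
    """Minimum Hellinger distance fit of p*g(x-m1) + (1-p)*g(x-m2),
    with g fixed at the standard normal density (a simplification;
    the MPHD of the paper profiles over an unknown symmetric g)."""
    kde = stats.gaussian_kde(x)                  # nonparametric target
    grid = np.linspace(x.min() - 1, x.max() + 1, grid_size)
    f_hat = kde(grid)
    dx = grid[1] - grid[0]

    def hellinger_sq(theta):
        p = 1 / (1 + np.exp(-theta[0]))          # keep p in (0, 1)
        model = (p * stats.norm.pdf(grid, theta[1])
                 + (1 - p) * stats.norm.pdf(grid, theta[2]))
        return np.sum((np.sqrt(model) - np.sqrt(f_hat)) ** 2) * dx

    start = [0.0, np.quantile(x, 0.25), np.quantile(x, 0.75)]
    res = optimize.minimize(hellinger_sq, start, method="Nelder-Mead")
    return 1 / (1 + np.exp(-res.x[0])), res.x[1], res.x[2]

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 200)])
print(mphd_two_component(x))   # roughly (0.6, -2, 2)
```

Minimizing a density-based distance rather than a likelihood is what buys the robustness: outlying observations inflate the nonparametric estimate only locally, so they perturb the fitted parameters far less than they would a likelihood.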

3.
In this paper, semiparametric methods are applied to estimate multivariate volatility functions, using a residual approach as in [J. Fan and Q. Yao, Efficient estimation of conditional variance functions in stochastic regression, Biometrika 85 (1998), pp. 645–660; F.A. Ziegelmann, Nonparametric estimation of volatility functions: The local exponential estimator, Econometric Theory 18 (2002), pp. 985–991; F.A. Ziegelmann, A local linear least-absolute-deviations estimator of volatility, Comm. Statist. Simulation Comput. 37 (2008), pp. 1543–1564], among others. Our main goal here is twofold: (1) describe and implement a number of semiparametric models, such as additive, single-index and (adaptive) functional-coefficient models, in volatility estimation, all motivated as alternatives for dealing with the curse of dimensionality present in fully nonparametric models; and (2) propose the use of a variation of the traditional cross-validation method for model choice in the class of adaptive functional-coefficient models, choosing simultaneously the bandwidth, the number of covariates in the model and the single-index smoothing variable. The modified cross-validation algorithm is able to tackle the computational burden caused by the model complexity, providing an important tool in semiparametric volatility estimation. We briefly discuss model identifiability when estimating volatility, as well as nonnegativity of the resulting estimators. Furthermore, Monte Carlo simulations for several underlying generating models are implemented and applications to real data are provided.
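The residual approach of Fan and Yao that these models build on is a two-step smoother: estimate the conditional mean, then smooth the squared residuals to estimate the conditional variance. A minimal univariate sketch with fixed Gaussian-kernel bandwidths follows; the article's semiparametric structures and its modified cross-validation are precisely what replace these fixed, hand-picked choices in higher dimensions.

```python
import numpy as np

def nw(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at the points x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def residual_volatility(x, y, grid, h_mean=0.3, h_var=0.4):
    """Fan-Yao residual approach: smooth y on x for the conditional
    mean, then smooth the squared residuals for sigma^2(x).
    Bandwidths are fixed here purely for illustration."""
    m_at_x = nw(x, x, y, h_mean)      # fitted conditional mean
    r2 = (y - m_at_x) ** 2            # squared residuals
    return nw(grid, x, r2, h_var)     # estimated conditional variance

# Toy heteroscedastic data: sd increasing in |x|.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 500)
y = 0.5 * x + np.sqrt(0.1 + 0.4 * x ** 2) * rng.standard_normal(500)
grid = np.linspace(-1.5, 1.5, 50)
sigma2_hat = residual_volatility(x, y, grid)
```

Note that the nonnegativity issue the abstract mentions is automatic here, since squared residuals are averaged with positive weights; local linear variants lose that guarantee, which is one motivation for the local exponential estimator cited above.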

4.
ABSTRACT

Non-stationarity in bivariate time series of counts may be induced by a number of time-varying covariates affecting the bivariate responses, due to which the innovation terms of the individual series as well as the bivariate dependence structure become non-stationary. So far, in the existing models, the innovation terms of the individual INAR(1) series and the dependence structure are assumed to be constant even though the individual time series are non-stationary. Under this assumption, the reliability of the regression and correlation estimates is questionable. Besides, the existing estimation methodologies, such as conditional maximum likelihood estimation (CMLE) and composite likelihood estimation, are computationally intensive. To address these issues, this paper proposes a BINAR(1) model where the innovation series follow a bivariate Poisson distribution under some non-stationary distributional assumptions. The method of generalized quasi-likelihood (GQL) is used to estimate the regression effects, while the serial and bivariate correlations are estimated using a robust moment estimation technique. The model and estimation method are applied to simulated data. The GQL method is also compared with the CMLE, generalized method of moments (GMM) and generalized estimating equations (GEE) approaches; simulation studies show that GQL yields more efficient estimates than GMM, and equally or slightly more efficient estimates than CMLE and GEE.
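The BINAR(1) structure itself is straightforward to simulate: each marginal series follows a binomial-thinning INAR(1) recursion, and the cross-dependence enters through bivariate Poisson innovations built from a common shock. The sketch below uses constant innovation rates for brevity, whereas the paper lets the means vary with time-dependent covariates; all parameter values are illustrative.

```python
import numpy as np

def bivariate_poisson(l1, l2, l12, rng):
    """Common-shock construction: (U + W, V + W) with U, V, W
    independent Poissons, giving innovations correlated via W."""
    w = rng.poisson(l12)
    return rng.poisson(l1) + w, rng.poisson(l2) + w

def simulate_binar1(T, a1, a2, l1, l2, l12, rng):
    """BINAR(1): X_t = a o X_{t-1} + e_t, with 'o' binomial thinning
    and bivariate Poisson innovations e_t (stationary rates here)."""
    x = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        e1, e2 = bivariate_poisson(l1, l2, l12, rng)
        x[t, 0] = rng.binomial(x[t - 1, 0], a1) + e1   # thinning
        x[t, 1] = rng.binomial(x[t - 1, 1], a2) + e2
    return x

rng = np.random.default_rng(3)
series = simulate_binar1(500, 0.4, 0.3, 1.0, 1.5, 0.5, rng)
print(np.corrcoef(series.T))   # positive cross-correlation from l12
```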

5.
As an alternative to the local partial likelihood method of Tibshirani and Hastie and of Fan, Gijbels, and King, a global partial likelihood method is proposed to estimate the covariate effect in a nonparametric proportional hazards model, λ(t|x) = exp{ψ(x)}λ_0(t). The estimator, ψ̂(x), reduces to the Cox partial likelihood estimator if the covariate is discrete. The estimator is shown to be consistent and semiparametrically efficient for linear functionals of ψ(x). Moreover, Breslow-type estimation of the cumulative baseline hazard function, using the proposed estimator ψ̂(x), is proved to be efficient. The asymptotic bias and variance are derived under regularity conditions. Computation of the estimator involves an iterative but simple algorithm. Extensive simulation studies provide evidence supporting the theory. The method is illustrated with the Stanford heart transplant data set. The proposed global approach is also extended to a partially linear proportional hazards model and found to provide efficient estimation of the slope parameter. Supplementary materials for this article are available online.

6.
We consider efficient joint estimation of regression and association parameters for bivariate current status data under the marginal proportional hazards model. Current status data occur in many fields, including demographic studies and tumorigenicity experiments, and several approaches have been proposed for regression analysis of univariate current status data. We discuss bivariate current status data and propose an efficient score estimation approach for the problem. In this approach, a copula model is used for the joint survival function, with the survival times assumed to follow the proportional hazards model marginally. Simulation studies are performed to evaluate the proposed estimates and suggest that the approach works well in practical situations. A real-life data application is provided for illustration.

7.
Under the case-cohort design introduced by Prentice (Biometrika 73:1–11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered, allowing the effects of some covariates to be time-varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed, which allows auxiliary information that correlates with the phase-two covariates to be used to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to data from a preventive HIV vaccine efficacy trial.
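The augmentation idea is easiest to see in the simplest possible setting: estimating a mean when the outcome is observed only with known probability. The plain IPW estimator uses complete cases alone, while the augmented version adds a regression-based term for every subject, which is where the efficiency gain over the complete-case method comes from. The sketch below illustrates that generic principle only, not the paper's additive hazards estimator; the sampling probabilities `pi` and outcome model `m_hat` are taken as known for illustration.

```python
import numpy as np

def aipw_mean(y, observed, pi, m_hat):
    """Augmented IPW estimate of E[Y] with Y missing at random.

    pi    : P(observed | auxiliary data), the sampling probability
    m_hat : regression prediction of Y from fully observed data
    The second (augmentation) term has mean zero but recovers
    information from subjects with missing Y.
    """
    y_filled = np.where(observed, y, 0.0)   # Y used only when observed
    return np.mean(observed * y_filled / pi
                   - (observed - pi) / pi * m_hat)

# Toy example: Y depends on a fully observed auxiliary Z.
rng = np.random.default_rng(4)
n = 2000
z = rng.standard_normal(n)
y = 2 + z + rng.standard_normal(n)
pi = 1 / (1 + np.exp(-(0.5 + z)))            # known sampling model
observed = rng.random(n) < pi
m_hat = 2 + z                                # outcome model E[Y | Z]
print(aipw_mean(y, observed, pi, m_hat))     # close to E[Y] = 2
```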

8.
Summary. The Cox proportional hazards model, which is widely used for the analysis of treatment and prognostic effects with censored survival data, makes the assumption that the hazard ratio is constant over time. Nonparametric estimators have been developed for an extended model in which the hazard ratio is allowed to change over time. Estimators based on residuals are appealing as they are easy to use and relate in a simple way to the more restricted Cox model estimator. After fitting a Cox model and calculating the residuals, one can obtain a crude estimate of the time-varying coefficients by adding a smooth of the residuals to the initial (constant) estimate. Treating the crude estimate as the fit, one can re-estimate the residuals. Iteration leads to consistent estimation of the nonparametric time-varying coefficients. This approach leads to clear guidelines for residual analysis in applications. The results are illustrated by an analysis of the Medical Research Council's myeloma trials, and by simulation.
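For a single covariate the recipe can be written out in a few dozen lines: fit the ordinary Cox model by Newton-Raphson, form the Schoenfeld residuals at the event times, scale them, kernel-smooth over time, and add the smooth to the constant estimate. The sketch below performs one such pass, whereas the paper iterates to consistency; it assumes no tied event times and uses the standard Grambsch-Therneau scaling and a fixed Gaussian-kernel bandwidth as illustrative choices.

```python
import numpy as np

def cox_fit_1d(time, event, x, iters=25):
    """Newton-Raphson for a one-covariate Cox model (no ties)."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    beta = 0.0
    for _ in range(iters):
        eta = np.exp(beta * x)
        # Risk-set sums over {j : t_j >= t_i} (data sorted by time).
        s0 = np.cumsum(eta[::-1])[::-1]
        s1 = np.cumsum((eta * x)[::-1])[::-1]
        s2 = np.cumsum((eta * x * x)[::-1])[::-1]
        xbar = s1 / s0
        score = np.sum(event * (x - xbar))
        info = np.sum(event * (s2 / s0 - xbar ** 2))
        beta += score / info
    return beta, time, event, x, xbar, info

def time_varying_beta(time, event, x, h=1.0):
    """One pass of the residual-smoothing recipe: scaled Schoenfeld
    residuals, kernel-smoothed over the event times, added to the
    constant Cox estimate (the article iterates this step)."""
    beta, t, d, xs, xbar, info = cox_fit_1d(time, event, x)
    et = t[d == 1]                           # event times
    resid = (xs - xbar)[d == 1]              # Schoenfeld residuals
    scaled = beta + resid * len(et) / info   # Grambsch-Therneau scaling
    w = np.exp(-0.5 * ((et[:, None] - et[None, :]) / h) ** 2)
    return et, (w * scaled).sum(axis=1) / w.sum(axis=1)

# Toy data with a constant true effect, as a smoke test.
rng = np.random.default_rng(5)
n = 300
x = rng.standard_normal(n)
t = rng.exponential(np.exp(-0.7 * x))        # hazard exp(0.7 x)
c = rng.exponential(2.0, n)
et, beta_t = time_varying_beta(np.minimum(t, c), (t <= c), x)
```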

9.
Abstract

In general, survival data are time-to-event data, such as time to death, time to appearance of a tumor, or time to recurrence of a disease. Models for survival data have frequently been based on the proportional hazards model proposed by Cox. The Cox model is applied intensively in the social, medical, behavioral and public health sciences. In this paper we propose a more efficient sampling method for recruiting subjects for survival analysis: a Moving Extreme Ranked Set Sampling (MERSS) scheme, with ranking based on an easy-to-evaluate baseline auxiliary variable known to be associated with survival time. This paper demonstrates that this approach provides a more powerful testing procedure, as well as a more efficient estimate of the hazard ratio, than that based on simple random sampling (SRS). Theoretical derivations and simulation studies are provided. The Iowa 65+ Rural study data are used to illustrate the methods developed in this paper.
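One common version of the MERSS selection step is easy to state in code: draw candidate sets of increasing size from the population, rank each set on the cheap auxiliary variable, and keep only the extreme unit from each set for expensive follow-up. The sketch below implements that variant; the paper's exact scheme and set sizes may differ, and the auxiliary variable here is synthetic.

```python
import numpy as np

def merss_indices(aux, m, rng):
    """Moving Extreme Ranked Set Sampling (one common variant).

    Draws candidate sets of sizes 1, 2, ..., m without replacement
    and keeps, from each set, the unit with the largest value of the
    easy-to-measure auxiliary variable; only these m units would
    then have their survival time ascertained.
    """
    chosen = []
    available = np.arange(len(aux))
    for size in range(1, m + 1):
        pick = rng.choice(available, size=size, replace=False)
        best = pick[np.argmax(aux[pick])]      # rank on the auxiliary
        chosen.append(best)
        available = np.setdiff1d(available, pick)
    return np.array(chosen)

rng = np.random.default_rng(6)
aux = rng.normal(size=1000)        # baseline auxiliary variable
sample = merss_indices(aux, m=10, rng=rng)
```

The measured units are systematically pushed toward the informative extremes of the auxiliary variable, which is what drives the efficiency gain over SRS that the abstract reports.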

10.
We incorporate a random effect into a multivariate discrete proportional hazards model and propose an efficient semiparametric Bayesian estimation method. By introducing a prior process for the parameters of the baseline hazards, we consider nonparametric estimation of the baseline hazard function. Using a state-space representation, we derive a dynamic model of the baseline hazard function and propose an efficient block sampler for the Markov chain Monte Carlo method. A numerical example using kidney patient data is given.

11.
Summary. We develop an efficient way to select the best subset autoregressive model with exogenous variables and generalized autoregressive conditional heteroscedasticity errors. One main feature of our method is to select important autoregressive and exogenous variables and, at the same time, to estimate the unknown parameters. The proposed method uses the stochastic search idea. By adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a large number of possible choices. A simulation experiment shows that the method is very effective. Misspecification in the mean equation can also be detected by our model selection method. In an application to the stock-market data of seven countries, the lag-1 US return is found to have a strong influence on the other stock-market returns.
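The stochastic search idea is to explore the 2^p inclusion patterns with a Markov chain instead of enumerating them. The sketch below is a deliberately simplified stand-in: a Metropolis-type walk over inclusion indicators for an ordinary regression, scored by BIC as a crude proxy for the marginal likelihood, rather than the paper's full Bayesian sampler for ARX-GARCH models.

```python
import numpy as np

def bic(y, X, subset):
    """BIC of an OLS fit on the included columns."""
    n = len(y)
    Xs = X[:, subset] if subset.any() else np.ones((n, 1))
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + subset.sum() * np.log(n)

def stochastic_search(y, X, n_iter=2000, rng=None):
    """Metropolis-type walk over inclusion indicators: propose
    flipping one variable in or out, accept by the BIC difference."""
    rng = rng or np.random.default_rng()
    p = X.shape[1]
    gamma = rng.random(p) < 0.5            # random initial subset
    best, best_bic = gamma.copy(), bic(y, X, gamma)
    for _ in range(n_iter):
        prop = gamma.copy()
        prop[rng.integers(p)] ^= True      # flip one indicator
        if rng.random() < np.exp((bic(y, X, gamma) - bic(y, X, prop)) / 2):
            gamma = prop
            cur = bic(y, X, gamma)
            if cur < best_bic:
                best, best_bic = gamma.copy(), cur
    return best

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 8))
y = X[:, 1] - 0.5 * X[:, 4] + rng.standard_normal(300)
print(stochastic_search(y, X, rng=rng))    # flags columns 1 and 4
```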

12.
This paper focuses on efficient estimation, optimal rates of convergence and effective algorithms in the partly linear additive hazards regression model with current status data. We use polynomial splines to estimate both the cumulative baseline hazard function, subject to a monotonicity constraint, and the nonparametric regression functions, with no such constraint. We propose simultaneous sieve maximum likelihood estimation of the regression and nuisance parameters and show that the resultant estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound. In addition, we show that the rates of convergence for the estimators of the nonparametric functions are optimal. We implement the proposed estimation through a backfitting algorithm on generalized linear models. We conduct simulation studies to examine the finite-sample performance of the proposed estimation method and present an analysis of renal function recovery data for illustration.

13.
Abstract. A common practice for obtaining an efficient semiparametric estimate is to iteratively maximize the (penalized) full log-likelihood with respect to its Euclidean parameter and its functional nuisance parameter. A rigorous theoretical study of this semiparametric iterative estimation approach is the main purpose of this study. We first show that the grid search algorithm produces an initial estimate with the proper convergence rate. Our second contribution is a formula for calculating the minimal number of iterations k* needed to produce an efficient estimate. We discover that (i) k* depends on the convergence rates of the initial estimate and the nuisance functional estimate, and (ii) k* iterations are also sufficient for recovering the estimation sparsity in high-dimensional data. The last contribution is a novel construction of the efficient estimator that does not require knowing the explicit expression of the efficient score function. The above general conclusions apply to semiparametric models estimated under various regularizations, for example, kernel or penalized estimation. As far as we are aware, this study provides a first general theoretical justification for the 'one-/two-step iteration' phenomena observed in the semiparametric literature.

14.
The weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or accelerated failure time model. Although well established statistically, the estimator defined as a consistent root has rather poor computational properties, because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable from it in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided.
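It helps to see why root-finding is hard here. With Gehan weights, the estimating function is a double sum of indicators of residual orderings, hence a step function of the regression parameter: neither continuous nor monotone, exactly as the abstract says. A minimal sketch of that function for the AFT model follows, with illustrative toy data; the article's asymptotics-guided Newton step and its censored-quantile-regression ingredients are not reproduced here.

```python
import numpy as np

def gehan_score(beta, log_y, delta, X):
    """Gehan-weighted log-rank estimating function for the AFT model
    log T = X @ beta + error:
        U(beta) = sum_{i,j} delta_i (X_i - X_j) I{e_i <= e_j},
    a step function of beta, which is what makes naive root-finding
    unstable."""
    e = log_y - X @ beta                             # residuals
    comp = (e[:, None] <= e[None, :]).astype(float)  # I{e_i <= e_j}
    w = delta[:, None] * comp                        # (n, n) weights
    diff = X[:, None, :] - X[None, :, :]             # X_i - X_j
    return (w[:, :, None] * diff).sum(axis=(0, 1))

# Toy data from a log-normal AFT model with one covariate.
rng = np.random.default_rng(8)
n = 200
X = rng.standard_normal((n, 1))
log_t = X[:, 0] + rng.standard_normal(n)             # true beta = 1
log_c = rng.normal(2, 1, n)                          # censoring
delta = (log_t <= log_c).astype(float)
log_y = np.minimum(log_t, log_c)
print(gehan_score(np.array([1.0]), log_y, delta, X)) # roughly mean-zero
```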

15.
We introduce a new estimator of the conditional survival function, given some subset of the covariate values, under proportional hazards regression. The new estimate does not require estimating the baseline cumulative hazard function. An estimate of the variance is given and is easy to compute, involving only quantities that are routinely calculated in a Cox model analysis. The asymptotic normality of the new estimate is shown by using a central limit theorem for Kaplan–Meier integrals. We indicate the straightforward extension of the estimation procedure to models with multiplicative relative risks, including non-proportional hazards, and to stratified and frailty models. The estimator is applied to a gastric cancer study where it is of interest to predict patients' survival based only on measurements obtained before surgery, the time at which the most important prognostic variable, stage, becomes known.

16.
Twenty-four-hour urinary excretion of nicotine equivalents, a biomarker for exposure to cigarette smoke, has been widely used in biomedical studies in recent years. Its accurate estimation is important for examining human exposure to tobacco smoke. The objective of this article is to compare bootstrap confidence intervals for nicotine equivalents with the standard confidence intervals derived from the linear mixed model (LMM) and generalized estimating equations (GEE). We use the percentile bootstrap method because it has practical value for real-life applications and works well with the nicotine data. To preserve the within-subject correlation of nicotine equivalents between repeated measures, we bootstrap the repeated measures of each subject as a vector. The results indicate that the bootstrapped estimates are in most cases better than those from the LMM and GEE without the bootstrap.
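Bootstrapping subjects as vectors is a one-liner once the data are arranged with one row per subject: resample rows with replacement and take percentile quantiles of the resulting estimates. A minimal sketch follows, using a simple mean as the statistic; the article's statistic of interest and its LMM/GEE comparators are not reproduced here.

```python
import numpy as np

def subject_bootstrap_ci(data, stat, B=2000, alpha=0.05, rng=None):
    """Percentile bootstrap that resamples whole subjects.

    data : (n_subjects, n_repeats) array of repeated measures; each
           row is resampled as a vector, preserving within-subject
           correlation.
    stat : function mapping such an array to a scalar estimate.
    """
    rng = rng or np.random.default_rng()
    n = data.shape[0]
    reps = np.array([stat(data[rng.integers(0, n, n)])
                     for _ in range(B)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Toy correlated repeated measures; CI for the overall mean.
rng = np.random.default_rng(9)
subj = rng.normal(5, 1, size=(60, 1))           # subject effects
data = subj + rng.normal(0, 0.5, size=(60, 4))  # four repeats each
print(subject_bootstrap_ci(data, lambda d: d.mean(), rng=rng))
```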

17.
For a GARCH(1,1) sequence or an AR(1) model with ARCH(1) errors, one can estimate the tail index by solving an estimating equation with the unknown parameters replaced by the quasi-maximum likelihood estimates, and a profile empirical likelihood method can be employed to construct an effective confidence interval for the tail index. However, this requires that the errors of such a model have at least a finite fourth moment. In this article, we show that the finite fourth moment requirement can be relaxed by employing a least absolute deviations estimate for the unknown parameters, noting that the estimating equation for determining the tail index is invariant to a scale transformation of the underlying model.
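The tail-index step in this literature is typically a Hill-type estimator applied to the absolute residuals of the fitted model. For reference, here is the classical Hill estimator on raw heavy-tailed data; in the setting of the article one would apply it to residuals standardized by the QMLE or, as proposed, the least absolute deviations fit. The choice of k is illustrative.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha, computed from the
    k largest order statistics of |x|:
        alpha_hat = 1 / mean(log(top-k values / (k+1)-th value))."""
    xs = np.sort(np.abs(x))[::-1]              # descending order
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

# Toy heavy-tailed sample: standard Pareto with alpha = 3.
rng = np.random.default_rng(10)
x = rng.pareto(3.0, size=5000) + 1.0
print(hill_tail_index(x, k=200))               # roughly 3
```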

18.
The analysis of a sample of curves can be done by self-modelling regression methods. Within this framework we follow the ideas of nonparametric maximum likelihood estimation known from event history analysis and the counting process set-up. We derive an infinite dimensional score equation and from there we suggest an algorithm to estimate the shape function for a simple shape invariant model. The nonparametric maximum likelihood estimator that we find turns out to be a Nadaraya–Watson-like estimator, but unlike in the usual kernel smoothing situation we do not need to select a bandwidth or even a kernel function, since the score equation automatically selects the shape and the smoothing parameter for the estimation. We apply the method to a sample of electrophoretic spectra to illustrate how it works.
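A simple shape-invariant model is y_i(t) = f(t - b_i) plus noise. A conventional fit (unlike the NPMLE of the paper) alternates between pooling the shift-aligned observations through a kernel smoother to estimate the common shape f and updating each curve's shift by a grid search. The sketch below does exactly that, with a fixed bandwidth and a fixed shift grid; these are precisely the manual smoothing choices the article's score equation is designed to remove.

```python
import numpy as np

def fit_shape_invariant(t, curves, n_iter=5, h=0.05):
    """Alternating fit of y_i(t) = f(t - b_i) + noise: kernel-pool
    the aligned points for f, then grid-search each shift b_i."""
    n_curves = curves.shape[0]
    b = np.zeros(n_curves)
    shift_grid = np.linspace(-0.2, 0.2, 41)
    f = None
    for _ in range(n_iter):
        ta = (t[None, :] - b[:, None]).ravel()   # aligned times
        ya = curves.ravel()
        def f(u, ta=ta, ya=ya):
            w = np.exp(-0.5 * ((u[:, None] - ta[None, :]) / h) ** 2)
            return (w * ya).sum(axis=1) / w.sum(axis=1)
        for i in range(n_curves):
            sse = [np.sum((curves[i] - f(t - s)) ** 2)
                   for s in shift_grid]
            b[i] = shift_grid[np.argmin(sse)]
        b -= b.mean()     # center shifts (identifiability)
    return f, b

# Toy sample of shifted Gaussian bumps.
rng = np.random.default_rng(11)
t = np.linspace(0, 1, 80)
true_b = rng.uniform(-0.15, 0.15, 6)
curves = np.exp(-0.5 * ((t[None, :] - 0.5 - true_b[:, None]) / 0.08) ** 2)
curves = curves + 0.05 * rng.standard_normal(curves.shape)
f_hat, b_hat = fit_shape_invariant(t, curves)
```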

19.
In randomized clinical trials, a treatment effect on a time-to-event endpoint is often estimated by the Cox proportional hazards model. The maximum partial likelihood estimator does not make sense if the proportional hazards assumption is violated. Xu and O'Quigley (Biostatistics 1:423–439, 2000) proposed an estimating equation that provides an interpretable estimator for the treatment effect under model misspecification: it yields a consistent estimator of the log-hazard ratio between the treatment groups if the model is correctly specified, and it is interpreted as an average log-hazard ratio over time even if the model is misspecified. However, the method requires the assumption that censoring is independent of treatment group, which is more restrictive than the assumption needed for maximum partial likelihood estimation and is often violated in practice. In this paper, we propose an alternative estimating equation. Our method provides an estimator with the same property as that of Xu and O'Quigley under the usual assumption for maximum partial likelihood estimation. We show that our estimator is consistent and asymptotically normal, and we derive a consistent estimator of its asymptotic variance. If the proportional hazards assumption holds, the efficiency of the estimator can be improved by applying the covariate adjustment method based on the semiparametric theory proposed by Lu and Tsiatis (Biometrika 95:679–694, 2008).

20.
Hedonic price models are commonly used in the study of markets for various goods, most notably those for wine, art, and jewelry. These models were developed to estimate implicit prices of product attributes within a given product class, where for some goods, such as wine, substantial product differentiation exists. To address this issue, recent research on wine prices employs local polynomial regression clustering (LPRC) for estimating regression models under class uncertainty. This study demonstrates that a superior empirical approach, estimation of a mixture model, is applicable to a hedonic model of wine prices, provided only that the dependent variable in the model is rescaled. The present study also catalogues several advantages of estimating mixture models over LPRC modeling.
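In a hedonic setting, "estimating a mixture model" typically means a finite mixture of regressions: each latent class has its own attribute coefficients, and class memberships are estimated jointly with them, usually by EM. A minimal sketch of that EM follows, with a synthetic two-regime example standing in for rescaled log prices and wine attributes; nothing here reproduces the article's data or specification.

```python
import numpy as np

def em_mix_reg(X, y, K=2, n_iter=200, rng=None):
    """EM for a K-component mixture of linear regressions."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    beta = rng.standard_normal((K, p))
    sigma2 = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior class probabilities per observation.
        resid = y[:, None] - X @ beta.T                     # (n, K)
        dens = (np.exp(-0.5 * resid ** 2 / sigma2)
                / np.sqrt(2 * np.pi * sigma2))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares within each class.
        for k in range(K):
            w = r[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            sigma2[k] = np.sum(w * (y - X @ beta[k]) ** 2) / w.sum()
        pi = r.mean(axis=0)
    return beta, sigma2, pi

# Toy: two latent price regimes sharing one attribute.
rng = np.random.default_rng(12)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
z = rng.random(n) < 0.5
y = np.where(z, 1 + 2 * X[:, 1], 4 - X[:, 1]) + 0.3 * rng.standard_normal(n)
print(em_mix_reg(X, y)[0])    # two coefficient vectors recovered
```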
