Similar Articles
 20 similar articles found.
1.
Abstract. We propose a non-linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. This estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation, we derive an information criterion based on the idea of universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1-penalty is non-differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators on a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features, and behaves well with ties.
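
A minimal sketch of the kind of objective described above, under assumed settings: a discretized negative non-parametric log-likelihood plus a total-variation penalty over a grid, with positivity and integration-to-one enforced as constraints. The grid, the penalty weight lam and the eps-smoothing of the absolute value are illustrative choices, and a generic constrained optimizer is used instead of the authors' dual block coordinate relaxation method.

```python
# Minimal sketch: discretized penalized log-likelihood density estimate.
# NOT the authors' solver; the non-differentiable |.| in the total-variation
# penalty is smoothed with eps so a generic optimizer can be applied.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.3, 150), rng.normal(3, 1.0, 150)])

grid = np.linspace(x.min() - 1, x.max() + 1, 60)
dx = grid[1] - grid[0]
idx = np.clip(np.searchsorted(grid, x), 1, len(grid) - 1)   # grid cell of each obs.

def objective(f, lam=2.0, eps=1e-6):
    nll = -np.sum(np.log(f[idx] + 1e-12))            # non-parametric log-likelihood
    tv = np.sum(np.sqrt(np.diff(f) ** 2 + eps))      # smoothed total variation
    return nll + lam * tv

cons = {"type": "eq", "fun": lambda f: np.sum(f) * dx - 1.0}   # integrates to 1
bounds = [(1e-10, None)] * len(grid)                           # positive everywhere
f0 = np.full(len(grid), 1.0 / (dx * len(grid)))
res = minimize(objective, f0, method="SLSQP", bounds=bounds,
               constraints=cons, options={"maxiter": 500})
density = res.x
print("estimated density at the grid midpoint:", density[len(grid) // 2])
```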

2.
Abstract. We consider models based on multivariate counting processes, including multi-state models. These models are specified semi-parametrically by a set of functions and real parameters. We consider inference for these models based on coarsened observations, focusing on families of smooth estimators such as those produced by penalized likelihood. An important issue is the choice of model structure, for instance, the choice between a Markov and some non-Markov models. We define in a general context the expected Kullback–Leibler criterion and we show that the likelihood-based cross-validation (LCV) is a nearly unbiased estimator of it. We give a general form of an approximation of the leave-one-out LCV. The approach is studied by simulations, and it is illustrated by estimating a Markov and two semi-Markov illness–death models with an application to dementia using data from a large cohort study.
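
As a much simpler illustration of likelihood cross-validation than the multi-state setting above, the sketch below uses the leave-one-out log-likelihood to pick a kernel density bandwidth; the simulated data, Gaussian kernel and bandwidth grid are arbitrary assumptions, not the paper's setting.

```python
# Sketch of likelihood cross-validation (LCV): choose the bandwidth that
# maximizes the leave-one-out log-likelihood of a kernel density estimate.
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.5, size=200)
n = len(x)

def loo_log_likelihood(h):
    # Gaussian kernel; for each i, the density at x_i is estimated without x_i.
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)                       # leave-one-out
    f_loo = k.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(f_loo + 1e-300))

bandwidths = np.linspace(0.1, 2.0, 40)
scores = [loo_log_likelihood(h) for h in bandwidths]
h_lcv = bandwidths[int(np.argmax(scores))]
print("LCV-selected bandwidth:", round(h_lcv, 3))
```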

3.
In this article, we propose a nonparametric estimator for percentiles of the time-to-failure distribution obtained from a linear degradation model using the kernel density method. The properties of the proposed kernel estimator are investigated and compared with the well-known maximum likelihood and ordinary least squares estimators via a simulation technique. The mean squared error and the length of the bootstrap confidence interval are used as the criteria for the comparisons. The simulation study shows that the performance of the kernel estimator is acceptable as a general estimator. When the distribution of the data is assumed to be known, the maximum likelihood and ordinary least squares estimators perform better than the kernel estimator, while the kernel estimator is superior when that distributional assumption is violated. The estimators are also compared on a real data set.
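
A rough sketch of the idea under assumed settings (simulated linear paths, a degradation threshold of 10, a rule-of-thumb bandwidth) that are not taken from the article: pseudo failure times are extrapolated from per-unit least-squares fits, and a kernel-smoothed CDF is inverted to obtain a percentile.

```python
# Sketch: kernel-based percentile of the time-to-failure distribution from
# pseudo failure times of a linear degradation model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_units, times, threshold = 30, np.arange(1, 11), 10.0

# Simulate linear degradation paths y = a + b*t + noise, fit each by OLS, and
# extrapolate the time at which the fitted line crosses the threshold.
pseudo_failure = []
for _ in range(n_units):
    a, b = rng.normal(1.0, 0.2), rng.normal(0.8, 0.1)
    y = a + b * times + rng.normal(0, 0.1, len(times))
    bhat, ahat = np.polyfit(times, y, 1)            # slope, intercept
    pseudo_failure.append((threshold - ahat) / bhat)
T = np.array(pseudo_failure)

h = 1.06 * T.std(ddof=1) * len(T) ** (-1 / 5)       # rule-of-thumb bandwidth

def kernel_cdf(t):
    return norm.cdf((t - T) / h).mean()

grid = np.linspace(T.min() - 3 * h, T.max() + 3 * h, 2000)
cdf = np.array([kernel_cdf(t) for t in grid])
p10 = grid[np.searchsorted(cdf, 0.10)]              # 10th percentile estimate
print("kernel estimate of the 10th percentile of time-to-failure:", round(p10, 2))
```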

4.
This paper is concerned with the ridge estimation of fixed and random effects in the context of Henderson's mixed model equations in the linear mixed model. For this purpose, a penalized likelihood method is proposed. A linear combination of the ridge estimators for fixed and random effects is compared to a linear combination of the best linear unbiased estimators for fixed and random effects under the mean-square error (MSE) matrix criterion. Additionally, a method for choosing the biasing parameter based on the MSE of the ridge estimator is given. A real data analysis is provided to illustrate the theoretical results, and a simulation study is conducted to characterize the performance of the ridge and best linear unbiased estimator approaches in the linear mixed model.
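
The sketch below shows one plausible form of this setup, assuming known variance components and an arbitrary biasing parameter k: Henderson's mixed model equations with a ridge term added to the fixed-effects block. It only illustrates the linear-algebra structure; the paper's penalized likelihood derivation and its MSE-based choice of k are not reproduced.

```python
# Sketch: Henderson's mixed model equations with a ridge term k on the
# fixed-effects block. sigma2_e, sigma2_u and k are treated as known here.
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 100, 3, 5
X = rng.normal(size=(n, p))                        # fixed-effect design
Z = np.eye(q)[rng.integers(0, q, n)]               # random-effect (group) design
beta_true = np.array([1.0, -0.5, 2.0])
u_true = rng.normal(0, 1.0, q)
y = X @ beta_true + Z @ u_true + rng.normal(0, 1.0, n)

sigma2_e, sigma2_u, k = 1.0, 1.0, 0.5
lam = sigma2_e / sigma2_u                          # G^{-1} contribution

# Block system: [[X'X + kI, X'Z], [Z'X, Z'Z + lam*I]] [beta; u] = [X'y; Z'y]
top = np.hstack([X.T @ X + k * np.eye(p), X.T @ Z])
bottom = np.hstack([Z.T @ X, Z.T @ Z + lam * np.eye(q)])
lhs = np.vstack([top, bottom])
rhs = np.concatenate([X.T @ y, Z.T @ y])

sol = np.linalg.solve(lhs, rhs)
beta_ridge, u_ridge = sol[:p], sol[p:]
print("ridge-type fixed-effect estimates:", np.round(beta_ridge, 3))
```

Setting k = 0 recovers the usual mixed model equations whose solution gives the best linear unbiased estimator and predictor being compared against.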

5.
Boundary and Bias Correction in Kernel Hazard Estimation
A new class of local linear hazard estimators based on weighted least squares kernel estimation is considered. The class includes the kernel hazard estimator of Ramlau-Hansen (1983), which has the same boundary correction property as the local linear regression estimator (see Fan & Gijbels, 1996). It is shown that all the local linear estimators in the class have the same pointwise asymptotic properties. We derive the multiplicative bias correction of the local linear estimator. In addition, we propose a new bias correction technique based on bootstrap estimation of the additive bias. This latter method has excellent theoretical properties. Based on an extensive simulation study in which we compare the performance of competing estimators, we also recommend the use of the additive bias correction in applied work.
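
For orientation, a sketch of the uncorrected Ramlau-Hansen-type estimator that such corrections start from: Nelson-Aalen increments smoothed with a Gaussian kernel. The simulated censoring scheme and bandwidth are assumptions, and no boundary or bias correction is applied.

```python
# Sketch: kernel hazard estimate obtained by smoothing Nelson-Aalen
# increments dN/Y at the observed event times with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(4)
n = 300
t_event = rng.exponential(2.0, n)                  # true hazard = 0.5
t_cens = rng.exponential(4.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = n - np.arange(n)                         # number at risk just before each time
increments = event / at_risk                       # Nelson-Aalen increments dN / Y

def kernel_hazard(t, h=0.5):
    w = np.exp(-0.5 * ((t - time) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.sum(w * increments)

grid = np.linspace(0.5, 4.0, 8)
print([round(kernel_hazard(t), 3) for t in grid])  # roughly 0.5 away from the boundary
```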

6.
Abstract. A new kernel distribution function (df) estimator based on a non-parametric transformation of the data is proposed. It is shown that the asymptotic bias and mean squared error of the estimator are considerably smaller than those of the standard kernel df estimator. For the practical implementation of the new estimator a data-based choice of the bandwidth is proposed. Two possible areas of application are the non-parametric smoothed bootstrap and survival analysis. In the latter case new estimators for the survival function and the mean residual life function are derived.
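
A sketch of the standard kernel df estimator that serves as the benchmark above: an integrated Gaussian kernel with a rule-of-thumb bandwidth. The non-parametric transformation and the data-based bandwidth choice proposed in the abstract are not shown.

```python
# Sketch of the standard kernel distribution function estimator
# F_hat(t) = (1/n) * sum_i Phi((t - X_i) / h).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = rng.weibull(1.5, size=200) * 3.0
n = len(x)
h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)           # rule-of-thumb bandwidth

def kernel_df(t):
    return norm.cdf((t - x) / h).mean()

def empirical_df(t):
    return np.mean(x <= t)

for t in (1.0, 2.0, 4.0):
    print(t, round(kernel_df(t), 3), round(empirical_df(t), 3))
```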

7.
Four strategies for bias correction of the maximum likelihood estimator of the parameters in the Type I generalized logistic distribution are studied. First, we consider an analytic bias-corrected estimator, which is obtained by deriving an analytic expression for the bias to order n^(-1); second, a method based on modifying the likelihood equations; third, the jackknife bias-corrected estimator; and fourth, two bootstrap bias-corrected estimators. All bias-corrected estimators are compared by simulation. Finally, an example with a real data set is also presented.
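
A generic sketch of the bootstrap bias-correction strategy, applied to a toy exponential-rate MLE rather than to the Type I generalized logistic distribution: the bias is estimated by the mean bootstrap excess over the MLE and then subtracted.

```python
# Sketch of bootstrap bias correction for a generic one-parameter MLE.
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(scale=1 / 2.0, size=30)        # true rate = 2
mle = 1.0 / x.mean()                               # MLE of the rate

B = 2000
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=len(x), replace=True)
    boot[b] = 1.0 / xb.mean()

bias_hat = boot.mean() - mle                       # estimated bias of the MLE
mle_bc = mle - bias_hat                            # bootstrap bias-corrected MLE
print("MLE:", round(mle, 3), " bias-corrected:", round(mle_bc, 3))
```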

8.
Let g(x1, …, xk) be a symmetric function of k arguments. Let U be a U-statistic based on a random sample of size n with kernel function g. In this paper, the problem of estimating var(U) is considered. Several estimators are compared by computer simulation, and we conclude that two estimators, one constructed as a U-statistic and the other a bootstrap estimator, give good estimates for many U-statistics.
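
A small sketch with the sample variance written as a degree-2 U-statistic (kernel g(x1, x2) = (x1 - x2)^2 / 2) and a bootstrap estimate of var(U), which is one of the two estimators the abstract reports as working well; the competing U-statistic-based variance estimator is not reproduced here.

```python
# Sketch: a degree-2 U-statistic and a bootstrap estimate of its variance.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
x = rng.normal(0, 2, size=30)

def u_stat(sample):
    # Average of the symmetric kernel g(a, b) = (a - b)^2 / 2 over all pairs;
    # this equals the usual unbiased sample variance.
    pairs = combinations(sample, 2)
    return np.mean([(a - b) ** 2 / 2 for a, b in pairs])

B = 500
boot_u = np.array([u_stat(rng.choice(x, size=len(x), replace=True)) for _ in range(B)])
print("U-statistic:", round(u_stat(x), 3))
print("bootstrap estimate of var(U):", round(boot_u.var(ddof=1), 4))
```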

9.
In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), the myopic algorithm (MA), and the full search algorithm (FSA), to select the optimum set of knots in a penalized spline method based on a smoothing parameter, which is chosen based on different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross-validation (GCV), restricted maximum likelihood (REML), and the Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criterion combinations in the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.
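
A sketch of just one of the many combinations compared above, under simplifying assumptions: a penalized spline built from a truncated power basis with equally spaced quantile knots (roughly a default selection) and a smoothing parameter chosen by GCV, applied to uncensored data without the synthetic-data conversion.

```python
# Sketch: penalized spline with a truncated power basis and GCV-selected
# smoothing parameter.
import numpy as np

rng = np.random.default_rng(8)
n = 150
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

knots = np.quantile(x, np.linspace(0.05, 0.95, 15))
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))        # penalize only the knot terms

def gcv(lam):
    A = B @ np.linalg.solve(B.T @ B + lam * D, B.T)  # hat matrix
    resid = y - A @ y
    return n * np.sum(resid ** 2) / (n - np.trace(A)) ** 2

lams = 10.0 ** np.linspace(-6, 2, 40)
lam_best = lams[int(np.argmin([gcv(l) for l in lams]))]
coef = np.linalg.solve(B.T @ B + lam_best * D, B.T @ y)
print("GCV-selected smoothing parameter:", lam_best)
```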

10.
In randomized clinical trials (RCTs), we may come across the situation in which some patients do not fully comply with their assigned treatment. For an experimental treatment with trichotomous levels, we derive the maximum likelihood estimator (MLE) of the risk ratio (RR) per level of dose increase in an RCT with noncompliance. We further develop three asymptotic interval estimators for the RR. To evaluate and compare the finite-sample performance of these interval estimators, we employ Monte Carlo simulation. When the number of patients per treatment is large, we find that all interval estimators derived in this paper can perform well. When the number of patients is not large, we find that the interval estimator using Wald's statistic can be liberal, while the interval estimator using the logarithmic transformation of the MLE can lose precision. We note that use of a bootstrap variance estimate in this case may alleviate these concerns. We further note that an interval estimator combining the interval estimators using Wald's statistic and the logarithmic transformation can generally perform well with respect to the coverage probability, and be generally more efficient than interval estimators using bootstrap variance estimates when RR > 1. Finally, we use data taken from a study of vitamin A supplementation to reduce mortality in preschool children to illustrate the use of these estimators.
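
A simplified sketch of two of the interval types, for a plain two-arm risk ratio that ignores the trichotomous dose levels and the noncompliance adjustment of the paper: a Wald-type interval on the original scale and an interval based on the logarithmic transformation of the estimate. The event counts are made-up illustrative numbers.

```python
# Sketch: Wald-type and log-transformed confidence intervals for a risk ratio.
import numpy as np
from scipy.stats import norm

x1, n1 = 30, 200        # events / patients, treatment arm (illustrative)
x0, n0 = 45, 200        # events / patients, control arm (illustrative)

p1, p0 = x1 / n1, x0 / n0
rr = p1 / p0
var_log_rr = (1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0)
z = norm.ppf(0.975)

# Interval from the logarithmic transformation of the estimator.
ci_log = np.exp(np.log(rr) + np.array([-1, 1]) * z * np.sqrt(var_log_rr))

# Wald-type interval on the original scale (delta-method standard error).
se_rr = rr * np.sqrt(var_log_rr)
ci_wald = rr + np.array([-1, 1]) * z * se_rr

print("RR:", round(rr, 3))
print("log-transform CI:", np.round(ci_log, 3))
print("Wald CI:", np.round(ci_wald, 3))
```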

11.
Log-normal linear models are widely used in applications, and it is often of interest to predict the response variable or to estimate the mean of the response variable at the original scale for a new set of covariate values. In this paper we consider the problem of efficient estimation of the conditional mean of the response variable at the original scale for log-normal linear models. Several existing estimators are reviewed first, including the maximum likelihood (ML) estimator, the restricted ML (REML) estimator, the uniformly minimum variance unbiased (UMVU) estimator, and a bias-corrected REML estimator. We then propose two estimators that minimize the asymptotic mean squared error and the asymptotic bias, respectively. A parametric bootstrap procedure is also described to obtain confidence intervals for the proposed estimators. Both the new estimators and the bootstrap procedure are very easy to implement. Comparisons of the estimators using simulation studies suggest that our estimators perform better than the existing ones, and that the bootstrap procedure yields confidence intervals with good coverage properties. A real application to estimating mean sediment discharge is used to illustrate the methodology.
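
A sketch of the starting point of this comparison, on simulated data: the naive back-transform exp(x0'beta_hat) versus the ML-type estimator exp(x0'beta_hat + sigma2_hat/2), with a parametric bootstrap interval for the latter. The bias- and MSE-minimizing estimators proposed in the paper are not reproduced.

```python
# Sketch: conditional mean at the original scale in a log-normal linear model.
import numpy as np

rng = np.random.default_rng(9)
n = 200
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
beta, sigma = np.array([1.0, 0.5]), 0.6
y = X @ beta + rng.normal(0, sigma, n)             # log-scale response

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / n   # ML variance estimate
x0 = np.array([1.0, 1.0])

naive = np.exp(x0 @ beta_hat)                      # ignores the variance term
ml_type = np.exp(x0 @ beta_hat + sigma2_hat / 2)   # ML-type conditional mean

# Parametric bootstrap interval for the ML-type estimator.
boot = []
for _ in range(2000):
    yb = X @ beta_hat + rng.normal(0, np.sqrt(sigma2_hat), n)
    bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
    s2b = np.sum((yb - X @ bb) ** 2) / n
    boot.append(np.exp(x0 @ bb + s2b / 2))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(naive, 3), round(ml_type, 3), (round(lo, 3), round(hi, 3)))
```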

12.
In high-dimensional data analysis, penalized likelihood estimators are shown to provide superior results in both variable selection and parameter estimation. A new algorithm, APPLE, is proposed for calculating the Approximate Path for Penalized Likelihood Estimators. Both convex penalties (such as LASSO) and folded concave penalties (such as MCP) are considered. APPLE efficiently computes the solution path for the penalized likelihood estimator using a hybrid of the modified predictor-corrector method and the coordinate-descent algorithm. APPLE is compared with several well-known packages via simulation and analysis of two gene expression data sets.
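
A sketch of the coordinate-descent ingredient only: cyclic soft-thresholding updates for the LASSO penalized least-squares problem at a single fixed lambda. The predictor-corrector path construction and the folded concave (MCP) case are not shown, and the data are simulated assumptions.

```python
# Sketch: coordinate descent with soft-thresholding for the LASSO,
# objective (1/(2n)) * ||y - X beta||^2 + lam * ||beta||_1.
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]       # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_norm2[j]
    return beta

rng = np.random.default_rng(10)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(0, 0.5, n)
print(np.round(lasso_cd(X, y, lam=0.1), 2))
```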

13.
Non-parametric estimation and bootstrap techniques play an important role in many areas of statistics. In the point process context, kernel intensity estimation has been limited to exploratory analysis because of its inconsistency, and some consistent alternatives have been proposed. Furthermore, most authors have considered kernel intensity estimators with scalar bandwidths, which can be very restrictive. This work focuses on a consistent kernel intensity estimator with an unconstrained bandwidth matrix. We propose a smooth bootstrap for inhomogeneous spatial point processes. The consistency of the bootstrap mean integrated squared error (MISE) as an estimator of the MISE of the consistent kernel intensity estimator proves the validity of the resampling procedure. Finally, we propose a plug-in bandwidth selection procedure based on the bootstrap MISE and compare its performance with several methods currently in use, through both a simulation study and an application to the spatial pattern of wildfires registered in Galicia (Spain) during 2006.
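
A sketch of the basic ingredient: a Gaussian kernel intensity estimate with an unconstrained 2x2 bandwidth matrix H. Edge correction, the consistency-restoring modification and the bootstrap bandwidth selector discussed above are omitted, and both the point pattern and H are arbitrary illustrative choices.

```python
# Sketch: kernel intensity estimate for a spatial point pattern with a full
# bandwidth matrix H (no edge correction).
import numpy as np

rng = np.random.default_rng(11)
pts = rng.uniform(0, 1, size=(200, 2))              # observed point pattern

H = np.array([[0.02, 0.01],
              [0.01, 0.03]])                         # unconstrained bandwidth matrix
H_inv = np.linalg.inv(H)
norm_const = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(H)))

def intensity(u):
    d = pts - u                                      # displacements to each point
    quad = np.einsum("ij,jk,ik->i", d, H_inv, d)     # d' H^{-1} d for each point
    return norm_const * np.sum(np.exp(-0.5 * quad))

print("intensity at the centre of the unit square:",
      round(intensity(np.array([0.5, 0.5])), 1))
```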

14.
The present paper studies the minimum Hellinger distance estimator by recasting it as the maximum likelihood estimator in a data-driven modification of the model density. In the process, the Hellinger distance itself is expressed as a penalized log-likelihood function. The penalty is the sum of the model probabilities over the non-observed values of the sample space. A comparison of the modified model density with the original data provides insight into the robustness of the minimum Hellinger distance estimator. Adjusting the amount of penalty leads to a class of minimum penalized Hellinger distance estimators, some members of which perform substantially better than the minimum Hellinger distance estimator at the model for small samples, without compromising the robustness properties of the latter.
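
A sketch of an ordinary (unpenalized) minimum Hellinger distance fit for a Poisson mean, to show the robustness behaviour the abstract refers to; the penalty term formed from the model mass on unobserved cells is not included, and the contaminated data are an illustrative assumption.

```python
# Sketch: minimum Hellinger distance estimation of a Poisson mean, computed
# over the observed cells only.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12)
x = rng.poisson(3.0, size=100)
x[:5] = 20                                          # a few gross outliers

values, counts = np.unique(x, return_counts=True)
d = counts / counts.sum()                           # empirical pmf on observed cells

def hellinger_sq(mu):
    f = poisson.pmf(values, mu)
    return np.sum((np.sqrt(d) - np.sqrt(f)) ** 2)

mhd = minimize_scalar(hellinger_sq, bounds=(0.1, 15.0), method="bounded").x
print("sample mean (non-robust MLE):", round(x.mean(), 2))
print("minimum Hellinger distance estimate:", round(mhd, 2))
```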

15.
This paper deals with convergence in the Mallows metric for classical multivariate kernel distribution function estimators. We prove convergence in the Mallows metric of a locally orientated kernel smooth estimator belonging to the class of sample smoothing estimators. Consistency of the smoothed bootstrap then follows for regular functions of the marginal means. Two simple simulation studies show how the smoothed versions of the bootstrap give better results than the classical technique.
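
A univariate sketch of the empirical Mallows (Wasserstein-2) distance between equal-size samples, used here only to illustrate the metric on classical versus smoothed bootstrap resamples; the multivariate estimators and convergence results of the paper are not reproduced, and the bandwidth is a rule-of-thumb assumption.

```python
# Sketch: empirical Mallows-2 distance between two equal-size samples.
import numpy as np

def mallows_2(a, b):
    # For equal-size univariate samples, the Mallows-2 distance between the
    # empirical distributions is the RMS difference of the order statistics.
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

rng = np.random.default_rng(15)
x = rng.normal(0, 1, 300)

classical = rng.choice(x, size=len(x), replace=True)           # classical bootstrap
h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
smoothed = classical + rng.normal(0, h, len(x))                # smoothed bootstrap

print("Mallows-2, classical bootstrap vs data:", round(mallows_2(classical, x), 3))
print("Mallows-2, smoothed bootstrap vs data:", round(mallows_2(smoothed, x), 3))
```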

16.
We derive analytic expressions for the biases of the maximum likelihood estimators of the scale parameter in the half-logistic distribution with known location, and of the location parameter when the latter is unknown. Using these expressions to bias-correct the estimators is highly effective, without adverse consequences for estimation mean squared error. The overall performance of the first of these bias-corrected estimators is slightly better than that of a bootstrap bias-corrected estimator. The bias-corrected estimator of the location parameter significantly outperforms its bootstrap-based counterpart. Taking computational costs into account, the analytic bias corrections clearly dominate the use of the bootstrap.

17.
We consider two approaches for bias evaluation and reduction in the proportional hazards model proposed by Cox. The first is an analytical approach in which we derive the n^(-1) bias term of the maximum partial likelihood estimator. The second approach consists of resampling methods, namely the jackknife and the bootstrap. We compare all methods through a comprehensive set of Monte Carlo simulations. The results suggest that bias-corrected estimators have better finite-sample performance than the standard maximum partial likelihood estimator. There is some evidence of the superiority of the bootstrap correction over the jackknife correction, but its performance is similar to that of the analytical estimator. Finally, an application illustrates the proposed approaches.
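
A generic sketch of the jackknife bias correction, one of the resampling approaches compared above, applied to a toy exponential-rate MLE rather than to the Cox partial likelihood estimator.

```python
# Sketch: jackknife bias correction,
# theta_jack = n * theta_hat - (n - 1) * mean of leave-one-out estimates.
import numpy as np

rng = np.random.default_rng(16)
x = rng.exponential(scale=1 / 2.0, size=25)        # true rate = 2
n = len(x)

theta_hat = 1.0 / x.mean()                                       # full-sample MLE
loo = np.array([1.0 / np.delete(x, i).mean() for i in range(n)]) # leave-one-out MLEs
theta_jack = n * theta_hat - (n - 1) * loo.mean()

print("MLE:", round(theta_hat, 3), " jackknife bias-corrected:", round(theta_jack, 3))
```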

18.
Consider the problem of estimating the intra-class correlation coefficient of a symmetric normal distribution. In a recent article (Pal and Lim, 1999) it has been shown that the three popular estimators, namely the maximum likelihood estimator (MLE), the method of moments estimator (MME) and the unique minimum variance unbiased estimator (UMVUE), are second-order admissible under the squared error loss function. In this paper we study the performance of the above-mentioned estimators in terms of the Pitman Nearness Criterion (PNC) as well as the Stochastic Domination Criterion (SDC). We then apply these estimators to two real-life data sets with moderate to large sample sizes, and bootstrap bias as well as mean squared errors are computed to compare the estimators. In terms of overall performance, the MME seems the most appealing among the three estimators considered here, and this is the main contribution of our paper.
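
A sketch of the Pitman Nearness Criterion computed by simulation, with a toy normal-variance problem standing in for the intra-class correlation estimators of the abstract: PNC is the proportion of samples in which one estimator is strictly closer to the true value than the other.

```python
# Sketch: Pitman Nearness Criterion by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(13)
true_var, n, reps = 4.0, 15, 5000

closer = 0
for _ in range(reps):
    x = rng.normal(0, np.sqrt(true_var), n)
    mle = np.var(x)                                  # divisor n
    unbiased = np.var(x, ddof=1)                     # divisor n - 1
    closer += abs(mle - true_var) < abs(unbiased - true_var)

print("PNC of the MLE relative to the unbiased estimator:", closer / reps)
```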

19.
Problems with censored data arise quite frequently in reliability applications, and estimation of the reliability function is usually of concern. Reliability function estimators proposed by Kaplan and Meier (1958) and Breslow (1972) are generally used when dealing with censored data. These estimators have the known properties of being asymptotically unbiased, uniformly strongly consistent, and weakly convergent to the same Gaussian process when properly normalized. In this paper we study the properties of the smoothed Kaplan-Meier estimator with a suitable kernel function. The smooth estimator is compared with the Kaplan-Meier and Breslow estimators for large sample sizes, giving an exact expression for an appropriately normalized difference of the mean squared errors (MSE) of the two estimators. This quantifies the deficiency of the Kaplan-Meier estimator in comparison with the smoothed version. We also obtain a non-asymptotic bound on an expected L1-type error under weak conditions. Some simulations are carried out to examine the performance of the suggested method.
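
A sketch of a kernel-smoothed Kaplan-Meier survival estimate under assumed exponential event and censoring times: the ordinary product-limit estimator is computed first and its jumps are then smoothed with a Gaussian kernel. The bandwidth is an arbitrary choice, and none of the paper's MSE comparisons are reproduced.

```python
# Sketch: Kaplan-Meier survival estimate and a kernel-smoothed version of it.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(14)
n = 200
t_event = rng.exponential(2.0, n)
t_cens = rng.exponential(3.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = n - np.arange(n)
km = np.cumprod(1.0 - event / at_risk)              # Kaplan-Meier survival at each time

# Jumps of the Kaplan-Meier distribution function at the observed times.
jumps = -np.diff(np.concatenate([[1.0], km]))

def smoothed_survival(t, h=0.4):
    return 1.0 - np.sum(jumps * norm.cdf((t - time) / h))

for t in (0.5, 1.0, 2.0):
    km_at_t = km[np.searchsorted(time, t, side="right") - 1]
    print(t, round(km_at_t, 3), round(smoothed_survival(t), 3))
```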

20.
In survival studies, current status data are frequently encountered when some individuals in a study are not successively observed. This paper considers the problem of simultaneous variable selection and parameter estimation in the high-dimensional continuous generalized linear model with current status data. We apply the penalized likelihood procedure with the smoothly clipped absolute deviation (SCAD) penalty to select significant variables and estimate the corresponding regression coefficients. With a proper choice of tuning parameters, the resulting estimator is shown to be root-(n/pn)-consistent under some mild conditions. In addition, we show that the resulting estimator has the same asymptotic distribution as the estimator obtained when the true model is known. The finite-sample behavior of the proposed estimator is evaluated through simulation studies and a real example.
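
A sketch of the SCAD penalty itself, with the conventional choice a = 3.7; the penalized-likelihood optimization for current status data is not shown.

```python
# Sketch: the smoothly clipped absolute deviation (SCAD) penalty p_lambda(theta).
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    t = np.abs(np.asarray(theta, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    pen = np.empty_like(t)
    pen[small] = lam * t[small]                                       # linear part
    pen[mid] = (2 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    pen[~(small | mid)] = lam ** 2 * (a + 1) / 2                      # constant tail
    return pen

print(scad_penalty([0.05, 0.2, 0.5, 2.0], lam=0.1))
```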
