Similar Literature
20 similar records found.
1.
We are concerned with estimators that improve upon the best invariant estimator of a location parameter θ. If the loss function is L(θ − a) with L convex, we give sufficient conditions for the inadmissibility of δ_0(X) = X. If the loss is a weighted sum of squared errors, we find various classes of estimators δ that are better than δ_0. In general, δ is the convolution of δ_1 (an estimator that improves upon δ_0 outside of a compact set) with a suitable probability density in R^p. The critical dimension for inadmissibility depends on the estimator δ_1. We also give several examples of estimators δ obtained in this way and state some open problems.

2.
In biostatistical applications interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed point in time, then the data are described by the well understood singly censored current status model, also known as interval censored data, case I. Jewell et al. (1994) extended this current status model by allowing the initial time to be unobserved, but with its distribution over an observed interval [A, B] known to be uniform; the data are referred to as doubly censored current status data. These authors used the model to handle applications in AIDS partner studies, focusing on the NPMLE of the distribution G of T. The model is a submodel of the current status model, but the distribution G is essentially the derivative of the distribution of interest F in the current status model. In this paper we establish that the NPMLE of G is uniformly consistent and that the resulting estimators of the n^{1/2}-estimable parameters are efficient. We propose an iterative weighted pool-adjacent-violators algorithm to compute the estimator. It is also shown that, without smoothness assumptions, the NPMLE of F converges at rate n^{-2/5} in L_2-norm, while the NPMLE of F in the non-parametric current status data model converges at rate n^{-1/3} in L_2-norm, which shows that there is a substantial gain in using the submodel information.
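As a concrete illustration of the pool-adjacent-violators building block that such iterative weighted algorithms rely on, here is a minimal sketch (the function name and toy data are hypothetical; the reweighting loop specific to the doubly censored NPMLE is not reproduced):

```python
import numpy as np

def weighted_pava(y, w):
    """Weighted pool-adjacent-violators: non-decreasing fit minimizing
    sum w_i * (y_i - m_i)^2. Returns the fitted values."""
    means, weights, sizes = [], [], []   # one entry per merged block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); sizes.append(1)
        # merge blocks while the monotonicity constraint is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, s2 = means.pop(), weights.pop(), sizes.pop()
            m1, w1, s1 = means.pop(), weights.pop(), sizes.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt); sizes.append(s1 + s2)
    # expand block means back to the original grid
    return np.concatenate([np.full(s, m) for m, s in zip(means, sizes)])

# toy usage
y = np.array([0.1, 0.5, 0.3, 0.7, 0.6, 0.9])
w = np.ones_like(y)
print(weighted_pava(y, w))
```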

3.
Abstract.  The supremum difference between the cumulative sum diagram and its greatest convex minorant (GCM), in the case of non-parametric isotonic regression, is considered. When the regression function is strictly increasing, and the design points are unequally spaced but approximate a positive density at even a slow rate (n^{-1/3}), the difference is shown to shrink at a very rapid rate (close to n^{-2/3}). The result is analogous to the corresponding result for monotone density estimation established by Kiefer and Wolfowitz, but uses an entirely different representation. The limit distribution of the GCM as a process on the unit interval is obtained when the design variables are i.i.d. with a positive density. Finally, a pointwise asymptotic normality result is proved for the smooth monotone estimator obtained by convolving a kernel with the classical monotone estimator.
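A minimal sketch of how the greatest convex minorant of a cumulative sum diagram can be computed, and the supremum difference discussed above evaluated (the design, noise level and helper names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def gcm(x, y):
    """Greatest convex minorant of the points (x_i, y_i), evaluated at the x_i.
    Its left derivatives give the isotonic fit."""
    hull = [0]                      # indices of GCM vertices
    for i in range(1, len(x)):
        hull.append(i)
        # drop the middle vertex while the three slopes violate convexity
        while len(hull) >= 3:
            i0, i1, i2 = hull[-3], hull[-2], hull[-1]
            if (y[i1] - y[i0]) * (x[i2] - x[i1]) <= (y[i2] - y[i1]) * (x[i1] - x[i0]):
                break
            del hull[-2]
    # piecewise-linear interpolation through the remaining vertices
    return np.interp(x, x[hull], y[hull])

# cumulative sum diagram for a toy increasing-regression sample and its GCM
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(size=200))
obs = t**2 + 0.1 * rng.standard_normal(200)   # increasing trend plus noise
x = np.arange(1, 201) / 200.0
y = np.cumsum(obs) / 200.0
sup_diff = np.max(y - gcm(x, y))               # supremum distance studied above
print(sup_diff)
```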

4.
Abstract.  We consider the problem of estimating a compactly supported density, taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölder density with α ∈ (0, 1], then the optimal convergence rate n^{-α/(2α+1)} is obtained up to a logarithmic factor. By smoothing the histograms into polygons, a continuous piecewise-linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains up to a logarithmic factor a rate comparable to the n^{-4/5} convergence rate for the integrated mean squared error of kernel-type density estimators.
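A minimal sketch of the histogram-to-polygon smoothing step in its generic form, i.e. a frequency polygon rather than the Dirichlet-mixture posterior itself (bin count and data are illustrative assumptions):

```python
import numpy as np

def frequency_polygon(data, bins, a=0.0, b=1.0):
    """Histogram density on [a, b] smoothed into a continuous
    piecewise-linear (polygon) estimate, evaluated on a fine grid."""
    counts, edges = np.histogram(data, bins=bins, range=(a, b), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # linear interpolation between bin mid-points; flat beyond the outer mid-points
    grid = np.linspace(a, b, 512)
    return grid, np.interp(grid, centers, counts)

rng = np.random.default_rng(1)
x = rng.beta(2.0, 5.0, size=500)          # a strictly positive density on [0, 1]
grid, est = frequency_polygon(x, bins=20)
print(est[:5])
```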

5.
Abstract.  Consider the model Y = β'X + ε. Let F_0 be the unknown cumulative distribution function of the random variable ε. Consistency of the semi-parametric maximum likelihood estimator of (β, F_0), denoted here by (β̂_n, F̂_n), has not been established under any interval censorship (IC) model. We prove in this paper that (β̂_n, F̂_n) is consistent under the mixed case IC model and some mild assumptions.

6.
Let F and G be lifetime distributions and consider the problem of estimating F^{-1} when it is known that G^{-1}F is star-shaped. Estimators of F^{-1} are considered here and are shown to be uniformly strongly consistent. The case of censored data is also presented. Asymptotic confidence intervals and bands for F^{-1} are provided. The results are applicable, for example, to the estimation of quantile functions of k-out-of-n systems in reliability. The special case of an IFRA distribution follows immediately from the more general case presented here.

7.
Strategies for improving fixed non-negative kernel estimators have focused on reducing the bias, either by employing higher-order kernels or by adjusting the bandwidth locally. Intuitively, bandwidths in the tails should be relatively larger in order to reduce wiggles, since less data are available there. We show that in regions where the density function is convex it is theoretically possible to find local bandwidths such that the pointwise bias is exactly zero. The corresponding pointwise mean squared error then converges at the parametric rate O(n^{-1}) rather than the slower O(n^{-4/5}). These so-called zero-bias bandwidths are constant and are usually orders of magnitude larger than the optimal locally adaptive bandwidths predicted by asymptotic mean squared error analysis. We describe data-based algorithms for estimating zero-bias bandwidths over intervals where the density is convex. We find that our particular density estimator attains the usual O(n^{-4/5}) rate. However, we demonstrate that the algorithms can provide significant improvement in mean squared error, often visibly superior curves, and a new operating point in the usual bias-variance trade-off.
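For orientation, a minimal sketch of a pointwise locally adaptive ("balloon") kernel density estimate of the kind being tuned here; the bandwidth rule below is an arbitrary placeholder, not the zero-bias choice derived in the paper:

```python
import numpy as np

def local_bandwidth_kde(x_eval, data, h_fun):
    """Gaussian kernel density estimate with an evaluation-point-dependent
    bandwidth h_fun(x)."""
    x_eval = np.atleast_1d(x_eval)
    out = np.empty_like(x_eval, dtype=float)
    for j, x in enumerate(x_eval):
        h = h_fun(x)
        u = (x - data) / h
        out[j] = np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h))
    return out

rng = np.random.default_rng(2)
data = rng.standard_normal(1000)
# placeholder rule: larger bandwidth in the (convex) tails, smaller in the centre
h_fun = lambda x: 0.3 + 0.2 * abs(x)
print(local_bandwidth_kde([-3.0, 0.0, 3.0], data, h_fun))
```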

8.
This paper considers the nonparametric deconvolution problem when the true density function is left (or right) truncated. We propose to remove the boundary effect of the conventional deconvolution density estimator by using a special class of kernels: the deconvolution boundary kernels. Methods for constructing such kernels are provided. The mean squared error properties, including the rates of convergence, are investigated for supersmooth and ordinary smooth errors. Numerical simulations show that the deconvolution boundary kernel estimator successfully removes the boundary effects of the conventional deconvolution density estimator.
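For reference, a minimal sketch of the conventional (unmodified) deconvolution kernel density estimator that the boundary kernels are designed to improve, assuming a known Gaussian measurement-error density; the sinc-type kernel, bandwidth and quadrature grid are illustrative choices, not the paper's construction:

```python
import numpy as np

def deconv_kde(x_eval, w, h, sigma):
    """Deconvolution kernel density estimate from contaminated data
    W = X + eps, eps ~ N(0, sigma^2), using the sinc kernel (characteristic
    function 1 on [-1, 1]) and simple numerical quadrature."""
    t = np.linspace(-1.0 / h, 1.0 / h, 2001)      # frequencies where phi_K(h t) = 1
    dt = t[1] - t[0]
    # empirical characteristic function of the contaminated observations
    ecf = np.exp(1j * np.outer(t, w)).mean(axis=1)
    # divide out the (Gaussian) error characteristic function
    ratio = ecf / np.exp(-0.5 * (sigma * t) ** 2)
    x_eval = np.atleast_1d(x_eval)
    return np.real(np.exp(-1j * np.outer(x_eval, t)) @ ratio) * dt / (2 * np.pi)

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
w = x + 0.3 * rng.standard_normal(500)            # contaminated sample
print(deconv_kde([0.0, 1.0], w, h=0.4, sigma=0.3))
```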

9.
Abstract.  In this paper, we consider a stochastic volatility model (Y_t, V_t), where the volatility (V_t) is a positive stationary Markov process. We assume that (ln V_t) admits a stationary density f that we want to estimate. Only the price process Y_t is observed, at n discrete times with regular sampling interval Δ. We propose a non-parametric estimator of f obtained by a penalized projection method. Under mixing assumptions on (V_t), we derive bounds for the quadratic risk of the estimator. Assuming that Δ = Δ_n tends to 0 while the number of observations and the length of the observation time tend to infinity, we discuss the rate of convergence of the risk. Examples of models included in this framework are given.

10.
We use Owen's (1988, 1990) empirical likelihood method in upgraded mixture models. Two groups of independent observations are available. One is z_1, ..., z_n, observed directly from a distribution F(z). The other is x_1, ..., x_m, observed indirectly from F(z), where the x_i have density ∫ p(x|z) dF(z) and p(x|z) is a conditional density function. We are interested in testing H_0: p(x|z) = p(x|z; θ) for some specified smooth density function. A semiparametric likelihood-ratio-based statistic is proposed and shown to converge to a chi-squared distribution. This is a simple method for goodness-of-fit tests, especially when x is a discrete variable with finitely many values. In addition, we discuss estimation of θ and F(z) when H_0 is true. The connection between upgraded mixture models and general estimating equations is pointed out.

11.
It is shown that the classical Wicksell problem is related to a deconvolution problem in which the convolution kernel is unbounded, convex and decreasing on (0, ∞). For this type of deconvolution problem, the usual non-parametric maximum likelihood estimator of the distribution function is shown not to exist. A sieved maximum likelihood estimator is defined, and some algorithms that can be used to compute this estimator are described. Moreover, this estimator is proved to be strongly consistent.

12.
Beta-Bernstein Smoothing for Regression Curves with Compact Support
ABSTRACT. The problem of boundary bias is associated with kernel estimation for regression curves with compact support. This paper proposes a simple and unified approach for remedying boundary bias in non-parametric regression, without dividing the compact support into interior and boundary areas and without applying explicitly different smoothing treatments separately. The approach uses the beta family of density functions as kernels. The shapes of the kernels vary according to the position at which the curve estimate is made. They are symmetric at the middle of the support interval, and become more and more asymmetric nearer the boundary points. The kernels never put any weight outside the data support interval, and thus avoid boundary bias. The method is a generalization of the classical Bernstein polynomials, one of the earliest methods of statistical smoothing. The proposed estimator has optimal mean integrated squared error of order n^{-4/5}, equivalent to that of standard kernel estimators when the curve has unbounded support.
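A minimal sketch of a beta-kernel smoother of the kind described, written here as a Nadaraya–Watson-type regression estimate on [0, 1]; the smoothing parameter, data and function name are illustrative assumptions:

```python
import numpy as np
from scipy.stats import beta

def beta_kernel_regression(x_eval, x, y, b):
    """Nadaraya-Watson-type estimate with beta kernels on [0, 1]:
    the kernel at evaluation point x is Beta(x/b + 1, (1 - x)/b + 1),
    so its shape adapts automatically near the boundaries."""
    x_eval = np.atleast_1d(x_eval)
    est = np.empty_like(x_eval, dtype=float)
    for j, xe in enumerate(x_eval):
        w = beta.pdf(x, xe / b + 1.0, (1.0 - xe) / b + 1.0)
        est[j] = np.sum(w * y) / np.sum(w)
    return est

rng = np.random.default_rng(4)
x = rng.uniform(size=300)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(300)
print(beta_kernel_regression([0.0, 0.5, 1.0], x, y, b=0.05))
```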

13.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimator can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimates in the tails that can occur with modified kernels. The weighted kernel approach generalizes to multivariate deconvolution density estimation in a very straightforward manner.
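A minimal sketch of the constrained (and optionally ridge-regularized) weight-selection step in generic form; here A and b merely stand in for the matrix of kernel contributions and the standard kernel estimate evaluated on a grid, and are not computed as in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def select_weights(A, b, ridge=0.0):
    """Choose weights w >= 0 with sum(w) = 1 minimizing
    ||A w - b||^2 + ridge * ||w||^2."""
    n = A.shape[1]
    obj = lambda w: np.sum((A @ w - b) ** 2) + ridge * np.sum(w ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, None)] * n
    w0 = np.full(n, 1.0 / n)
    res = minimize(obj, w0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# toy usage with a random stand-in for the kernel-contribution matrix
rng = np.random.default_rng(5)
A = rng.uniform(size=(50, 10))
b = A @ np.full(10, 0.1) + 0.01 * rng.standard_normal(50)
print(select_weights(A, b, ridge=1e-3).round(3))
```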

14.
We consider a one-dimensional ergodic diffusion process X with drift b(x, θ) and diffusion coefficient a(x, θ) depending on an unknown parameter θ that may be multidimensional. We are interested in the estimation of θ and have at our disposal, for that purpose, a discretized trajectory observed at n equidistant times t_i = iΔ, i = 0, ..., n. We study a particular class of estimating functions of the form ∑ f(θ, X_{t_{i-1}}) which, under the assumption that the integral of f with respect to the invariant measure is zero, provide a consistent and asymptotically normal estimator. We determine the choice of f that yields the estimator with minimum asymptotic variance within the class and indicate how to construct explicit estimating functions based on the generator of the diffusion. Finally, the theoretical study is complemented with simulations.
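A minimal sketch of such an estimating-equation estimator under purely illustrative assumptions: an Ornstein–Uhlenbeck-type diffusion dX_t = −θ X_t dt + dW_t, for which f(θ, x) = θx² − 1/2 integrates to zero against the invariant N(0, 1/(2θ)) law; this f is only an example, not the minimum-variance choice derived in the paper:

```python
import numpy as np
from scipy.optimize import brentq

# illustrative diffusion: dX_t = -theta * X_t dt + dW_t, simulated by Euler steps
rng = np.random.default_rng(6)
theta_true, delta, n = 1.5, 0.05, 5000
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    x[i + 1] = x[i] - theta_true * x[i] * delta + np.sqrt(delta) * rng.standard_normal()

# estimating function with zero integral under the invariant N(0, 1/(2*theta)) law:
# f(theta, x) = theta * x**2 - 1/2
def score(theta, path):
    return np.sum(theta * path[:-1] ** 2 - 0.5)

theta_hat = brentq(score, 1e-3, 50.0, args=(x,))
print(theta_hat)   # close to theta_true for long, finely sampled paths
```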

15.
Abstract.  We focus on a class of non-standard problems involving non-parametric estimation of a monotone function, characterized by an n^{1/3} rate of convergence of the maximum likelihood estimator, non-Gaussian limit distributions, and the non-existence of √n-regular estimators. We have shown elsewhere that under a null hypothesis of the type ψ(z_0) = θ_0 (ψ being the monotone function of interest), in non-standard problems of the above kind the likelihood ratio statistic has a 'universal' limit distribution that is free of the underlying parameters in the model. In this paper, we illustrate its limiting behaviour under local alternatives of the form ψ_n(z), where ψ_n(·) and ψ(·) vary in O(n^{-1/3}) neighbourhoods around z_0 and ψ_n converges to ψ at rate n^{1/3} in an appropriate metric. Apart from local alternatives, we also consider the behaviour of the likelihood ratio statistic under fixed alternatives and establish the convergence in probability of an appropriately scaled version of it to a constant involving a Kullback–Leibler distance.

16.
This paper characterizes the family of Normal distributions within the class of exponential families of distributions via the structure of the bias of the maximum likelihood estimator Θ̂_n of the canonical parameter Θ. More specifically, when E_Θ(Θ̂_n) − Θ = (1/n) Q(Θ) + o(1/n), the equality Q(Θ) = 0 proves to be a property of the Normal distribution only. The same conclusion is obtained in the one-dimensional case by assuming that Q(Θ) is a polynomial in Θ.

17.
Abstract. The problem of estimating an unknown density function has been widely studied. In this article, we present a convolution estimator for the density of the responses in a nonlinear heterogeneous regression model. The rate of convergence for the mean squared error of the convolution estimator is of order n^{-1} under certain regularity conditions, which is faster than the rate for the kernel density method. We derive explicit expressions for the asymptotic variance and the bias of the new estimator, and a data-driven bandwidth selector is proposed. We conduct simulation experiments to check the finite-sample properties; the convolution estimator performs substantially better than the kernel density estimator for well-behaved noise densities.
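A minimal sketch of one simple convolution-type estimator of the response density, under an assumed homoscedastic model Y = m(X) + ε with m fitted parametrically; the quadratic model, bandwidth and sample are illustrative assumptions rather than the article's construction:

```python
import numpy as np

def convolution_density(y_eval, fitted, resid, h):
    """Convolution-type estimate of the response density: the empirical law of
    the fitted values convolved with a Gaussian-kernel estimate of the residual
    density (a double average of kernels)."""
    y_eval = np.atleast_1d(y_eval)
    u = (y_eval[:, None, None] - fitted[None, :, None] - resid[None, None, :]) / h
    return np.mean(np.exp(-0.5 * u**2), axis=(1, 2)) / (np.sqrt(2 * np.pi) * h)

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 400)
y = 1.0 + 2.0 * x**2 + 0.3 * rng.standard_normal(400)   # illustrative model
coef = np.polyfit(x, y, deg=2)                            # parametric fit
fitted = np.polyval(coef, x)
resid = y - fitted
print(convolution_density([1.0, 2.0, 3.0], fitted, resid, h=0.2))
```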

18.
Abstract.  Given n independent and identically distributed observations in a set G = {(x, y) ∈ [0, 1]^p × R : 0 ≤ y ≤ g(x)} with an unknown function g, called a boundary or frontier, it is desired to estimate g from the observations. The problem has several important applications, including classification and cluster analysis, and is closely related to edge estimation in image reconstruction. The convex-hull estimator of a boundary or frontier is also very popular in econometrics, where it is a cornerstone of a method known as 'data envelopment analysis' (DEA). In this paper, we give a large-sample approximation of the distribution of the convex-hull estimator in the general case where p ≥ 1. We discuss ways of using the large-sample approximation to correct the bias of the convex-hull and DEA estimators and to construct confidence intervals for the true function.
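A minimal sketch, for p = 1, of a DEA-type frontier estimate obtained from convex combinations of the observations (the linear-programming formulation below is a standard textbook version, and the data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def dea_frontier(x0, x, y):
    """DEA-type estimate of the frontier g at x0: the largest value sum(lam * y)
    attainable by a convex combination of the observations whose input
    sum(lam * x) does not exceed x0 (p = 1 illustration)."""
    n = len(x)
    res = linprog(
        c=-y,                                   # maximize sum(lam * y)
        A_ub=x.reshape(1, n), b_ub=[x0],        # sum(lam * x) <= x0
        A_eq=np.ones((1, n)), b_eq=[1.0],       # convex combination
        bounds=[(0, None)] * n,
        method="highs",
    )
    return -res.fun

rng = np.random.default_rng(8)
x = rng.uniform(size=200)
g = np.sqrt(x)                                  # illustrative concave frontier
y = g * rng.uniform(size=200)                   # points below the frontier
print([round(dea_frontier(x0, x, y), 3) for x0 in (0.25, 0.5, 0.75)])
```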

19.
Summary.  We consider the problem of estimating the noise variance in homoscedastic nonparametric regression models. For low-dimensional covariates t ∈ R^d, d = 1, 2, difference-based estimators have been investigated in a series of papers. For a given length of such an estimator, difference schemes that minimize the asymptotic mean squared error can be computed for d = 1 and d = 2. However, numerical studies show that for finite sample sizes the performance of these estimators may be deficient owing to a large finite-sample bias. We provide theoretical support for these findings. In particular, we show that with increasing dimension d this becomes more drastic; if d ≥ 4, these estimators even fail to be consistent. A different class of estimators is discussed which allows better control of the bias and remains consistent when d ≥ 4. These estimators are compared numerically with kernel-type estimators (which are asymptotically efficient), and some guidance is given about when their use becomes necessary.
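To make the idea concrete, a minimal sketch of the simplest first-order difference-based variance estimator for d = 1 (a generic textbook form, not the optimized difference schemes discussed above):

```python
import numpy as np

def first_order_difference_variance(y):
    """Estimate the noise variance in Y_i = m(t_i) + eps_i from ordered design
    points via squared first differences; the factor 1/2 corrects for
    differencing two independent errors."""
    d = np.diff(y)
    return np.sum(d ** 2) / (2.0 * (len(y) - 1))

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 500)
y = np.sin(3 * t) + 0.25 * rng.standard_normal(500)   # true variance 0.0625
print(first_order_difference_variance(y))
```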

20.
Summary.  The paper discusses the estimation of an unknown population size n. Suppose that an identification mechanism can identify n_obs cases. The Horvitz–Thompson estimator of n adjusts this number by the inverse of 1 − p_0, where the latter is the probability of not identifying a case. When repeated counts of identifying the same case are available, we can use the counting distribution for estimating p_0 to solve the problem. Frequently, the Poisson distribution is used and, more recently, mixtures of Poisson distributions. Maximum likelihood estimation is discussed by means of the EM algorithm. For truncated Poisson mixtures, a nested EM algorithm is suggested and illustrated for several application cases. The algorithmic principles are used to show an inequality stating that the Horvitz–Thompson estimator of n under the mixed Poisson model is always at least as large as the estimator under a homogeneous Poisson model. In turn, if the homogeneous Poisson model is misspecified, it will, potentially strongly, underestimate the true population size. Examples from various areas illustrate this finding.
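A minimal sketch of the homogeneous-Poisson version of this Horvitz–Thompson adjustment: fit a zero-truncated Poisson to the repeat-identification counts, estimate p_0 = exp(−λ̂), and inflate n_obs (the counts and function name are illustrative; the mixture/EM machinery above is not reproduced):

```python
import numpy as np
from scipy.optimize import brentq

def horvitz_thompson_poisson(counts):
    """Population-size estimate from repeated identification counts (all >= 1)
    under a homogeneous Poisson model: fit the zero-truncated Poisson by MLE,
    then inflate n_obs by 1 / (1 - p0) with p0 = exp(-lambda_hat)."""
    counts = np.asarray(counts, dtype=float)
    ybar = counts.mean()
    # MLE of lambda solves lambda / (1 - exp(-lambda)) = ybar
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - ybar, 1e-8, 100.0)
    p0 = np.exp(-lam)
    return len(counts) / (1.0 - p0)

# illustrative data: how many times each of the observed cases was identified
counts = [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5
print(horvitz_thompson_poisson(counts))
```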
