Similar Articles
20 similar articles found (search time: 140 ms)
1.
Stationary renewal point processes are defined by the probability distribution of the distances between successive points (lifetimes), which are independent and identically distributed random variables. For some applications it is also of interest to characterize a renewal process by its renewal density. Well-known expressions give this density in terms of the probability density of the lifetimes. It is more difficult to solve the inverse problem: determining the density of the lifetimes from the renewal density. Theoretical relations between their Laplace transforms are available, but inverting these transforms in closed form is often very difficult. We show that this is possible for renewal processes with a dead-time property, characterized by the renewal density being zero on an interval that includes the origin. We present the principle of a recursive method for solving this problem and apply it to some processes with input dead-time. Computer simulations of Poisson and Erlang(2) processes show good agreement between theoretical calculations and measurements on simulated data.
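As a concrete illustration (a simulation sketch, not the authors' recursive method), the behaviour of the Erlang(2) renewal density near the origin can be checked by simulating the process and histogramming the forward distances from each point to the later points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Erlang(2) lifetimes: sum of two independent exponential(1) stages,
# so the lifetime density, and hence the renewal density, vanishes at 0.
lifetimes = rng.gamma(shape=2.0, scale=1.0, size=20_000)
points = np.cumsum(lifetimes)

# Empirical renewal density: for each reference point, histogram the
# lags to subsequent points within a window, normalised per point.
window, nbins = 10.0, 100
edges = np.linspace(0.0, window, nbins + 1)
counts = np.zeros(nbins)
m = 5000  # reference points, leaving plenty of process beyond the window
for i in range(m):
    lags = points[i + 1:i + 60] - points[i]   # next 60 points span far past the window
    counts += np.histogram(lags[lags < window], bins=edges)[0]
h = counts / (m * (edges[1] - edges[0]))
# Theory for this process: h(t) = 0.5 * (1 - exp(-2t)), so h is near zero
# close to the origin and approaches the mean rate 1/E[lifetime] = 0.5.
```

The small values of `h` in the first bins reflect the dead-time-like behaviour discussed in the abstract.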

2.
We consider the problem of estimating the mean and variance of the time between occurrences of an event of interest (inter-occurrence times), allowing some forms of dependence between two consecutive time intervals. Two basic density functions are considered: the Weibull and the generalised exponential. To capture the dependence between two consecutive inter-occurrence times, we assume that the shape and/or scale parameters of the two density functions follow auto-regressive models. Expressions for the mean and variance of the inter-occurrence times are presented. The models are applied to ozone data from two regions of Mexico City. The parameters are estimated from a Bayesian point of view via Markov chain Monte Carlo (MCMC) methods.

3.
Smoothed nonparametric kernel spectral density estimates are considered for stationary data observed on a d-dimensional lattice. The implications for edge effect bias of the choice of kernel and bandwidth are considered. Under some circumstances the bias can be dominated by the edge effect. We show that this problem can be mitigated by tapering. Some extensions and related issues are discussed.

4.
Estimating a curve nonparametrically from data measured with error is a difficult problem that has been studied by many authors. Constructing a consistent estimator in this context can sometimes be quite challenging, and in this paper we review some of the tools that have been developed in the literature for kernel-based approaches, founded on the Fourier transform and a more general unbiased score technique. We use those tools to rederive some of the existing nonparametric density and regression estimators for data contaminated by classical or Berkson errors, and discuss how to compute these estimators in practice. We also review some mistakes made by those working in the area, and highlight a number of problems with the existing R package decon.
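A textbook sketch of the Fourier-transform route mentioned above (not the specific estimators rederived in the paper): for classical error with a known Laplace distribution, the deconvolution kernel density estimate divides the empirical characteristic function of the contaminated data by the error's characteristic function, damped by a kernel whose Fourier transform has compact support.

```python
import numpy as np

def deconv_kde(y, x_grid, h, sigma):
    """Fourier-inversion deconvolution KDE for y = x + eps with
    eps ~ Laplace(0, sigma). The kernel's Fourier transform is
    (1 - t^2)^3 on [-1, 1], so the inversion integral is finite."""
    t = np.linspace(-1.0, 1.0, 401)
    u = t / h
    k_ft = (1.0 - t**2) ** 3
    inv_phi_eps = 1.0 + sigma**2 * u**2   # 1 / Laplace characteristic function
    du = u[1] - u[0]
    est = np.empty(len(x_grid))
    for j, x0 in enumerate(x_grid):
        # empirical characteristic function, shifted to x0
        ecf = np.exp(1j * u * (y[:, None] - x0)).mean(axis=0)
        est[j] = (ecf * k_ft * inv_phi_eps).real.sum() * du / (2.0 * np.pi)
    return np.maximum(est, 0.0)

rng = np.random.default_rng(2)
n = 1000
x_true = rng.normal(0.0, 1.0, n)
y = x_true + rng.laplace(0.0, 0.4, n)     # contaminated observations
grid = np.linspace(-4.0, 4.0, 41)
f_hat = deconv_kde(y, grid, h=0.45, sigma=0.4)
```

The factor `inv_phi_eps` is what distinguishes this from an ordinary kernel estimate applied naively to the contaminated data.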

5.
We develop Bayesian models for density regression with emphasis on discrete outcomes. The problem of density regression is approached by considering methods for multivariate density estimation of mixed scale variables, and obtaining conditional densities from the multivariate ones. The approach to multivariate mixed scale outcome density estimation that we describe represents discrete variables, either responses or covariates, as discretised versions of continuous latent variables. We present and compare several models for obtaining these thresholds in the challenging context of count data analysis where the response may be over- and/or under-dispersed in some of the regions of the covariate space. We utilise a nonparametric mixture of multivariate Gaussians to model the directly observed and the latent continuous variables. The paper presents a Markov chain Monte Carlo algorithm for posterior sampling, sufficient conditions for weak consistency, and illustrations on density, mean and quantile regression utilising simulated and real datasets.

6.
We consider the problem of estimation of a density function in the presence of incomplete data and study the Hellinger distance between our proposed estimators and the true density function. Here, the presence of incomplete data is handled by utilizing a Horvitz–Thompson-type inverse weighting approach, where the weights are the estimates of the unknown selection probabilities. We also address the problem of estimation of a regression function with incomplete data.
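The inverse-weighting idea can be sketched in a few lines (an illustration assuming the selection probabilities are known, whereas the paper estimates them):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000
x = rng.normal(0.0, 1.0, n)
# Selection depends on the value itself: larger values are more
# likely to be observed, so the observed sample is biased rightward.
p = 1.0 / (1.0 + np.exp(-x))
observed = rng.uniform(size=n) < p

def ht_kde(x_obs, p_obs, grid, h):
    """Horvitz-Thompson-type weighted KDE: each observed point is
    weighted by 1/p to stand in for points of its kind that went
    unobserved."""
    w = 1.0 / p_obs
    w = w / w.sum()
    u = (grid[:, None] - x_obs[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (k * w).sum(axis=1) / h

grid = np.linspace(-4.0, 4.0, 81)
f_ht = ht_kde(x[observed], p[observed], grid, h=0.3)
f_naive = ht_kde(x[observed], np.ones(observed.sum()), grid, h=0.3)
# f_naive estimates the density of the *observed* values, pulled to
# the right; the weighted estimate recovers the symmetric N(0, 1) shape.
```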

7.
The estimation of a multivariate function from a stationary m-dependent process is investigated, with special focus on the case where m is large or unbounded. We develop an adaptive estimator based on wavelet methods. Under flexible assumptions on the nonparametric model, we establish the good performance of our estimator by determining sharp rates of convergence under two error criteria: the pointwise mean squared error and the mean integrated squared error. We illustrate the theoretical results on the multivariate density estimation problem, the density-derivative estimation problem, the density estimation problem in a GARCH-type model and the multivariate regression function estimation problem. The performance of the proposed estimator is demonstrated in a numerical study on simulated and real data sets.

8.
We consider type II censored data from a two-truncation-parameter density and obtain the UMVU estimator of a U-estimable parametric function. An explicit expression for the estimator is derived and some interesting special cases are developed. The shortest-length confidence interval for the density is also obtained.

9.
We identify a role for smooth curve provision in the finite population context. The performance of kernel density estimates in this scenario is explored, and they are tailored to the finite population situation especially by developing a method of data-based selection of the smoothing parameter appropriate to this problem. Simulated examples are given, including some from the particular context of permutation distributions which first motivated this investigation.

10.
We propose a new approach to outlier detection based on a ranking measure that asks whether a point is 'central' for its nearest neighbours. In our notation, a low cumulative rank implies that the point is central. For instance, a point centrally located in a cluster has a relatively low cumulative sum of ranks because it is among the nearest neighbours of its own nearest neighbours, whereas a point at the periphery of a cluster has a high cumulative sum of ranks because its nearest neighbours are closer to each other than to the point. Using ranks eliminates the need for density calculation in the neighbourhood of the point, which improves performance. Our method outperforms several density-based methods on some synthetic data sets as well as on some real data sets.
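A minimal sketch of the cumulative-rank idea (an interpretation of the abstract, not necessarily the authors' exact procedure):

```python
import numpy as np

def rank_outlier_scores(X, k=5):
    """For each point q, sum the rank q occupies in the neighbour
    orderings of its own k nearest neighbours. Central points obtain
    low sums; peripheral points obtain high ones."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    order = np.argsort(d, axis=1)          # each row: indices sorted by distance
    rank_of = np.empty_like(order)
    rows = np.arange(n)[:, None]
    rank_of[rows, order] = np.arange(n)[None, :]   # rank_of[i, j]: rank of j in i's list
    scores = np.empty(n)
    for q in range(n):
        # q's rank in the lists of its own k nearest neighbours
        scores[q] = rank_of[order[q, :k], q].sum()
    return scores

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 2))
X[0] = np.array([8.0, 8.0])               # one far outlier
scores = rank_outlier_scores(X, k=5)
```

Note that only ranks, never local densities, enter the score, which is the point the abstract emphasizes.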

11.
Bandwidth selection is an important problem in kernel density estimation. Traditional simple and quick bandwidth selectors usually oversmooth the density estimate. Existing sophisticated selectors often face computational difficulties and occasionally fail to exist. Moreover, they may not be robust against outliers in the sample, and some are highly variable, tending to undersmooth the density. In this paper, a highly robust, simple and quick bandwidth selector is proposed that adapts to different types of densities.
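For context, the classical "simple and quick" baseline the abstract alludes to is Silverman's rule of thumb; the usual robust variant (shown here as a sketch, not the selector proposed in the paper) guards against outliers by taking the smaller of the standard deviation and IQR/1.34 as the scale:

```python
import numpy as np

def quick_robust_bw(x):
    """Silverman-style rule-of-thumb bandwidth with a robust scale:
    the smaller of the sample s.d. and IQR/1.34, which keeps a few
    wild observations from inflating the bandwidth."""
    n = len(x)
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

rng = np.random.default_rng(4)
clean = rng.standard_normal(2000)
dirty = np.concatenate([clean, np.full(20, 50.0)])   # 1% gross outliers
bw_clean = quick_robust_bw(clean)
bw_dirty = quick_robust_bw(dirty)   # barely changes: the IQR ignores the outliers
```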

12.
A density estimation method in a Bayesian nonparametric framework is presented for the case where the recorded data do not come directly from the distribution of interest but from a length-biased version of it. From a Bayesian perspective, efforts to computationally evaluate posterior quantities conditional on length-biased data have been hindered by a normalizing constant that could not be circumvented. In this article, we present a novel Bayesian nonparametric approach to the length-biased sampling problem that circumvents the issue of the normalizing constant. Numerical illustrations as well as a real data example are presented, and the estimator is compared against its frequentist counterpart, Jones's kernel density estimator for indirect data.

13.
In this paper, we consider the problems of prediction and tests of hypotheses for directional data in a semiparametric Bayesian set-up. Observations are assumed to be independently drawn from the von Mises distribution and uncertainty in the location parameter is modelled by a Dirichlet process. For the prediction problem, we present a method to obtain the predictive density of a future observation, and, for the testing problem, we present a method of computing the Bayes factor by obtaining the posterior probabilities of the hypotheses under consideration. The semiparametric model is seen to be flexible and robust against prior misspecifications. While analytical expressions are intractable, the methods are easily implemented using the Gibbs sampler. We illustrate the methods with data from two real-life examples.

14.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets based on taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for the simultaneous estimation of multiple quantiles of massive datasets; these quantile estimates form the basis of the density estimator. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through a simulation study to perform well and to have several distinct advantages over existing methods.
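The quantile-to-density step rests on the identity f(Q(p)) = 1/Q'(p) for the quantile function Q. A piecewise-linear sketch of that step (the paper fits a smooth curve, and obtains the quantiles sequentially rather than from a stored sample):

```python
import numpy as np

def density_from_quantiles(q_probs, q_vals, x):
    """Differencing a set of quantile estimates gives a density
    estimate: the slope dp/dQ between consecutive quantiles
    approximates f at their midpoint."""
    f_mid = np.diff(q_probs) / np.diff(q_vals)
    mid = 0.5 * (q_vals[1:] + q_vals[:-1])
    return np.interp(x, mid, f_mid)

rng = np.random.default_rng(5)
sample = rng.standard_normal(200_000)
probs = np.linspace(0.01, 0.99, 99)
qs = np.quantile(sample, probs)           # stand-ins for sequential quantile estimates
f0 = density_from_quantiles(probs, qs, 0.0)   # N(0,1) density at 0 is about 0.3989
```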

15.
Merging information for semiparametric density estimation
Summary. The density ratio model specifies that the likelihood ratios of m − 1 probability density functions with respect to the mth are of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. Combining the data from all the samples leads to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.

16.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.

17.
The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function for computing maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity, and maximum likelihood estimation breaks down. In all applications, however, observed data are really discrete, owing to round-off or grouping error in measurement. The "correct likelihood" based on interval censoring eliminates the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the breakdown of the density approximation with specific examples. It is usually possible to infer how given data were rounded; when it is not, one must choose a width for the interval censoring, so we study the effect of the round-off on estimation. We also give sufficient conditions under which the joint density provides the same maximum likelihood estimate as the correct likelihood as the round-off error goes to zero.
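The contrast the article draws can be seen numerically in a toy normal example (a sketch, not one of the article's three model groups): the density at an observed point blows up as the scale shrinks, while the interval-censored likelihood for the same rounded observation stays bounded by 1.

```python
import math

def norm_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu, s):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

# One observation recorded as 1.0 with round-off half-width 0.05,
# evaluated at mu = 1.0 for a shrinking sequence of scales.
x_obs, delta = 1.0, 0.05
scales = (1.0, 0.01, 1e-4)
dens_lik = [norm_pdf(x_obs, x_obs, s) for s in scales]
# density-approximation likelihood grows without bound as s -> 0 ...
corr_lik = [norm_cdf(x_obs + delta, x_obs, s) - norm_cdf(x_obs - delta, x_obs, s)
            for s in scales]
# ... while the interval-censored likelihood is a probability, hence <= 1
```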

18.
This paper addresses the problem of estimating the mode of a density function based on contaminated data. Unlike conventional methods, which are based on localizing the maximum of a density estimator, we introduce a procedure which requires computation of the maximum among finitely many quantities only. We show that our estimator is strongly consistent under very weak conditions, where not even continuity of the density at the mode is required; moreover, we show that the estimator achieves optimal convergence rates under common smoothness and sharpness constraints. Some numerical simulations are provided.

19.
The conditional density offers the most informative summary of the relationship between explanatory and response variables, and we need to estimate it in place of the simple conditional mean when its shape is not well behaved. A motivation for estimating conditional densities specific to the circular setting is that a natural alternative, quantile regression, can be problematic because circular quantiles are not rotationally equivariant. We treat conditional density estimation as a local polynomial fitting problem, as proposed by Fan et al. [Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika. 1996;83:189–206] in the Euclidean setting, and discuss a class of estimators for the cases where the conditioning variable is either circular or linear. Asymptotic properties of some members of the proposed class are derived. The effectiveness of the methods for finite sample sizes is illustrated by simulation experiments and an example using real data.
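The simplest member of the local-fitting family referred to above is the local-constant (Nadaraya-Watson-type) conditional density estimator; a linear-variable sketch (the circular versions replace the Gaussian kernels with, e.g., von Mises kernels) is:

```python
import numpy as np

def cond_density(xs, ys, x0, y_grid, hx, hy):
    """Local-constant conditional density estimate f(y | x0) with
    Gaussian kernels in both the conditioning and response variables:
    a kernel-weighted average of response kernels, with weights
    concentrated on observations whose x is near x0."""
    wx = np.exp(-0.5 * ((xs - x0) / hx) ** 2)
    wx = wx / wx.sum()
    u = (y_grid[:, None] - ys[None, :]) / hy
    ky = np.exp(-0.5 * u**2) / (hy * np.sqrt(2.0 * np.pi))
    return ky @ wx

rng = np.random.default_rng(6)
xs = rng.uniform(0.0, 1.0, 20_000)
ys = 2.0 * xs + rng.normal(0.0, 0.25, 20_000)
y_grid = np.linspace(-1.0, 3.0, 81)
f_cond = cond_density(xs, ys, 0.5, y_grid, hx=0.05, hy=0.1)
# at x0 = 0.5 the conditional law is roughly N(1, 0.25^2), so the
# estimate should integrate to about 1 and peak near y = 1
```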

20.
Problems with truncated data arise frequently in survival analysis and reliability applications, and estimation of the density function of the lifetimes is often of interest. In this article, kernel estimation of the density function is considered when the truncated data exhibit some form of dependence. We apply the strong Gaussian approximation technique to establish strong uniform consistency of kernel estimators of the density function under a truncated dependent model. We also apply the strong approximation results to study the integrated squared error properties of the kernel density estimators under the truncated dependent scheme.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号