Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
We consider asymmetric kernel estimates based on grouped data. We propose an iterated scheme for constructing such an estimator and apply an iterated smoothed bootstrap approach for bandwidth selection. We compare our approach with competing methods in estimating actuarial loss models using both simulations and data studies. The simulation results show that with this new method, the estimated density from grouped data matches the true density more closely than with competing approaches.
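
A minimal sketch of an asymmetric (gamma) kernel density estimator in the spirit of Chen (2000), not the authors' iterated grouped-data scheme; the bandwidth, the shape rule x/b + 1, and the sample are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def gamma_kde(x_grid, sample, b):
    """Asymmetric gamma-kernel density estimate on [0, inf).

    At each grid point x the kernel is a gamma density with
    shape x/b + 1 and scale b, averaged over the sample points,
    so no probability mass leaks below zero.
    """
    est = np.empty_like(x_grid, dtype=float)
    for i, x in enumerate(x_grid):
        est[i] = stats.gamma.pdf(sample, a=x / b + 1.0, scale=b).mean()
    return est

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.7, size=500)   # skewed, loss-like data
grid = np.linspace(0.01, 8.0, 200)
fhat = gamma_kde(grid, sample, b=0.2)
```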

2.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal generated distribution.
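
A hedged sketch of minimum Hellinger distance estimation for a beta model against a fixed-binwidth histogram (not the percentile mesh that the paper recommends); the bin count and starting values are arbitrary choices:

```python
import numpy as np
from scipy import stats, optimize

def min_hellinger_beta(sample, bins=30):
    """Minimum Hellinger distance fit of a Beta(a, b) model to a
    fixed-binwidth histogram density estimate of the sample."""
    heights, edges = np.histogram(sample, bins=bins, range=(0, 1), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    def hellinger_sq(params):
        a, b = params
        f_model = stats.beta.pdf(mids, a, b)
        # squared Hellinger distance between the two densities on the mesh
        return 0.5 * np.sum((np.sqrt(heights) - np.sqrt(f_model)) ** 2) * width

    res = optimize.minimize(hellinger_sq, x0=[0.5, 0.5],
                            bounds=[(1e-3, None), (1e-3, None)])
    return res.x

rng = np.random.default_rng(1)
data = rng.beta(0.5, 0.8, size=400)          # a U-shaped beta sample
a_hat, b_hat = min_hellinger_beta(data)
```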

3.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases, the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice, our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
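
Basic leave-one-out likelihood cross-validation for a single global bandwidth, shown as a baseline for the ideas above; the Bayesian graphical-model machinery of the paper is not reproduced:

```python
import numpy as np

def loo_log_likelihood(h, x):
    """Leave-one-out log likelihood of a Gaussian KDE with bandwidth h."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                         # drop the i == i term
    f_loo = k.sum(axis=1) / (n - 1)
    return np.log(f_loo).sum()

rng = np.random.default_rng(2)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 60)
scores = [loo_log_likelihood(h, x) for h in grid]
h_lcv = grid[int(np.argmax(scores))]                 # LCV-selected bandwidth
```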

4.
Constructing spatial density maps of seismic events, such as earthquake hypocentres, is complicated by the fact that events are not located precisely. In this paper, we present a method for estimating density maps from event locations that are measured with error. The estimator is based on the simulation–extrapolation method of estimation and is appropriate for location errors that are either homoscedastic or heteroscedastic. A simulation study shows that the estimator outperforms the standard estimator of density that ignores location errors in the data, even when location errors are spatially dependent. We apply our method to construct an estimated density map of earthquake hypocentres using data from the Alaska earthquake catalogue.
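
A hedged 1-D SIMEX sketch for density estimation at a point under homoscedastic Gaussian location error: extra noise is added at levels lambda, the naive estimate is averaged, and a quadratic in lambda is extrapolated back to lambda = -1. The kernel, bandwidth, levels, and quadratic extrapolant are illustrative choices, not the paper's settings:

```python
import numpy as np

def kde(x0, data, h):
    """Gaussian kernel density estimate at a single point x0."""
    u = (x0 - data) / h
    return np.exp(-0.5 * u ** 2).mean() / (h * np.sqrt(2 * np.pi))

def simex_density(x0, w, sigma, h, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=50, seed=0):
    """SIMEX density estimate at x0 from data w = x + N(0, sigma^2) error."""
    rng = np.random.default_rng(seed)
    lam = np.concatenate(([0.0], lambdas))
    means = []
    for l in lam:
        if l == 0.0:
            means.append(kde(x0, w, h))              # naive estimate
        else:
            vals = [kde(x0, w + rng.normal(0, sigma * np.sqrt(l), size=len(w)), h)
                    for _ in range(n_sim)]
            means.append(np.mean(vals))              # average over added noise
    coef = np.polyfit(lam, means, deg=2)
    return np.polyval(coef, -1.0)                    # extrapolate to no error

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
w = x + rng.normal(0, 0.3, size=1000)                # locations measured with error
f0 = simex_density(0.0, w, sigma=0.3, h=0.25)
```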

5.
This article introduces a fast cross-validation algorithm that performs wavelet shrinkage on data sets of arbitrary size and irregular design and also simultaneously selects good values of the primary resolution and number of vanishing moments. We demonstrate the utility of our method by suggesting alternative estimates of the conditional mean of the well-known Ethanol data set. Our alternative estimates outperform the Kovac-Silverman method with a global variance estimate by 25% because of the careful selection of number of vanishing moments and primary resolution. Our alternative estimates are simpler than, and competitive with, results based on the Kovac-Silverman algorithm equipped with a local variance estimate. We include a detailed simulation study that illustrates how our cross-validation method successfully picks good values of the primary resolution and number of vanishing moments for unknown functions based on Walsh functions (to test the response to changing primary resolution) and piecewise polynomials with zero or one derivative (to test the response to function smoothness).
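
Plain soft-threshold wavelet shrinkage with PyWavelets on a regular grid, as background for the abstract above; the fast cross-validation over primary resolution and vanishing moments, and the irregular-design handling, are not sketched, and the wavelet and threshold rule here are illustrative:

```python
import numpy as np
import pywt

def wavelet_shrink(y, wavelet="db4", level=4):
    """Soft-threshold wavelet shrinkage with the universal threshold."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # noise scale from the finest detail coefficients (MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(y)))
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 512)
y = np.sin(8 * np.pi * x ** 2) + rng.normal(0, 0.3, size=512)
y_hat = wavelet_shrink(y)                            # denoised signal
```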

6.
We consider the problem of density estimation when the data are in the form of a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation such as kernel density estimation are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve that has been fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets that form the basis of this method of density estimation. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
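
A rough single-pass sketch of the idea: stochastic-approximation updates track a mesh of quantiles through the stream, and the density is then read off as the derivative of a monotone interpolant of p against the quantile estimates. The step-size rule and interpolant are assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def stream_quantiles(stream, probs, lr=0.05):
    """One-pass, low-storage estimates of several quantiles at once."""
    q = None
    for t, x in enumerate(stream, start=1):
        if q is None:
            q = np.full(len(probs), x, dtype=float)  # initialise at first value
        step = lr / np.sqrt(t)                       # decaying step size
        q += step * (probs - (x <= q))               # Robbins-Monro style update
    return q

probs = np.linspace(0.02, 0.98, 25)
rng = np.random.default_rng(5)
stream = rng.normal(size=200_000)                    # stands in for a data stream
q = stream_quantiles(stream, probs)

# density = d p / d q along a smooth monotone curve through the quantiles
order = np.argsort(q)
cdf_spline = PchipInterpolator(q[order], probs[order])
f_at_zero = float(cdf_spline.derivative()(0.0))      # roughly 0.399 here
```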

7.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.
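
A small pixel-by-pixel ICM sketch for binary image denoising with an Ising-type prior, i.e. the baseline strategy the cascade algorithm improves on; the interaction strength beta and the Gaussian noise model are illustrative assumptions:

```python
import numpy as np

def icm_binary(y, beta=1.5, sigma=0.8, n_sweeps=5):
    """Iterated conditional modes for a +/-1 image observed in Gaussian noise.

    Each pixel in turn is set to the label minimising
    (y - label)^2 / (2 sigma^2) - beta * label * (sum of neighbour labels).
    """
    x = np.where(y > 0, 1.0, -1.0)                   # start from the naive classifier
    h, w = y.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                def energy(lab):
                    return (y[i, j] - lab) ** 2 / (2 * sigma ** 2) - beta * lab * nb
                x[i, j] = min((-1.0, 1.0), key=energy)
    return x

rng = np.random.default_rng(6)
truth = np.ones((64, 64)); truth[:, :32] = -1.0      # simple two-region scene
y = truth + rng.normal(0, 0.8, truth.shape)
restored = icm_binary(y)
```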

8.
The maximum likelihood and Bayesian approaches for parameter estimations and prediction of future record values have been considered for the two-parameter Burr Type XII distribution based on record values with the number of trials following the record values (inter-record times). Firstly, the Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. In this case, the Bayes estimates of the parameters have been developed by using Lindley's approximation and the Markov Chain Monte Carlo (MCMC) method due to the lack of explicit forms under the squared error and the linear-exponential loss functions. The MCMC method has been also used to construct the highest posterior density credible intervals. Secondly, the Bayes estimates are obtained with respect to a discrete prior for the first shape parameter and a conjugate prior for the other shape parameter. The Bayes and the maximum likelihood estimates are compared in terms of the estimated risk by the Monte Carlo simulations. We further consider the non-Bayesian and Bayesian prediction for future lower record arising from the Burr Type XII distribution based on record data. The comparison of the derived predictors is carried out by using Monte Carlo simulations. A real data set is analysed for illustration purposes.
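
A hedged random-walk Metropolis sketch for the two Burr Type XII shape parameters from a complete i.i.d. sample; the record-value likelihood, Lindley's approximation, and the specific priors of the paper are not reproduced, and the Gamma(1, 1) priors and proposal scale are assumptions:

```python
import numpy as np
from scipy import stats

def mh_burr12(x, n_iter=5000, seed=0):
    """Random-walk Metropolis on the log scale for Burr XII shapes (c, k),
    with independent Gamma(1, 1) priors on both parameters."""
    rng = np.random.default_rng(seed)

    def log_post(c, k):
        if c <= 0 or k <= 0:
            return -np.inf
        return (stats.burr12.logpdf(x, c, k).sum()
                + stats.gamma.logpdf(c, 1) + stats.gamma.logpdf(k, 1))

    c, k, lp, chain = 1.0, 1.0, None, []
    lp = log_post(c, k)
    for _ in range(n_iter):
        c_new = c * np.exp(0.1 * rng.normal())
        k_new = k * np.exp(0.1 * rng.normal())
        lp_new = log_post(c_new, k_new)
        # log-normal proposal needs a Jacobian correction
        if np.log(rng.uniform()) < lp_new - lp + np.log(c_new * k_new / (c * k)):
            c, k, lp = c_new, k_new, lp_new
        chain.append((c, k))
    return np.array(chain)

x = stats.burr12.rvs(2.0, 3.0, size=150, random_state=np.random.default_rng(7))
chain = mh_burr12(x)
c_bayes, k_bayes = chain[1000:].mean(axis=0)         # posterior means (squared error loss)
```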

9.
In this paper, we propose a model for image segmentation based on a finite mixture of Gaussian distributions. For each pixel of the image, prior probabilities of class memberships are specified through a Gibbs distribution, where association between labels of adjacent pixels is modeled by a class-specific term allowing for different interaction strengths across classes. We show how model parameters can be estimated in a maximum likelihood framework using Mean Field theory. Experimental performance on perturbed phantom and on real benchmark images shows that the proposed method performs well in a wide variety of empirical situations.
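
Plain EM for a Gaussian mixture on pixel intensities, given as a baseline; the paper's spatial Gibbs prior and mean-field estimation are deliberately omitted from this sketch:

```python
import numpy as np

def em_gmm_1d(y, k=2, n_iter=100, seed=0):
    """EM for a k-component 1-D Gaussian mixture (no spatial prior)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, size=k)
    var = np.full(k, y.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-pixel responsibilities, computed stably in log space
        log_r = (-0.5 * (y[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r); r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * y[:, None]).sum(axis=0) / nk
        var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(y)
    return mu, var, pi, r.argmax(axis=1)

rng = np.random.default_rng(8)
img = np.concatenate([rng.normal(0.2, 0.1, 2000), rng.normal(0.7, 0.1, 2000)])
mu, var, pi, labels = em_gmm_1d(img)                 # per-pixel hard segmentation
```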

10.
This paper considers the statistical analysis for a competing risks model under Type-I progressive hybrid censoring from a Weibull distribution. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters. We then use the bootstrap method to construct the confidence intervals. Based on the noninformative prior, a sampling algorithm using the acceptance–rejection sampling method is presented to obtain the Bayes estimates, and the Monte Carlo method is employed to construct the highest posterior density credible intervals. The simulation results are provided to show the effectiveness of all the methods discussed here and one data set is analyzed.

11.
In this paper we consider the problems of estimation and prediction when observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

12.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
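
A sketch of the direct constrained implementation for Gaussian measurement error: the weights are chosen so that the weighted kernel estimate, reconvolved with the error density, matches the ordinary KDE of the contaminated data on a grid. The grid, bandwidth, and SLSQP solver are assumptions; the paper's ridge-penalised variant is omitted. (The key convenience used here is that a Gaussian kernel of width h convolved with N(0, sigma^2) error is again Gaussian, of width sqrt(h^2 + sigma^2).)

```python
import numpy as np
from scipy.optimize import minimize

def weighted_deconv_kde(w_data, sigma, h, grid):
    """Weighted-kernel deconvolution sketch for additive N(0, sigma^2) error."""
    def phi(u, s):
        return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

    n = len(w_data)
    K = phi(grid[:, None] - w_data[None, :], h)                 # kernel matrix
    K_conv = phi(grid[:, None] - w_data[None, :], np.hypot(h, sigma))
    target = K.mean(axis=1)                                      # ordinary KDE

    def discrepancy(w):
        # reconvolved weighted estimate vs standard KDE, on the grid
        return np.sum((K_conv @ w - target) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(discrepancy, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons,
                   options={"maxiter": 300})
    return K @ res.x, res.x                                      # estimate, weights

rng = np.random.default_rng(9)
x = rng.normal(size=100)
w = x + rng.normal(0, 0.4, size=100)                             # contaminated sample
grid = np.linspace(-4, 4, 81)
f_hat, weights = weighted_deconv_kde(w, sigma=0.4, h=0.35, grid=grid)
```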

13.
In this paper, we describe some results of an ESPRIT project known as StatLog whose purpose is the comparison of classification algorithms. We give a brief summary of some of the algorithms in the project: discriminant analysis; nearest neighbours; decision trees; neural net methods; SMART; kernel methods and other Bayesian approaches. We focus on data sets derived from images, ranging from raw pixel data to features and summaries extracted from such data.
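
A hedged scikit-learn sketch comparing a few of the algorithm families mentioned above by cross-validated accuracy on an image-derived data set (the built-in digits data, standing in for StatLog's data sets; StatLog's actual protocols are not reproduced):

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)            # 8x8 pixel images as raw features

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "nearest neighbours":    KNeighborsClassifier(n_neighbors=5),
    "decision tree":         DecisionTreeClassifier(random_state=0),
    "neural net":            MLPClassifier(hidden_layer_sizes=(64,),
                                           max_iter=500, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:22s} 5-fold accuracy: {acc:.3f}")
```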

15.
Statistical image restoration techniques are oriented mainly toward modelling the image degradation process in order to recover the original image. This usually involves formulating a criterion function that will yield some optimal estimate of the desired image. Often these techniques assume that the point spread function is known when the image is restored and indeed when we estimate the smoothing parameter. However in practice this assumption may not hold. This paper investigates empirically the effect of mis-specifying the point spread function on some data-based estimates of the regularization parameter and hence on the image reconstructions. Comparisons of image reconstruction quality are based on the mean absolute difference in pixel intensities between the true and reconstructed images.
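
A 1-D toy stand-in for this kind of empirical study: Tikhonov-style deblurring in the frequency domain, restored once with the correct Gaussian PSF and once with a mis-specified one, compared by mean absolute difference over a grid of regularization levels. All widths and levels are illustrative assumptions:

```python
import numpy as np

def gaussian_psf_fft(n, width):
    """FFT of a centred, normalised Gaussian point spread function."""
    t = np.arange(n) - n // 2
    psf = np.exp(-0.5 * (t / width) ** 2)
    psf /= psf.sum()
    return np.fft.fft(np.fft.ifftshift(psf))

def tikhonov_restore(y, H, lam):
    """Frequency-domain ridge (Tikhonov) deblurring with transfer function H."""
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(10)
n = 256
truth = np.zeros(n); truth[80:120] = 1.0; truth[160:170] = 2.0
H_true = gaussian_psf_fft(n, width=3.0)
y = np.real(np.fft.ifft(H_true * np.fft.fft(truth))) + rng.normal(0, 0.02, n)

for width in (3.0, 5.0):                             # correct vs mis-specified PSF
    H = gaussian_psf_fft(n, width)
    mad = min(np.mean(np.abs(tikhonov_restore(y, H, lam) - truth))
              for lam in np.logspace(-4, 0, 20))
    print(f"PSF width {width}: best mean absolute difference = {mad:.4f}")
```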

16.
We consider the problem of making inferences about extreme values from a sample. The underlying model distribution is the generalized extreme-value (GEV) distribution, and our interest is in estimating the parameters and quantiles of the distribution robustly. In doing this we find estimates for the GEV parameters based on that part of the data which is well fitted by a GEV distribution. The robust procedure will assign weights between 0 and 1 to each data point. A weight near 0 indicates that the data point is not well modelled by the GEV distribution which fits the points with weights at or near 1. On the basis of these weights we are able to assess the validity of a GEV model for our data. It is important that the observations with low weights be carefully assessed to determine whether they are valid observations or not. If they are, we must examine whether our data could be generated by a mixture of GEV distributions or whether some other process is involved in generating the data. This process will require careful consideration of the subject matter area which led to the data. The robust estimation techniques are based on optimal B-robust estimates. Their performance is compared to the probability-weighted moment estimates of Hosking et al. (1985) in both simulated and real data.
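
A sketch of the comparison benchmark named above, the probability-weighted moment (PWM) estimates of Hosking et al. (1985) for the GEV parameters; the B-robust estimator itself is not sketched, and the formulas below are the standard published approximations, written from memory:

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.stats import genextreme

def gev_pwm(x):
    """PWM estimates (Hosking et al., 1985) of the GEV parameters
    (location xi, scale alpha, shape k), for the parameterisation
    F(x) = exp(-(1 - k (x - xi) / alpha) ** (1 / k))."""
    x = np.sort(x)
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()                                        # sample PWMs
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - np.log(2) / np.log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                     # Hosking's approximation
    alpha = (2 * b1 - b0) * k / (gamma_fn(1 + k) * (1 - 2 ** (-k)))
    xi = b0 + alpha * (gamma_fn(1 + k) - 1) / k
    return xi, alpha, k

# scipy's genextreme shares the same shape-sign convention (c = k)
x = genextreme.rvs(0.1, loc=0.0, scale=1.0, size=500,
                   random_state=np.random.default_rng(11))
xi_hat, alpha_hat, k_hat = gev_pwm(x)
```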

17.
In this paper, we consider the problem of making statistical inference for a truncated normal distribution under progressive type I interval censoring. We obtain maximum likelihood estimators of unknown parameters using the expectation-maximization algorithm and, in the sequel, we also compute corresponding midpoint estimates of the parameters. Estimation based on the probability plot method is also considered. Asymptotic confidence intervals of unknown parameters are constructed based on the observed Fisher information matrix. We obtain Bayes estimators of parameters with respect to informative and non-informative prior distributions under squared error and LINEX loss functions. We compute these estimates using the importance sampling procedure. The highest posterior density intervals of unknown parameters are constructed as well. We present a Monte Carlo simulation study to compare the performance of proposed point and interval estimators. Analysis of a real data set is also performed for illustration purposes. Finally, inspection times and optimal censoring plans based on the expected Fisher information matrix are discussed.
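
A toy EM sketch for normal parameters from interval-censored observations, using the closed-form conditional moments of a normal restricted to an interval; the paper's full setting (truncation of the model itself plus progressive removals) is more involved, and the inspection grid below is an assumption:

```python
import numpy as np
from scipy.stats import norm

def em_interval_normal(lo, hi, n_iter=200):
    """EM for (mu, sigma) when each observation is known only as [lo, hi]."""
    mids = 0.5 * (lo + hi)
    mu, sig = mids.mean(), mids.std()
    for _ in range(n_iter):
        a, b = (lo - mu) / sig, (hi - mu) / sig
        z = norm.cdf(b) - norm.cdf(a)
        pa, pb = norm.pdf(a), norm.pdf(b)
        # E-step: conditional first and second moments within each interval
        m1 = mu + sig * (pa - pb) / z
        var = sig ** 2 * (1 + (a * pa - b * pb) / z - ((pa - pb) / z) ** 2)
        m2 = var + m1 ** 2
        # M-step: plug the expected sufficient statistics into the MLEs
        mu = m1.mean()
        sig = np.sqrt(m2.mean() - mu ** 2)
    return mu, sig

rng = np.random.default_rng(12)
x = rng.normal(10.0, 2.0, size=400)
edges = np.arange(0.0, 21.0, 1.0)                    # inspection times, 1 unit apart
idx = np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2)
lo, hi = edges[idx], edges[idx + 1]                  # each x known only to its bin
mu_hat, sig_hat = em_interval_normal(lo, hi)
```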

18.
Computation of normalizing constants is a fundamental mathematical problem in various disciplines, particularly in Bayesian model selection problems. A sampling-based technique known as bridge sampling (Meng and Wong in Stat Sin 6(4):831–860, 1996) has been found to produce accurate estimates of normalizing constants and is shown to possess good asymptotic properties. For small to moderate sample sizes (as in situations with limited computational resources), we demonstrate that the (optimal) bridge sampler produces biased estimates. Specifically, when one density (we denote as $p_2$) is constructed to be close to the target density (we denote as $p_1$) using the method of moments, our simulation-based results indicate that the correlation-induced bias through the moment-matching procedure is non-negligible. More crucially, the bias amplifies as the dimensionality of the problem increases. Thus, a series of theoretical as well as empirical investigations is carried out to identify the nature and origin of the bias. We then examine the effect of sample size allocation on the accuracy of bridge sampling estimates and discover that one possibility of reducing both the bias and standard error with a small increase in computational effort is by drawing extra samples from the moment-matched density $p_2$ (which we assume easy to sample from), provided that the evaluation of $p_1$ is not too expensive. We proceed to show how the simple adaptive approach we termed “splitting” manages to alleviate the correlation-induced bias at the expense of a higher standard error, irrespective of the dimensionality involved. We also slightly modify the strategy suggested by Wang et al. (Warp bridge sampling: the next generation, Preprint, 2019. arXiv:1609.07690) to address the issue of the increase in standard error due to splitting, which is later generalized to further improve the efficiency. We conclude the paper by offering our insights of the application of a combination of these adaptive methods to improve the accuracy of bridge sampling estimates in Bayesian applications (where posterior samples are typically expensive to generate) based on the preceding investigations, with an application to a practical example.
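
Meng and Wong's iterative (optimal) bridge sampling estimator of the ratio of normalizing constants, sketched on two 1-D Gaussians where the true ratio is known; the sample sizes and target are illustrative, and none of the paper's bias corrections are included:

```python
import numpy as np
from scipy.stats import norm

def optimal_bridge(log_q1, log_q2, x1, x2, n_iter=50):
    """Iterative optimal bridge estimate of r = Z1 / Z2 (Meng & Wong, 1996),
    given samples x1 ~ p1, x2 ~ p2 and unnormalised log densities."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    l1 = np.exp(log_q1(x1) - log_q2(x1))             # q1/q2 at the p1 samples
    l2 = np.exp(log_q1(x2) - log_q2(x2))             # q1/q2 at the p2 samples
    r = 1.0
    for _ in range(n_iter):
        num = np.mean(l2 / (s1 * l2 + s2 * r))
        den = np.mean(1.0 / (s1 * l1 + s2 * r))
        r = num / den                                # fixed-point update for r
    return r

rng = np.random.default_rng(13)
# q1: unnormalised N(0, 2^2) with Z1 = 5; q2: standard normal, Z2 = 1
log_q1 = lambda x: np.log(5.0) + norm.logpdf(x, 0.0, 2.0)
log_q2 = lambda x: norm.logpdf(x)
x1 = rng.normal(0.0, 2.0, size=2000)                 # draws from p1
x2 = rng.normal(0.0, 1.0, size=2000)                 # draws from p2
r_hat = optimal_bridge(log_q1, log_q2, x1, x2)       # should be close to 5
```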

19.
We present a local density estimator based on first-order statistics. To estimate the density at a point, x, the original sample is divided into subsets and the average minimum sample distance to x over all such subsets is used to define the density estimate at x. The tuning parameter is thus the number of subsets instead of the typical bandwidth of kernel or histogram-based density estimators. The proposed method is similar to nearest-neighbor density estimators but it provides smoother estimates. We derive the asymptotic distribution of this minimum sample distance statistic to study globally optimal values for the number and size of the subsets. Simulations are used to illustrate and compare the convergence properties of the estimator. The results show that the method provides good estimates of a wide variety of densities without changes of the tuning parameter, and that it offers competitive convergence performance.
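
A rough 1-D reading of the idea, offered as a hypothetical sketch rather than the paper's estimator: split the sample into subsets of size m, average the minimum distance to x across subsets, and invert using the approximation that the minimum distance to m i.i.d. points near x is roughly Exp(2 m f(x)), so E[d_min] is about 1/(2 m f(x)). The normalising constant and all details here are assumptions:

```python
import numpy as np

def min_distance_density(x0, sample, n_subsets, seed=0):
    """Rough 1-D density estimate at x0 from average minimum subset distance."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(sample)
    subsets = np.array_split(perm, n_subsets)
    d_min = [np.abs(s - x0).min() for s in subsets]  # min distance per subset
    m = len(sample) // n_subsets                     # subset size
    return 1.0 / (2.0 * m * np.mean(d_min))          # invert E[d_min] ~ 1/(2 m f)

rng = np.random.default_rng(14)
sample = rng.normal(size=5000)
f_hat = min_distance_density(0.0, sample, n_subsets=50)   # true value ~ 0.399
```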

20.
In this article, the Bayes estimates of the two-parameter gamma distribution are considered. It is well known that the Bayes estimators of the two-parameter gamma distribution do not have a compact form. In this paper, it is assumed that the scale parameter has a gamma prior and the shape parameter has any log-concave prior, and they are independently distributed. Under the above priors, we use the Gibbs sampling technique to generate samples from the posterior density function. Based on the generated samples, we can compute the Bayes estimates of the unknown parameters and can also construct HPD credible intervals. We also compute the approximate Bayes estimates using Lindley's approximation under the assumption of gamma priors on the shape parameter. Monte Carlo simulations are performed to compare the performances of the Bayes estimators with the classical estimators. One data analysis is performed for illustrative purposes. We further discuss the Bayesian prediction of future observations based on the observed sample and it is seen that the Gibbs sampling technique can be used quite effectively for estimating the posterior predictive density and also for constructing predictive intervals of the order statistics from the future sample.
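
A Gibbs-style sketch for the two-parameter gamma model: the rate parameter has a conjugate gamma full conditional, while the shape parameter's log-concave conditional is handled here with a simple Metropolis step instead of exact rejection sampling. The Exp(0.01)-type shape prior, hyperparameters, and proposal scale are assumptions:

```python
import numpy as np
from scipy.special import gammaln

def gibbs_gamma(x, n_iter=5000, a0=1.0, b0=1.0, c0=0.01, seed=0):
    """Gibbs-type sampler for a Gamma(alpha, rate beta) model with a
    Gamma(a0, b0) prior on beta and an Exp(c0) prior on alpha."""
    rng = np.random.default_rng(seed)
    n, sx, slx = len(x), x.sum(), np.log(x).sum()

    def log_cond_alpha(a, beta):
        # log full conditional of the shape (log-concave in a)
        return n * a * np.log(beta) + (a - 1) * slx - n * gammaln(a) - c0 * a

    alpha, chain = 1.0, []
    for _ in range(n_iter):
        beta = rng.gamma(a0 + n * alpha, 1.0 / (b0 + sx))     # conjugate draw
        a_new = alpha * np.exp(0.2 * rng.normal())            # log-scale proposal
        log_acc = (log_cond_alpha(a_new, beta) - log_cond_alpha(alpha, beta)
                   + np.log(a_new / alpha))                   # Jacobian term
        if np.log(rng.uniform()) < log_acc:
            alpha = a_new
        chain.append((alpha, beta))
    return np.array(chain)

rng = np.random.default_rng(15)
x = rng.gamma(shape=2.5, scale=1.0 / 1.5, size=200)           # alpha=2.5, beta=1.5
chain = gibbs_gamma(x)
alpha_hat, beta_hat = chain[1000:].mean(axis=0)               # posterior means
```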
