Similar Literature
Found 20 similar records (search time: 46 ms)
1.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the problem that the usual nonparametric maximum likelihood estimator cannot produce a positive probability at the location of the null effects when estimating a mixing distribution nonparametrically. The profile likelihood is further used to produce a range of null proportion values, for each of which the corresponding density estimates are all consistent. With a proper choice of a threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method has an apparently convergent trend in all cases studied and performs favourably when compared with existing methods in the literature.
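The paper's nonparametric-mixture estimator is not reproduced here, but the quantity it targets, the null proportion, can be illustrated with a simpler classical p-value-based estimator (Storey's). The simulated mixture below (80% uniform nulls, Beta(0.3, 4) alternatives) is an illustrative assumption, not the paper's setup.

```python
import random

random.seed(1)

# Simulated p-values: 80% true nulls (Uniform(0,1)) and 20% alternatives
# (Beta(0.3, 4), concentrated near zero). Both choices are illustrative.
n = 10000
pvals = [random.random() if random.random() < 0.8
         else random.betavariate(0.3, 4.0) for _ in range(n)]

def storey_pi0(pvals, lam=0.5):
    """Estimate the null proportion as the rescaled fraction of
    p-values above lam, a region dominated by the nulls."""
    return sum(p > lam for p in pvals) / ((1.0 - lam) * len(pvals))

pi0_hat = storey_pi0(pvals)
```

With the alternatives concentrated near zero, the estimate lands close to the true null proportion of 0.8.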

2.
Approximation of a density by another density is considered in the case of different dimensionalities of the distributions. The results have been derived by inverting expansions of characteristic functions with the help of matrix techniques. The approximations obtained are all functions of cumulant differences and derivatives of the approximating density. The multivariate Edgeworth expansion follows from the results as a special case. Furthermore, the density functions of the trace and eigenvalues of the sample covariance matrix are approximated by the multivariate normal density and a numerical example is given.
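As a one-dimensional special case of the expansions discussed above, a first-order Edgeworth correction to the normal density can be sketched as follows; the exponential example and sample size are assumptions for illustration.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def he3(x):
    """Probabilists' Hermite polynomial He_3."""
    return x ** 3 - 3.0 * x

def edgeworth_density(x, skew, n):
    """First-order Edgeworth approximation to the density of a
    standardized mean of n iid variables with the given skewness."""
    return phi(x) * (1.0 + skew * he3(x) / (6.0 * math.sqrt(n)))

# Standardized mean of n = 25 Exp(1) variables (skewness 2): an assumed example.
f0 = edgeworth_density(0.0, 2.0, 25)
phi0 = phi(0.0)
# The correction integrates to zero, so total mass stays at one.
mass = 0.01 * sum(edgeworth_density(-5.0 + 0.01 * i, 2.0, 25) for i in range(1001))
```

At x = 0 the He_3 term vanishes, so the corrected density coincides with the normal density there; the skewness correction only reshapes the flanks.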

3.
We describe quantum tomography as an inverse statistical problem in which the quantum state of a light beam is the unknown parameter and the data are given by results of measurements performed on identical quantum systems. The state can be represented as an infinite dimensional density matrix or equivalently as a density on the plane called the Wigner function. We present consistency results for pattern function projection estimators and for sieve maximum likelihood estimators for both the density matrix of the quantum state and its Wigner function. We illustrate the performance of the estimators on simulated data. An EM algorithm is proposed for practical implementation. There remain many open problems, e.g. rates of convergence, adaptation and studying other estimators; a main purpose of the paper is to bring these to the attention of the statistical community.

4.
The circulant embedding method for generating statistically exact simulations of time series from certain Gaussian distributed stationary processes is attractive because of its advantage in computational speed over a competitive method based upon the modified Cholesky decomposition. We demonstrate that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators, all forms of Welch's overlapped segment averaging spectral estimator and all basic multitaper spectral estimators. One application for this technique is to generate time series for bootstrapping various statistics. When used with bootstrapping, our proposed technique avoids some – but not all – of the pitfalls of previously proposed frequency domain methods for simulating time series.
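A minimal sketch of the circulant embedding idea, assuming an AR(1)-type autocovariance sequence rather than one of the nonparametric spectral estimators discussed in the abstract: the autocovariance is embedded in a circulant matrix, which the FFT diagonalizes, so exact Gaussian draws cost O(m log m).

```python
import numpy as np

rng = np.random.default_rng(0)

def circulant_embedding_sim(acvs, rng):
    """Draw one stationary Gaussian series whose autocovariances match
    acvs exactly, by diagonalizing the circulant embedding with the FFT."""
    n = len(acvs)
    c = np.concatenate([acvs, acvs[-2:0:-1]])   # first row of the circulant
    lam = np.fft.fft(c).real                    # its eigenvalues
    if np.any(lam < -1e-8):
        raise ValueError("embedding is not nonnegative definite; pad acvs")
    m = len(c)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(np.maximum(lam, 0.0) / m) * z)
    return w.real[:n]

acvs = 0.7 ** np.arange(512)                    # assumed AR(1)-type autocovariance
sims = np.array([circulant_embedding_sim(acvs, rng) for _ in range(3000)])
var0 = sims[:, 0].var()                         # should be acvs[0] = 1
lag1 = np.mean(sims[:, 0] * sims[:, 1])         # should be acvs[1] = 0.7
```

The Monte Carlo moments recover the target autocovariances, which is the "statistically exact" property the abstract refers to.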

5.
Although estimating the five parameters of an unknown Generalized Normal Laplace (GNL) density by minimizing the distance between the empirical and true characteristic functions seems appealing, the approach cannot be advocated in practice. This conclusion is based on extensive numerical simulations in which a fast minimization procedure delivers misleading estimates whose values are quite far from the truth. These findings can be predicted by the very large values obtained for the true asymptotic variances of the estimators of the five parameters of the true GNL density.

6.
In this paper the independence between a block of natural parameters and the complementary block of mean value parameters, which holds for densities that are natural conjugate to some regular exponential families, is used to design in a convenient way a Gibbs sampler with block updates. Even when the densities of interest are obtained by conditioning a block of natural parameters to zero in a density conjugate to a larger "saturated" model, the updates require only the computation of marginal distributions under the "unconditional" density. For exponential families which are closed under marginalization, including both the zero-mean Gaussian family and the cross-classified Bernoulli family, such an implementation of the Gibbs sampler can be seen as an Iterative Proportional Fitting algorithm with random inputs.

7.
This paper considers linear and nonlinear regression with a response variable that is allowed to be "missing at random". The only structural assumptions on the distribution of the variables are that the errors have mean zero and are independent of the covariates. The independence assumption is important. It enables us to construct an estimator for the response density that uses all the observed data, in contrast to the usual local smoothing techniques, and which therefore permits a faster rate of convergence. The idea is to write the response density as a convolution integral which can be estimated by an empirical version, with a weighted residual-based kernel estimator plugged in for the error density. For an appropriate class of regression functions, and a suitably chosen bandwidth, this estimator is consistent and converges with the optimal parametric rate n^(1/2). Moreover, the estimator is proved to be efficient (in the sense of Hájek and Le Cam) if an efficient estimator is used for the regression parameter.
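The convolution idea can be sketched in a toy linear model; the model, missingness rate, bandwidth and unweighted residuals below are illustrative assumptions (the paper uses a weighted residual-based estimator and covers nonlinear regression as well).

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated linear model with responses missing at random (assumed setup).
n = 2000
x = rng.uniform(0.0, 2.0, n)
eps = rng.normal(0.0, 0.5, n)
y = 1.0 + 2.0 * x + eps
observed = rng.random(n) < 0.7

# Fit the regression on complete cases only.
xo, yo = x[observed], y[observed]
A = np.column_stack([np.ones(xo.size), xo])
beta = np.linalg.lstsq(A, yo, rcond=None)[0]
resid = yo - A @ beta

def f_y(t, h=0.15):
    """Convolution-type estimate of the response density: a residual-based
    kernel density, averaged over the fitted values at ALL covariates
    (including those whose responses are missing)."""
    m = beta[0] + beta[1] * x
    u = (t - m[:, None] - resid[None, :]) / h
    k = np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)
    return k.mean() / h

est = f_y(3.0)
```

Here the true response density at 3 is about 0.25 (a Uniform(1, 5) signal convolved with N(0, 0.25) noise), and the estimator uses every covariate even where the response is missing.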

8.
This paper considers the Bayesian analysis of the multivariate normal distribution under a new and bounded loss function, based on a reflection of the multivariate normal density function. The Bayes estimators of the mean vector can be derived for an arbitrary prior distribution of [d]. When the covariance matrix has an inverted Wishart prior density, a Bayes estimator of [d] is obtained under a bounded loss function, based on the entropy loss. Finally the admissibility of all linear estimators c[d] + d for the mean vector is considered.

9.
Series evaluation of Tweedie exponential dispersion model densities
Exponential dispersion models, which are linear exponential families with a dispersion parameter, are the prototype response distributions for generalized linear models. The Tweedie family comprises those exponential dispersion models with power mean-variance relationships. The normal, Poisson, gamma and inverse Gaussian distributions belong to the Tweedie family. Apart from these special cases, Tweedie distributions do not have density functions which can be written in closed form. Instead, the densities can be represented as infinite summations derived from series expansions. This article describes how the series expansions can be summed in a numerically efficient fashion. The usefulness of the approach is demonstrated, but full machine accuracy is shown not to be obtainable using the series expansion method for all parameter values. Derivatives of the density with respect to the dispersion parameter are also derived to facilitate maximum likelihood estimation. The methods are demonstrated on two data examples and compared with Box-Cox transformations and extended quasi-likelihood.
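For the compound Poisson-gamma case (power parameter between 1 and 2), the series evaluation amounts to summing Poisson-weighted gamma densities term by term; the parameter values and truncation point below are illustrative assumptions, and this naive summation lacks the numerical safeguards the article develops.

```python
import math

def cpg_density(y, lam, alpha, theta, nmax=200):
    """Series evaluation of the compound Poisson-gamma density at y > 0:
    N ~ Poisson(lam), claims iid Gamma(shape alpha, scale theta),
    so Y | N = n ~ Gamma(n * alpha, theta). Terms summed on the log scale."""
    total = 0.0
    for n in range(1, nmax + 1):
        log_pois = -lam + n * math.log(lam) - math.lgamma(n + 1)
        k = n * alpha
        log_gamma = ((k - 1.0) * math.log(y) - y / theta
                     - k * math.log(theta) - math.lgamma(k))
        total += math.exp(log_pois + log_gamma)
    return total

lam, alpha, theta = 2.0, 1.5, 1.0
# Total mass: point mass exp(-lam) at zero plus the integral over y > 0,
# here approximated by a midpoint rule on (0, 30).
grid = [0.025 + 0.05 * i for i in range(600)]
mass = math.exp(-lam) + 0.05 * sum(cpg_density(y, lam, alpha, theta) for y in grid)
```

The mass check (atom at zero plus the continuous part) recovering one is a quick sanity test that the series has been truncated far enough.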

10.
This paper is concerned with extreme value density estimation. The generalized Pareto distribution (GPD) beyond a given threshold is combined with a nonparametric estimation approach below the threshold. This semiparametric setup is shown to generalize a few existing approaches and enables density estimation over the complete sample space. Estimation is performed via the Bayesian paradigm, which helps identify model components. Estimation of all model parameters, including the threshold and higher quantiles, is provided, together with prediction for future observations. Simulation studies suggest a few useful guidelines to evaluate the relevance of the proposed procedures. They also provide empirical evidence about the improvement of the proposed methodology over existing approaches. Models are then applied to environmental data sets. The paper is concluded with a few directions for future work.
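A frequentist sketch of the same splicing idea, assuming a fixed 90% threshold, a method-of-moments GPD fit and a Gaussian kernel below the threshold (the paper instead treats the threshold as a parameter and estimates everything within the Bayesian paradigm):

```python
import math, random

random.seed(4)
data = [random.expovariate(1.0) for _ in range(5000)]   # illustrative sample
u = sorted(data)[int(0.9 * len(data))]                  # fixed 90% threshold
excess = [x - u for x in data if x > u]
p_tail = len(excess) / len(data)

# Method-of-moments GPD fit (shape xi, scale beta) to the excesses.
m = sum(excess) / len(excess)
v = sum((e - m) ** 2 for e in excess) / len(excess)
xi = 0.5 * (1.0 - m * m / v)
beta = 0.5 * m * (m * m / v + 1.0)

def gpd_pdf(z, xi, beta):
    if abs(xi) < 1e-9:
        return math.exp(-z / beta) / beta
    t = 1.0 + xi * z / beta
    return 0.0 if t <= 0 else t ** (-1.0 / xi - 1.0) / beta

def density(x, h=0.15):
    """Spliced density: kernel estimate below u, weighted GPD above."""
    if x > u:
        return p_tail * gpd_pdf(x - u, xi, beta)
    body = [d for d in data if d <= u]
    k = sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in body)
    return (1.0 - p_tail) * k / (len(body) * h * math.sqrt(2.0 * math.pi))

f_tail = density(u + 1.0)
```

Because the sample is Exp(1), the fitted tail should be close to the true density exp(-(u + 1)) one unit beyond the threshold.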

11.
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density and a continuous prior restricted to order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality/inequality of treatment medians are estimated using Bayes factors that are computed via the Savage-Dickey density ratios. The performance of the proposed median-based model is investigated on simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when the data come from nonnormal distributions.

12.
It is well known that if some observations in a sample from the probability density are not available, then in general the density cannot be estimated. A possible remedy is to use an auxiliary variable that explains the missing mechanism. For this setting a data-driven estimator is proposed that mimics the performance of an oracle that knows all observations from the sample. It is also proved that the estimator adapts to unknown smoothness of the density and that its mean integrated squared error converges with a minimax rate. A numerical study, together with the analysis of real data, shows that the estimator is feasible for small samples.

13.
The bivariate normal density with unit variance and correlation ρ is well known. We show that by integrating out ρ, the result is a function of the maximum norm. The Bayesian interpretation of this result is that if we put a uniform prior over ρ, then the marginal bivariate density depends only on the maximal magnitude of the variables. The square-shaped isodensity contour of this resulting marginal bivariate density can also be regarded as the equally weighted mixture of bivariate normal distributions over all possible correlation coefficients. This density links to the Khintchine mixture method of generating random variables. We use this method to construct the higher dimensional generalizations of this distribution. We further show that for each dimension, there is a unique multivariate density that is a differentiable function of the maximum norm and is marginally normal, and the bivariate density from the integral over ρ is its special case in two dimensions.
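The max-norm claim can be checked numerically by evaluating the equally weighted ρ-mixture at several points sharing the same maximum norm; the three test points and grid size are arbitrary, and the closed-form cross-check Φ(-m)/2 is our own derivation via Plackett's identity, not a formula stated in the abstract.

```python
import math

def bvn_pdf(x, y, rho):
    """Bivariate normal density with unit variances and correlation rho."""
    det = 1.0 - rho * rho
    q = (x * x - 2.0 * rho * x * y + y * y) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def mixture_pdf(x, y, n=20000):
    """Equal-weight mixture over rho ~ Uniform(-1, 1), midpoint rule."""
    h = 2.0 / n
    return 0.5 * h * sum(bvn_pdf(x, y, -1.0 + (i + 0.5) * h) for i in range(n))

# Three points sharing the same maximum norm, max(|x|, |y|) = 1.2.
a = mixture_pdf(1.2, 0.5)
b = mixture_pdf(0.3, 1.2)
c = mixture_pdf(-1.2, -0.8)
target = 0.25 * math.erfc(1.2 / math.sqrt(2.0))   # Phi(-1.2) / 2, assumed closed form
```

All three evaluations agree to the accuracy of the quadrature, consistent with a density that depends only on the maximum norm.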

14.
Application of Nonparametric Density Estimation to Individual Loss Distributions
Tan Yingping (谭英平). 《统计研究》 (Statistical Research), 2003, 20(8): 40-5
1. Introduction. An individual loss is the amount lost in a single insured event, and the study of the distributional shape of individual losses is an important part of risk decision theory. Most existing research on individual loss distributions has focused on traditional parametric methods, with the basic workflow: obtain data → fit a parametric model → estimate the model parameters → assess the goodness of fit. That is, understanding of the overall shape of the loss distribution is built on a specified parametric model. Naturally, there are many ways to estimate the model parameters, including moment estimation, maximum likelihood estimation and minimum distance estimation, and the parametric model finally chosen usually describes the individual loss distribution well and can provide fairly accurate results. In practice, however, this process is rather lengthy, and for different samples…
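The nonparametric alternative this abstract advocates can be sketched with a plain kernel density estimate; the lognormal claim-size model, sample size and bandwidth below are illustrative assumptions, not data from the paper.

```python
import math, random

random.seed(5)
# Hypothetical individual-loss sample: lognormal claim sizes (an assumed
# loss model, used only to illustrate the nonparametric route).
losses = [random.lognormvariate(0.0, 0.8) for _ in range(3000)]

def kde(x, data, h):
    """Gaussian-kernel density estimate; no parametric model is assumed."""
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

f1 = kde(1.0, losses, h=0.2)
true1 = 1.0 / (0.8 * math.sqrt(2.0 * math.pi))   # lognormal(0, 0.8) pdf at 1
```

The estimate tracks the underlying density without the fit-a-model-then-estimate-parameters pipeline described above, which is precisely the appeal of the nonparametric approach for loss data.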

15.
For every discrete or continuous location-scale family having a square-integrable density, there is a unique continuous probability distribution on the unit interval that is determined by the density-quantile composition introduced by Parzen in 1979. These probability density quantiles (pdQs) only differ in shape, and can be usefully compared with the Hellinger distance or Kullback–Leibler divergences. Convergent empirical estimates of these pdQs are provided, which leads to a robust global fitting procedure of shape families to data. Asymmetry can be measured in terms of distance or divergence of pdQs from the symmetric class. Further, a precise classification of shapes by tail behaviour can be defined simply in terms of pdQ boundary derivatives.
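For the normal family the pdQ can be computed explicitly: compose the density with the quantile function and normalize by the integral of the squared density, which for the standard normal is 1/(2√π). The bisection quantile routine below is a self-contained stand-in for a library inverse-CDF.

```python
import math

def norm_ppf(u, lo=-10.0, hi=10.0):
    """Standard normal quantile function via bisection on Phi."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(-mid / math.sqrt(2.0)) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pdq_normal(u):
    """Probability density quantile of the normal family:
    f(Q(u)) normalized by the integral of f squared, which is 1/(2*sqrt(pi))."""
    x = norm_ppf(u)
    f = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return f * 2.0 * math.sqrt(math.pi)

p_mid = pdq_normal(0.5)      # equals sqrt(2) for the normal family
mass = sum(pdq_normal((i + 0.5) / 1000) for i in range(1000)) / 1000.0
```

The pdQ is itself a density on the unit interval (its mass integrates to one), and it is identical for every mean and variance, which is the location-scale invariance the abstract exploits.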

16.
We introduce a class of spatial point processes, interacting neighbour point (INP) processes, where the density of the process can be written by means of local interactions between a point and subsets of its neighbourhood, but where the processes may not be Ripley-Kelly Markov processes with respect to this neighbourhood. We show that the processes are iterated Markov processes as defined by Hayat and Gubner (1996). Furthermore, we pay special attention to a subclass of interacting neighbour processes, where the density belongs to the exponential family and all neighbours of a point affect it simultaneously. A simulation study is presented to show that some simple processes of this subclass can produce clustered patterns of great variety. Finally, an empirical example is given.

17.
A sequence of nested hypotheses is presented for the examination of the assumption of autoregressive covariance structure in, for example, a repeated measures experiment. These hypotheses arise naturally by specifying the joint density of the underlying vector random variable as a product of conditional densities and the density of a subset of the vector random variable. The tests for all but one of the nested hypotheses are well known procedures, namely analysis of variance F-tests and Bartlett's test of equality of variances. While the procedure is based on tests of hypotheses, it may be viewed as an exploratory tool which can lead to model identification. An example is presented to illustrate the method.

18.
We consider a class of adaptive MCMC algorithms using a Langevin-type proposal density. We state and prove regularity conditions for the convergence of these algorithms. In addition to these theoretical results we introduce a number of methodological innovations that can be applied much more generally. We assess the performance of these algorithms with simulation studies, including an example of the statistical analysis of a point process driven by a latent log-Gaussian Cox process.
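A minimal sketch of an adaptive Langevin-type (MALA) sampler on a toy one-dimensional target; the standard normal target, the 0.574 acceptance target and the diminishing-adaptation schedule are standard textbook choices assumed for illustration, not the paper's algorithms.

```python
import math, random

random.seed(6)

def logpi(x):
    """Target log-density: standard normal (illustrative)."""
    return -0.5 * x * x

def grad_logpi(x):
    return -x

# MALA with Robbins-Monro adaptation of the step size h toward a 57.4%
# acceptance rate; the 1/t**0.6 gain gives diminishing adaptation.
x, h, accepted = 0.0, 1.0, 0
samples = []
for t in range(1, 20001):
    mu = x + 0.5 * h * grad_logpi(x)
    y = mu + math.sqrt(h) * random.gauss(0.0, 1.0)
    mu_back = y + 0.5 * h * grad_logpi(y)
    log_ratio = (logpi(y) - logpi(x)
                 + ((y - mu) ** 2 - (x - mu_back) ** 2) / (2.0 * h))
    a = math.exp(min(0.0, log_ratio))     # Metropolis-Hastings acceptance prob
    if random.random() < a:
        x = y
        accepted += 1
    h *= math.exp((a - 0.574) / t ** 0.6)
    samples.append(x)

rate = accepted / 20000.0
tail = samples[5000:]
mean_hat = sum(tail) / len(tail)
var_hat = sum(s * s for s in tail) / len(tail)
```

After discarding a burn-in, the chain's moments match the target and the realized acceptance rate settles near the adaptation target.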

19.
In count data models, overdispersion of the dependent variable can be incorporated into the model if a heterogeneity term is added to the mean parameter of the Poisson distribution. We use a nonparametric estimator of the heterogeneity density based on a squared Kth-order polynomial expansion, which we generalize to panel data. A numerical illustration using an insurance dataset is discussed. Even though some statistical analyses showed no clear differences between these new models and the standard Poisson with gamma random effects, we show that the choice of the random-effects distribution has a significant influence on the interpretation of the results.
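The benchmark model mentioned above, Poisson with gamma random effects, has a closed form: mixing a gamma heterogeneity term into the Poisson mean yields the negative binomial. The sketch below checks this by Monte Carlo; the parameter values are illustrative assumptions (the paper's nonparametric heterogeneity density is not reproduced here).

```python
import math, random

random.seed(7)
r, mu = 2.0, 1.5          # gamma shape r, overall mean count mu (assumed)

def nb_pmf(k, r, mu):
    """Poisson-gamma mixture (negative binomial) pmf."""
    p = r / (r + mu)
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1.0 - p))

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (fine for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < l:
            return k
        k += 1

# Monte Carlo: draw the gamma heterogeneity, then a Poisson count given it.
n = 100000
zeros = 0
for _ in range(n):
    lam = random.gammavariate(r, mu / r)   # heterogeneity with mean mu
    if poisson(lam) == 0:
        zeros += 1

p0_mc = zeros / n
p0 = nb_pmf(0, r, mu)
masscheck = sum(nb_pmf(k, r, mu) for k in range(200))
```

The simulated zero-count frequency matches the analytic negative binomial probability, confirming the mixture identity that the gamma-random-effects benchmark relies on.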

20.
Grouped data are commonly encountered in applications. All data from a continuous population are grouped due to rounding of the individual observations. The Bernstein polynomial model is proposed as an approximate model in this paper for estimating a univariate density function based on grouped data. The coefficients of the Bernstein polynomial, as the mixture proportions of beta distributions, can be estimated using an EM algorithm. The optimal degree of the Bernstein polynomial can be determined using a change-point estimation method. The rate of convergence of the proposed density estimate to the true density is proved to be almost parametric by an acceptance–rejection argument used for generating random numbers. The proposed method is compared with some existing methods in a simulation study and is applied to the Chicken Embryo Data.
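A sketch of the beta-mixture EM idea under simplifying assumptions: the true density, bin count and polynomial degree are illustrative, and grouping is handled crudely by placing each bin's count at its midpoint (the paper instead uses exact bin probabilities, and chooses the degree by change-point estimation rather than fixing it).

```python
import math, random

random.seed(8)
# Grouped sample from Beta(2, 5): only bin counts are retained, mimicking
# rounding of the individual observations.
data = [random.betavariate(2.0, 5.0) for _ in range(5000)]
nbins = 20
counts = [0] * nbins
for d in data:
    counts[min(int(d * nbins), nbins - 1)] += 1
mids = [(i + 0.5) / nbins for i in range(nbins)]

def beta_pdf(x, a, b):
    return math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                    + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))

# Bernstein model with m components: f(x) = sum_j w_j Beta(x; j, m - j + 1),
# with the weights w_j estimated by EM on the binned midpoints.
m = 8
w = [1.0 / m] * m
for _ in range(500):
    new_w = [0.0] * m
    for c, xmid in zip(counts, mids):
        comp = [w[j] * beta_pdf(xmid, j + 1, m - j) for j in range(m)]
        s = sum(comp)
        for j in range(m):
            new_w[j] += c * comp[j] / s
    total = sum(new_w)
    w = [v / total for v in new_w]

def f_hat(x):
    return sum(w[j] * beta_pdf(x, j + 1, m - j) for j in range(m))

err = max(abs(f_hat(xm) - beta_pdf(xm, 2.0, 5.0)) for xm in mids)
```

Because Beta(2, 5) is itself a polynomial density of degree 5, it lies inside the degree-7 Bernstein family used here, so the EM fit recovers it closely despite the grouping.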


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号