Similar Articles
20 similar articles retrieved.
1.
2.
Two classes of semiparametric and nonparametric mixture models are defined to represent general kinds of prior information. For these models the nonparametric maximum likelihood estimator (NPMLE) of an unknown probability distribution is derived and is shown to be consistent and relatively efficient. Linear functionals are used for the estimation of parameters. Their consistency is proved, the gain in efficiency is derived, and asymptotic distributions are given.

3.
The nonparametric additive ACD model imposes weaker requirements than the parametric ACD model on both the functional form of the conditional expectation and the distribution of the random error term, so it does not reach erroneous conclusions through model misspecification the way a parametric ACD model can. The shapes of the estimated additive components of the nonparametric additive ACD model also provide useful guidance for correctly specifying a parametric ACD model.
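No formulas or data appear in the abstract. As a hedged illustration of the nonparametric idea, the sketch below estimates the conditional expected duration E[x_i | x_{i-1}] with a Nadaraya-Watson smoother on durations simulated from a hypothetical linear ACD specification; the bandwidth `h`, the ACD coefficients, and the sample size are illustrative assumptions, not values from the paper.

```python
import math
import random

def nw_conditional_mean(x_prev, x_curr, h=0.5):
    # Nadaraya-Watson estimate of E[x_i | x_{i-1} = u] with a Gaussian kernel
    def m(u):
        w = [math.exp(-((u - v) ** 2) / (2.0 * h * h)) for v in x_prev]
        return sum(wi * y for wi, y in zip(w, x_curr)) / sum(w)
    return m

# toy durations from a hypothetical linear ACD: psi_i = 0.5 + 0.5 * x_{i-1},
# x_i = psi_i * eps_i with unit-exponential innovations eps_i
random.seed(11)
x = [1.0]
for _ in range(3000):
    psi = 0.5 + 0.5 * x[-1]
    x.append(psi * random.expovariate(1.0))

m = nw_conditional_mean(x[:-1], x[1:])
print(m(1.0), m(2.0))  # true conditional means are 1.0 and 1.5
```

If the fitted curve m(u) looks clearly nonlinear, the point made in the abstract applies: a linear parametric ACD form would be misspecified.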

4.
In this paper, we introduce a test for uniformity and use it as the second stage of an exact goodness-of-fit test of exponentiality. By simulation, the powers of the proposed test under various alternatives are compared with those of the exponentiality test based on Kullback–Leibler information proposed by Ebrahimi et al. [N. Ebrahimi, M. Habibullah, and E.S. Soofi, Testing exponentiality based on Kullback–Leibler information, J. R. Statist. Soc. Ser. B 54 (1992), pp. 739–748]. The results are impressive: the proposed test has higher power than the test based on entropy.
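The abstract does not reproduce the statistic. As a rough sketch of the entropy-based competitor it mentions, the code below computes a Kullback–Leibler-type exponentiality statistic built from Vasicek's spacing estimator of entropy; the window size `m` and the sample sizes are illustrative assumptions.

```python
import math
import random

def vasicek_entropy(x, m):
    # Vasicek's spacing-based entropy estimate, with boundary indices clamped
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(i - m, 0)]
        hi = s[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2.0 * m))
    return total / n

def kl_exponentiality_stat(x, m=3):
    # KL-type distance between the data and the fitted exponential;
    # large values lead to rejection of exponentiality
    xbar = sum(x) / len(x)
    return math.log(xbar) + 1.0 - vasicek_entropy(x, m)

random.seed(0)
expo = [random.expovariate(1.0) for _ in range(200)]
unif = [random.uniform(0.0, 1.0) for _ in range(200)]
print(kl_exponentiality_stat(expo), kl_exponentiality_stat(unif))
```

The statistic is near zero for exponential data and larger for non-exponential alternatives; in practice critical values are obtained by Monte Carlo under the null.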

5.
6.
The objective of this paper is to present a method which can accommodate certain types of missing data by using the quasi-likelihood function for the complete data. This method can be useful when we can make first- and second-moment assumptions only; in addition, it can be helpful when the EM algorithm applied to the actual likelihood becomes overly complicated. First we derive a loss function for the observed data using an exponential family density which has the same mean and variance structure as the complete data. This loss function is the counterpart of the quasi-deviance for the observed data. Then the loss function is minimized using the EM algorithm. The use of the EM algorithm guarantees a decrease in the loss function at every iteration. When the observed data can be expressed as a deterministic linear transformation of the complete data, or when data are missing completely at random, the proposed method yields consistent estimators. Examples are given for overdispersed polytomous data, linear random-effects models, and linear regression with missing covariates. Simulation results for the linear regression model with missing covariates show that the proposed estimates are more efficient than estimates based on completely observed units, even when outcomes are bimodal or skewed.

7.
In earlier work, Kirchner [An estimation procedure for the Hawkes process. Quant Financ. 2017;17(4):571–595], we introduced a nonparametric estimation method for the Hawkes point process. In this paper, we present a simulation study that compares this nonparametric method to maximum-likelihood estimation. We find that the standard deviations of both estimation methods decrease as power laws in the sample size. Moreover, the standard deviations are proportional. For example, for a specific Hawkes model, the standard deviation of the branching-coefficient estimate is roughly 20% larger than for MLE, over all sample sizes considered. This factor becomes smaller as the true underlying branching coefficient becomes larger. In terms of runtime, our method clearly outperforms MLE. The bias present in our method can be well explained and controlled. As an incidental finding, we observe that MLE estimates also seem to be significantly biased when the underlying Hawkes model is near criticality. This calls for a more rigorous analysis of the Hawkes likelihood and its optimization.
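The Hawkes model itself is not spelled out in the abstract. For readers who want to reproduce a simulation study of this kind, here is a minimal Ogata-thinning simulator for a Hawkes process with an exponential kernel, lambda(t) = mu + alpha * sum over past events of exp(-beta*(t - t_i)); all parameter values are illustrative assumptions, and the branching coefficient discussed in the abstract is alpha/beta.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    # Ogata thinning: propose candidate times using a local upper bound on
    # the intensity, then accept each with probability lambda(t) / bound
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        # the intensity only decays until the next event, so the current
        # intensity is a valid upper bound for the proposal step
        bound = mu + alpha * sum(math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(bound)
        if t >= horizon:
            break
        lam = mu + alpha * sum(math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam / bound:
            events.append(t)
    return events

# branching coefficient alpha/beta = 0.5, so E[N] is about mu*T/(1 - 0.5)
ev = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, horizon=200.0, seed=4)
print(len(ev))
```

Feeding such simulated paths to both estimators is exactly the kind of experiment the paper's standard-deviation comparison is based on.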

8.
Common kernel density estimators (KDEs) are generalised so that assumptions about the kernel of the distribution can be incorporated. Instead of using metrics as input to the kernels, the new estimators use parameterisable pseudometrics. In general, the volumes of balls in pseudometric spaces depend on both the radius and the location of the centre. To enable constant smoothing, the volumes of the balls need to be calculated, and analytical expressions are preferred for computational reasons. Two suitable parametric families of pseudometrics are identified, one of which has common KDEs as special cases. In a few experiments, the proposed estimators show increased statistical power when proper assumptions are made. As a consequence, this paper describes an approach in which partial knowledge about the distribution can be used effectively. Furthermore, it is suggested that the new estimators are suitable for statistical learning algorithms such as regression and classification.
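The paper's two pseudometric families are not reproduced in the abstract. As a simplified sketch of the general idea, the estimator below feeds a parameterisable weighted-Euclidean distance (a pseudometric when a weight is allowed to be zero) into a Gaussian kernel, and the closed-form normalising constant plays the role of the analytical ball-volume correction that keeps smoothing constant; the weights and bandwidth are illustrative assumptions.

```python
import math
import random

def weighted_metric_kde(data, weights=(1.0, 1.0), h=0.4):
    # 2-D KDE whose kernel input is d(p, q)^2 = a*dx^2 + b*dy^2.
    # The normaliser sqrt(a*b) / (2*pi*h^2) makes the kernel integrate
    # to one, so the density estimate itself integrates to one.
    a, b = weights
    norm = math.sqrt(a * b) / (2.0 * math.pi * h * h)
    def f(p):
        x, y = p
        total = 0.0
        for (u, v) in data:
            d2 = a * (x - u) ** 2 + b * (y - v) ** 2
            total += math.exp(-d2 / (2.0 * h * h))
        return norm * total / len(data)
    return f

random.seed(2)
pts = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(100)]
fhat = weighted_metric_kde(pts, weights=(1.0, 2.0))
print(fhat((0.0, 0.0)))
```

Setting both weights to one recovers an ordinary Euclidean KDE, which mirrors the abstract's remark that common KDEs arise as special cases.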

9.
This paper presents an asymptotic equivalence result with a sharp rate of convergence for the sample median and the Harrell–Davis median estimator. The consequences of this result are discussed.
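For readers unfamiliar with the estimator being compared to the sample median, here is a small sketch of the Harrell–Davis median: a weighted average of the order statistics, with the weight of the i-th order statistic equal to the Beta((n+1)/2, (n+1)/2) probability mass on [i/n, (i+1)/n]. The midpoint-rule integration of the Beta density is an implementation shortcut, not part of the original definition.

```python
import math
import random

def beta_pdf(t, a, b):
    if t <= 0.0 or t >= 1.0:
        return 0.0
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1.0) * math.log(t) + (b - 1.0) * math.log(1.0 - t))

def harrell_davis_median(x, steps=200):
    # weight of the i-th order statistic = Beta mass on [i/n, (i+1)/n],
    # approximated here by a composite midpoint rule
    n = len(x)
    s = sorted(x)
    a = b = (n + 1) / 2.0
    est = 0.0
    for i in range(n):
        lo = i / n
        h = 1.0 / (n * steps)
        w = sum(beta_pdf(lo + (k + 0.5) * h, a, b) for k in range(steps)) * h
        est += w * s[i]
    return est

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(101)]
print(harrell_davis_median(data))
```

On well-behaved samples the two estimators are close, which is the setting of the asymptotic equivalence result above.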

10.
This article fits a nonparametric ACD model to intraday trading data from the Chinese securities market. The nonparametric ACD model does not depend on the functional form of the conditional mean or on the distribution of the error term, and is therefore more general. The empirical analysis is carried out from several angles. The nonparametric results show that the data cannot be described by a linear ACD model; based on the shape of the nonparametrically fitted surface, the functional form of the ACD model can instead be specified as a particular nonlinear form.

11.
A new fast algorithm for computing the nonparametric maximum likelihood estimate of a univariate log‐concave density is proposed and studied. It is an extension of the constrained Newton method for nonparametric mixture estimation. In each iteration, the newly extended algorithm includes, if necessary, new knots that are located via a special directional derivative function. The algorithm renews the changes of slope at all knots via a quadratically convergent method and removes the knots at which the changes of slope become zero. Theoretically, the characterisation of the nonparametric maximum likelihood estimate is studied and the algorithm is guaranteed to converge to the unique maximum likelihood estimate. Numerical studies show that it outperforms other algorithms that are available in the literature. Applications to some real‐world financial data are also given.

12.
In this article, we present a test for uniformity. Based on this test, we provide a test for exponentiality. Empirical critical values for both tests are computed. Both tests are compared with the tests proposed by Noughabi and Arghami [H. Alizadeh Noughabi and N.R. Arghami, Testing exponentiality using transformed data, J. Statist. Comput. Simul. 81 (4) (2011), pp. 511–516] using simulation experiments for a wide class of alternatives. The tests possess attractive power properties.
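The abstract does not state the test statistic, so the sketch below substitutes a simple maximum-spacing statistic for uniformity and shows how empirical critical values of the kind mentioned can be obtained by Monte Carlo; the statistic, sample size, and replication count are all assumptions for illustration.

```python
import random

def max_spacing_stat(u):
    # test statistic: the largest spacing of the ordered sample on [0, 1];
    # clustered (non-uniform) samples leave a big gap and give large values
    pts = [0.0] + sorted(u) + [1.0]
    return max(b - a for a, b in zip(pts, pts[1:]))

def empirical_critical_value(stat, n, alpha=0.05, reps=2000, seed=42):
    # Monte Carlo critical value under H0: simulate U(0, 1) samples,
    # take the upper alpha quantile of the simulated statistics
    rng = random.Random(seed)
    vals = sorted(stat([rng.random() for _ in range(n)]) for _ in range(reps))
    return vals[int((1.0 - alpha) * reps)]

cv = empirical_critical_value(max_spacing_stat, n=20)
print(cv)
```

Rejecting when the observed statistic exceeds `cv` gives an exact level-alpha test up to Monte Carlo error, which is how empirical critical values are typically tabulated.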

13.
A set of Fortran-77 subroutines is described which computes a nonparametric density estimator expressed as a Fourier series. In addition, a subroutine is given for the estimation of a cumulative distribution function. Performance measures are given based on samples from a Weibull distribution. Owing to their small size and modest memory demands, these subroutines are easily implemented on most small computers.
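The original subroutines are in Fortran-77 and are not listed in the abstract; the sketch below shows the same idea in Python: a truncated cosine-series density estimator on [0, 1] with empirical Fourier coefficients. The truncation point K and the test distribution are illustrative assumptions (a Beta sample is used instead of the paper's Weibull, to keep the support in [0, 1]).

```python
import math
import random

def fourier_density(data, K=8):
    # f_hat(x) = 1 + sum_k a_k * sqrt(2) * cos(pi*k*x) on [0, 1], where
    # a_k is estimated by the sample mean of sqrt(2) * cos(pi*k*X_i)
    n = len(data)
    coefs = [sum(math.sqrt(2.0) * math.cos(math.pi * k * x) for x in data) / n
             for k in range(1, K + 1)]
    def f(x):
        return 1.0 + sum(a * math.sqrt(2.0) * math.cos(math.pi * k * x)
                         for k, a in enumerate(coefs, start=1))
    return f

random.seed(3)
sample = [random.betavariate(2, 2) for _ in range(2000)]
fhat = fourier_density(sample)
print(fhat(0.5))  # true Beta(2,2) density at 0.5 is 1.5
```

The truncation point K controls the usual bias-variance trade-off: larger K reduces series-truncation bias but inflates the variance of the estimated coefficients.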

14.
Iterative reweighting (IR) is a popular method for computing M-estimates of location and scatter in multivariate robust estimation. When the objective function comes from a scale mixture of normal distributions, the iterative reweighting algorithm can be identified as an EM algorithm. The purpose of this paper is to show that, in the special case of the multivariate t-distribution, substantial improvements to the convergence rate can be obtained by modifying the EM algorithm.
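As a hedged one-dimensional sketch of the iterative-reweighting EM described above (the paper treats the multivariate case and then accelerates the algorithm; no acceleration is attempted here), the code fits the location and scale of a Student-t with fixed degrees of freedom; the value of nu, the outlier values, and the iteration count are illustrative assumptions.

```python
import random

def t_location_scale_em(x, nu=4.0, iters=200):
    # iterative reweighting as EM: points far from the current centre
    # receive small weights, which makes the estimates robust to outliers
    n = len(x)
    mu = sum(x) / n
    s2 = sum((v - mu) ** 2 for v in x) / n
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + (v - mu) ** 2 / s2) for v in x]  # E-step weights
        mu = sum(wi * v for wi, v in zip(w, x)) / sum(w)         # M-step: location
        s2 = sum(wi * (v - mu) ** 2 for wi, v in zip(w, x)) / n  # M-step: scale
    return mu, s2

random.seed(5)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + [50.0] * 5
mu, s2 = t_location_scale_em(data)
print(mu, s2)
```

With the gross outliers at 50, the sample mean is pulled well above 1, while the reweighted location estimate stays near 0; the paper's point is that this plain EM iteration, though reliable, can be made to converge much faster.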

15.
In this paper, we propose and evaluate the performance of different parametric and nonparametric estimators of the population coefficient of variation under Ranked Set Sampling (RSS) for a normal distribution. The performance of the proposed estimators was assessed in terms of the bias and relative efficiency obtained from a Monte Carlo simulation study. An application to anthropometric measurement data from a human population is also presented. The results showed that the proposed estimators via RSS have an expressively lower mean squared error than the usual estimator obtained via Simple Random Sampling. The superiority of the maximum likelihood estimator was also verified, provided the necessary assumptions of normality and perfect ranking are met.
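The paper's estimators are not reproduced in the abstract. As a minimal sketch of the sampling design itself, the code below draws a balanced ranked set sample under a normal distribution and computes the plain plug-in coefficient of variation; the set size, number of cycles, and distribution parameters are illustrative assumptions, and ranking is taken to be perfect.

```python
import random
import statistics

def ranked_set_sample(rng, set_size, cycles, draw):
    # one RSS observation = the i-th smallest of a fresh set of set_size
    # draws; cycling i over 0..set_size-1 gives a balanced design
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            s = sorted(draw(rng) for _ in range(set_size))
            out.append(s[i])
    return out

rng = random.Random(7)
rss = ranked_set_sample(rng, set_size=4, cycles=50,
                        draw=lambda r: r.gauss(100.0, 10.0))
cv = statistics.stdev(rss) / statistics.fmean(rss)
print(cv)  # population coefficient of variation is 10/100 = 0.1
```

Because the balanced mixture of order statistics reproduces the parent distribution, the pooled RSS sample still targets the population CV, while the stratification is what drives the efficiency gains reported in the paper.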

16.
This research focuses on the estimation of tumor incidence rates from long-term animal studies which incorporate interim sacrifices. A nonparametric stochastic model is described with transition rates between states corresponding to the tumor incidence rate, the overall death rate, and the death rate for tumor-free animals. Exact analytic solutions for the maximum likelihood estimators of the hazard rates are presented, and their application to data from a long-term animal study is illustrated by an example. Unlike many common methods for estimation and comparison of tumor incidence rates among treatment groups, the estimators derived in this paper require no assumptions regarding tumor lethality or treatment lethality. The small sample operating characteristics of these estimators are evaluated using Monte Carlo simulation studies.

17.
In this paper a new method called the EMS algorithm is used to solve Wicksell's corpuscle problem, that is, the determination of the distribution of sphere radii in a medium given the radii of their profiles in a random slice. The EMS algorithm combines the EM algorithm, a procedure for obtaining maximum likelihood estimates of parameters from incomplete data, with simple smoothing. The method is tested on simulated data from three different sphere-radius densities, namely a bimodal mixture of Normals, a Weibull and a Normal. The effects of varying the level of smoothing, the number of classes in which the data are binned, and the number of classes at which the estimated density is evaluated are investigated. Comparisons are made between these results and those obtained by others in this field.

18.
19.
The l2 error of the linear wavelet estimator of a density from a random sample is shown to be asymptotically normal without imposing smoothness conditions on the density. As an application, the goodness-of-fit test and the approximation of its power for close alternatives are considered.
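The estimator in the abstract is generic; as a concrete hedged instance, the sketch below builds the linear wavelet density estimator with the Haar scaling function at resolution level j, which reduces to a dyadic histogram on [0, 1). The resolution level and the test sample are illustrative assumptions.

```python
import random

def haar_linear_density(data, j):
    # f_hat(x) = sum_k c_k * phi_jk(x), where phi_jk = 2**(j/2) on the dyadic
    # bin [k/2**j, (k+1)/2**j) and c_k is the sample mean of phi_jk(X_i);
    # for the Haar scaling function this is exactly a dyadic histogram
    n = len(data)
    bins = 2 ** j
    counts = [0] * bins
    for x in data:
        if 0.0 <= x < 1.0:
            counts[int(x * bins)] += 1
    scale = 2.0 ** (j / 2.0)
    coef = [scale * c / n for c in counts]
    return lambda x: coef[int(x * bins)] * scale if 0.0 <= x < 1.0 else 0.0

random.seed(3)
sample = [random.betavariate(2, 2) for _ in range(2000)]
fhat = haar_linear_density(sample, j=4)
print(fhat(0.5))  # true Beta(2,2) density at 0.5 is 1.5
```

The l2 error of this estimator is a sum over bins of squared deviations of cell frequencies from cell probabilities, which is the quantity whose asymptotic normality the paper establishes without smoothness assumptions.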

20.
A new density-based classification method that uses semiparametric mixtures is proposed. Like other density-based classifiers, it first estimates the probability density function for the observations in each class, with a semiparametric mixture, and then classifies a new observation by the highest posterior probability. By making proper use of a recently developed multivariate nonparametric density estimator, it is able to produce adaptively smooth and complicated decision boundaries in a high-dimensional space and can thus work well in such cases. Issues specific to classification are studied and discussed. Numerical studies using simulated and real-world data show that the new classifier performs very well compared with other commonly used classification methods.
