Similar articles
20 similar articles found.
1.
Holonomic function theory has been successfully implemented in a series of recent papers to efficiently calculate the normalizing constant and perform likelihood estimation for the Fisher–Bingham distributions. A key ingredient for establishing the standard holonomic gradient algorithms is the calculation of the Pfaffian equations. So far, these papers either calculate these symbolically or apply certain methods to simplify this process. Here we show the explicit form of the Pfaffian equations using the expressions from Laplace inversion methods. This improves on the implementation of the holonomic algorithms for these problems and enables their adjustments for the degenerate cases. As a result, an exact and more dimensionally efficient ODE is implemented for likelihood inference.
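For readers unfamiliar with the holonomic gradient machinery, its generic skeleton is an ODE integration of the Pfaffian system along a path in parameter space. The sketch below shows only that skeleton; the problem-specific Pfaffian matrices (e.g., for the Fisher–Bingham case) are left as a user-supplied, hypothetical callable and are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def holonomic_gradient(pfaffian, theta0, theta1, g0):
    """Generic holonomic gradient method skeleton: integrate the Pfaffian
    system dG/dt = A(theta(t), dtheta) @ G along the straight-line path from
    theta0 to theta1.  The callable `pfaffian(theta, dtheta)` must return the
    matrix sum_i dtheta_i * P_i(theta); it is problem-specific and only a
    hypothetical placeholder here."""
    theta0 = np.asarray(theta0, dtype=float)
    dtheta = np.asarray(theta1, dtype=float) - theta0

    def rhs(t, g):
        return pfaffian(theta0 + t * dtheta, dtheta) @ g

    sol = solve_ivp(rhs, (0.0, 1.0), np.asarray(g0, dtype=float),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]
```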

2.
In 2008, Marsan and Lengliné presented a nonparametric way to estimate the triggering function of a Hawkes process. Their method requires an iterative and computationally intensive procedure which ultimately produces only approximate maximum likelihood estimates (MLEs) whose asymptotic properties are poorly understood. Here, we note a mathematical curiosity that allows one to compute, directly and extremely rapidly, exact MLEs of the nonparametric triggering function. The method here requires that the number q of intervals on which the nonparametric estimate is sought equals the number n of observed points. The resulting estimates have very high variance but may be smoothed to form more stable estimates. The performance and computational efficiency of the proposed method are verified in two disparate, highly challenging simulation scenarios: first, to estimate the triggering functions, with simulation-based 95% confidence bands, for earthquakes and their aftershocks in Loma Prieta, California, and second, to characterise triggering in confirmed cases of plague in the United States over the last century. In both cases, the proposed estimator can be used to describe the rate of contagion of the processes in detail, and the computational efficiency of the estimator facilitates the construction of simulation-based confidence intervals.
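For reference, here is a minimal sketch of the standard Hawkes log-likelihood with a parametric exponential triggering kernel; it is not the nonparametric q = n estimator described in the abstract, and the function and argument names are illustrative only.

```python
import numpy as np

def hawkes_exp_loglik(params, t, T):
    """Log-likelihood of a Hawkes process on [0, T] with background rate mu
    and exponential triggering kernel g(u) = alpha * beta * exp(-beta * u).
    t must be a sorted array of event times."""
    mu, alpha, beta = params
    t = np.asarray(t, dtype=float)
    # A[i] = sum_{j < i} exp(-beta * (t[i] - t[j])), computed recursively
    A = np.zeros(len(t))
    for i in range(1, len(t)):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A[i - 1])
    log_intensity = np.sum(np.log(mu + alpha * beta * A))
    # compensator: integral of the conditional intensity over [0, T]
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - t)))
    return log_intensity - compensator
```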

3.
In this paper, we apply the empirical likelihood method to the heteroscedastic partially linear errors-in-variables model. For the cases of known and unknown error variances, two different empirical log-likelihood ratios for the parameter of interest are constructed. If the error variances are known, the empirical log-likelihood ratio is shown to have an asymptotic chi-square distribution under the assumption that the errors form a sequence of stationary α-mixing random variables. Furthermore, if the error variances are unknown, we show that the proposed statistic has an asymptotic standard chi-square distribution when the errors are independent. Simulations are carried out to assess the performance of the proposed method.
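As a generic illustration of the empirical likelihood construction (not the partially linear errors-in-variables statistic of the paper), the sketch below computes Owen's -2 log empirical likelihood ratio for a simple mean via the usual Lagrange-multiplier root-finding.

```python
import numpy as np
from scipy.optimize import brentq

def el_ratio_mean(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's construction
    for a simple mean, shown only as a generic illustration)."""
    d = np.asarray(x, dtype=float) - mu
    if d.max() <= 0 or d.min() >= 0:          # mu outside the convex hull
        return np.inf
    # Lagrange multiplier must keep all weights positive: 1 + lam * d_i > 0
    lo = -(1.0 - 1e-8) / d.max()
    hi = -(1.0 - 1e-8) / d.min()
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))
```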

4.
The Generalized Hyperbolic distribution (Barndorff-Nielsen 1977) is a variance-mean mixture of a normal distribution with the Generalized Inverse Gaussian distribution. Recently, subclasses of these distributions (e.g., the hyperbolic distribution and the Normal Inverse Gaussian distribution) have been applied to construct stochastic processes in turbulence and particularly in finance, where multidimensional problems are of special interest. Parameter estimation for these distributions based on an i.i.d. sample is a difficult task even for a specified one-dimensional subclass (a subclass being uniquely defined by the index parameter λ) and relies on numerical methods. For the hyperbolic subclass (λ = 1), the computer program hyp (Blæsild and Sørensen 1992) estimates parameters via ML when the dimensionality is less than or equal to three. To the best of the author's knowledge, no successful attempts have been made to fit any given subclass when the dimensionality is greater than three. This article proposes a simple EM-based (Dempster, Laird and Rubin 1977) ML estimation procedure to estimate parameters of the distribution when the subclass is known, regardless of the dimensionality. Our method relies on the ability to numerically evaluate modified Bessel functions of the third kind and their logarithms, which is made possible by currently available software. The method is applied to fit the five-dimensional Normal Inverse Gaussian distribution to a series of returns on foreign exchange rates.
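As a minimal illustration of the Bessel-function ingredient, the sketch below evaluates log K_v(x) stably via SciPy's exponentially scaled kve and uses it in a one-dimensional NIG log-density in one common parameterization; the EM machinery of the article is not reproduced here, and the parameterization is an assumption.

```python
import numpy as np
from scipy.special import kve  # exponentially scaled Bessel: kve(v, x) = K_v(x) * exp(x)

def log_bessel_k(v, x):
    """Numerically stable log of the modified Bessel function of the third kind."""
    x = np.asarray(x, dtype=float)
    return np.log(kve(v, x)) - x

def nig_logpdf(x, alpha, beta, mu, delta):
    """Log-density of the one-dimensional Normal Inverse Gaussian distribution
    in one common parameterization (requires |beta| < alpha and delta > 0)."""
    gamma = np.sqrt(alpha ** 2 - beta ** 2)
    q = np.sqrt(delta ** 2 + (x - mu) ** 2)
    return (np.log(alpha * delta / np.pi) + delta * gamma + beta * (x - mu)
            + log_bessel_k(1, alpha * q) - np.log(q))
```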

5.
The Birnbaum–Saunders (BS) distribution is a positively skewed distribution and is a common model for analysing lifetime data. In this paper, we discuss the existence and uniqueness of the maximum likelihood estimates (MLEs) of the parameters of the BS distribution based on Type-I, Type-II and hybrid censored samples. The line of proof is based on the monotonicity property of the likelihood function. We then describe the numerical iterative procedure for determining the MLEs of the parameters, and briefly point out some recently developed simple methods of estimation in the case of Type-II censoring. Some graphical illustrations of the approach are given for three real data sets from the reliability literature. Finally, for illustrative purposes, we also present an example in which the MLEs do not exist.
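A minimal sketch of the Type-II censored BS log-likelihood under the standard shape/scale parameterization is given below; Type-I and hybrid censoring change only the censored contribution, and the starting values and optimizer are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def bs_type2_loglik(params, t_obs, n):
    """Type-II censored Birnbaum-Saunders log-likelihood sketch: t_obs holds
    the r smallest (sorted) failure times out of n items on test."""
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return -np.inf
    t_obs = np.asarray(t_obs, dtype=float)
    r = len(t_obs)
    xi = np.sqrt(t_obs / beta) - np.sqrt(beta / t_obs)
    dxi = (np.sqrt(beta / t_obs) + np.sqrt(beta / t_obs) ** 3) / (2.0 * beta)
    loglik = np.sum(norm.logpdf(xi / alpha) + np.log(dxi / alpha))
    loglik += (n - r) * norm.logsf(xi[-1] / alpha)   # units surviving past t_(r)
    return loglik

# numerical MLE (illustrative starting values)
# res = minimize(lambda p: -bs_type2_loglik(p, t_obs, n),
#                x0=[1.0, np.median(t_obs)], method="Nelder-Mead")
```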

6.
7.
Nuisance parameter elimination is a central problem in capture–recapture modelling. In this paper, we consider a closed population capture–recapture model which assumes that the capture probabilities vary only with the sampling occasion. In this model, the capture probabilities are regarded as nuisance parameters and the unknown number of individuals is the parameter of interest. In order to eliminate the nuisance parameters, the likelihood function is integrated with respect to a weight function (uniform or Jeffreys) on the nuisance parameters, resulting in an integrated likelihood function depending only on the population size. For these integrated likelihood functions, analytical expressions for the maximum likelihood estimates are obtained and it is proved that they are always finite and unique. Variance estimates of the proposed estimators are obtained via a parametric bootstrap resampling procedure. The proposed methods are illustrated on a real data set and their frequentist properties are assessed by means of a simulation study.
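For the uniform weight, the integration can be written in terms of Beta functions, which gives a simple sketch of the integrated likelihood for the occasion-varying (M_t) model; the grid search below is illustrative only, since the paper derives closed-form MLEs and also treats a Jeffreys weight, neither of which is reproduced here, and the data are hypothetical.

```python
import numpy as np
from scipy.special import gammaln, betaln

def log_integrated_lik(N, n_j, r):
    """Uniform-weight integrated log-likelihood for the closed-population M_t
    model (capture probabilities varying by occasion), up to constants in N.
    N: candidate population size; n_j: captures per occasion; r: number of
    distinct individuals ever captured."""
    n_j = np.asarray(n_j)
    if N < max(r, int(n_j.max())):
        return -np.inf
    return (gammaln(N + 1) - gammaln(N - r + 1)        # N! / (N - r)!
            + np.sum(betaln(n_j + 1, N - n_j + 1)))    # Beta-function integrals

# hypothetical data: captures on three occasions, 60 distinct animals seen
n_j, r = np.array([30, 25, 28]), 60
grid = np.arange(60, 2001)
N_hat = grid[np.argmax([log_integrated_lik(N, n_j, r) for N in grid])]
```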

8.
We consider a sequence of contingency tables whose cell probabilities may vary randomly. The distribution of cell probabilities is modelled by a Dirichlet distribution. Bayes and empirical Bayes estimates of the log odds ratio are obtained. Emphasis is placed on estimating the risks associated with the Bayes, empirical Bayes and maximum likelihood estimates of the log odds ratio.
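As a concrete illustration for a single 2x2 table with a symmetric Dirichlet prior (a simplification of the multi-table setting in the abstract), the posterior mean of the log odds ratio has a closed form through the digamma function.

```python
import numpy as np
from scipy.special import digamma

def bayes_log_odds_ratio(table, prior=1.0):
    """Posterior mean of the log odds ratio for a 2x2 table under a symmetric
    Dirichlet(prior, ..., prior) prior on the cell probabilities.  Uses
    E[log p_ij] = digamma(a_ij) - digamma(a_0); the digamma(a_0) terms cancel
    in the log odds ratio."""
    a = np.asarray(table, dtype=float) + prior   # posterior Dirichlet parameters
    return (digamma(a[0, 0]) + digamma(a[1, 1])
            - digamma(a[0, 1]) - digamma(a[1, 0]))

print(bayes_log_odds_ratio([[20, 5], [10, 15]]))   # hypothetical 2x2 counts
```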

9.
We reveal that the minimum Anderson–Darling (MAD) estimator is a variant of the maximum likelihood method. Furthermore, it is shown that the MAD estimator offers excellent opportunities for parameter estimation if there is no explicit formulation of the distribution model. The computation time for the MAD estimator with an approximated cumulative distribution function is much shorter than that of the classical maximum likelihood method with an approximated probability density function. Additionally, we study the performance of the MAD estimator for the generalized Pareto distribution and demonstrate a further advantage of the estimator in an application to seismic hazard analysis.
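A minimal sketch of the MAD idea for the generalized Pareto case is shown below: the Anderson-Darling statistic is evaluated at the fitted CDF and minimized over the parameters. The simulated data, starting values and zero threshold are illustrative assumptions, not the study's setup.

```python
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import minimize

def ad_statistic(params, x):
    """Anderson-Darling statistic for a generalized Pareto fit (xi = shape,
    sigma = scale, threshold taken as zero); minimized over the parameters."""
    xi, sigma = params
    if sigma <= 0:
        return np.inf
    x = np.sort(x)
    n = len(x)
    F = np.clip(genpareto.cdf(x, c=xi, scale=sigma), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

x = genpareto.rvs(c=0.2, scale=1.0, size=500, random_state=1)   # hypothetical data
res = minimize(ad_statistic, x0=[0.1, 1.0], args=(x,), method="Nelder-Mead")
```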

10.
For a given parametric probability model, we consider the risk of the maximum likelihood estimator with respect to the α-divergence, which includes as special cases the Kullback–Leibler divergence, the Hellinger distance, and (essentially) the χ2-divergence. The asymptotic expansion of the risk is given in the sample size n up to order n^{-2}. Each term in the expansion is expressed through the geometrical properties of the Riemannian manifold formed by the parametric probability model.
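For reference, one common (Amari-type) parameterization of the α-divergence family is the following; note that sign and scaling conventions differ across authors, so this is an assumed convention rather than necessarily the one used in the paper.

```latex
% One common (Amari-type) parameterization of the alpha-divergence family.
D_\alpha(p \,\|\, q) \;=\; \frac{4}{1-\alpha^{2}}
  \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}} \, dx \right),
  \qquad \alpha \neq \pm 1 .
```

In this convention the limits α → ±1 recover the two Kullback–Leibler divergences, α = 0 gives four times the squared Hellinger distance, and α = ±3 gives the two χ2-divergences up to a factor of 1/2, which is the sense in which the χ2 case is "essentially" included.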

11.
An alternative to the maximum likelihood (ML) method, the maximum spacing (MSP) method, is introduced in Cheng and Amin [1983. Estimating parameters in continuous univariate distributions with a shifted origin. J. Roy. Statist. Soc. Ser. B 45, 394–403], and independently in Ranneby [1984. The maximum spacing method. An estimation method related to the maximum likelihood method. Scand. J. Statist. 11, 93–112]. The method, as described by Ranneby [1984], is derived from an approximation of the Kullback–Leibler divergence. Since the introduction of the MSP method, several closely related methods have been suggested. This article is a survey of such methods based on spacings and the Kullback–Leibler divergence. These estimation methods possess good properties and they work in situations where the ML method does not. Important issues such as the handling of ties and incomplete data are discussed, and it is argued that by using Moran's [1951. The random division of an interval—Part II. J. Roy. Statist. Soc. Ser. B 13, 147–150] statistic, on which the MSP method is based, we can effectively combine: (a) a test of whether an assigned model of distribution functions is correct or not, (b) an asymptotically efficient estimation of an unknown parameter θ0, and (c) a computation of a confidence region for θ0.
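A minimal sketch of the MSP criterion is shown below for a Weibull fit: the spacings of the fitted CDF over the ordered sample (with 0 and 1 appended) are formed and the sum of their logarithms, Moran's statistic up to sign, is maximized. The Weibull choice, simulated data and optimizer settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

def neg_msp(params, x):
    """Negative maximum-spacing (Moran) criterion for a two-parameter Weibull:
    the sum of log spacings of the fitted CDF over the ordered sample."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    u = weibull_min.cdf(np.sort(x), c=shape, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    if np.any(spacings <= 0):        # ties or numerically identical CDF values
        return np.inf
    return -np.sum(np.log(spacings))

x = weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=0)   # hypothetical data
res = minimize(neg_msp, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
```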

12.
It is assumed that the logs of the times to failure in a life test follow a normal distribution. If the test is terminated after r of a sample of n items fail, the test is said to be censored. If the sample size is small and the censoring severe, the usual maximum likelihood estimator of σ is downwardly biased. Monte Carlo techniques and regression analysis were used to develop an empirical correction factor. Applying the correction factor to the maximum likelihood estimator yields an unbiased estimate of σ.
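A sketch of the Type-II censored normal MLE that such a correction factor would be applied to is given below; repeating it over many simulated samples gives the bias of the σ estimate, which is roughly the setting in which a correction factor would be calibrated. The optimizer and starting values are illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_normal_mle(logt, n):
    """Type-II censored normal MLE sketch: logt holds the r smallest (sorted)
    log failure times out of n units; the other n-r are censored at logt[-1]."""
    logt = np.asarray(logt, dtype=float)
    r = len(logt)

    def nll(p):
        mu, sigma = p
        if sigma <= 0:
            return np.inf
        z = (logt - mu) / sigma
        return -(np.sum(norm.logpdf(z) - np.log(sigma))   # observed failures
                 + (n - r) * norm.logsf(z[-1]))            # censored units
    return minimize(nll, x0=[np.mean(logt), np.std(logt) + 1e-6],
                    method="Nelder-Mead").x
```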

13.
This paper describes the results of simulation studies on a class of nonparametric tests, T0, developed for the hypothesis of homogeneity, H0, of several multivariate populations, and also on the class of tests, T1, developed subsequently for the hypothesis, H1, of parallelism of population profiles. The simulation studies relate, first, to the accuracy of the large-sample null approximation for finite samples and, secondly, to the power under various types of alternatives to H0 and H1.

14.
This work characterizes the dispersion of some popular random probability measures, including the bootstrap, the Bayesian bootstrap, and the Pólya tree prior. This dispersion is measured in terms of the variation of the Kullback–Leibler divergence of a random draw from the process to its baseline centring measure. By providing a quantitative expression of this dispersion around the baseline distribution, our work provides insight for comparing different parameterizations of the models and for setting prior parameters in applied Bayesian settings. This highlights some limitations of the existing canonical choice of parameter settings in the Pólya tree process.
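For the Bayesian bootstrap case this dispersion is easy to probe by Monte Carlo: a random draw puts Dirichlet(1, ..., 1) weights on the n observations, and its KL divergence from the empirical baseline (weights 1/n) is the sum of w_i log(n w_i). A minimal sketch, with arbitrary sample size and replication count:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 5000

# Bayesian bootstrap draws: weights w ~ Dirichlet(1, ..., 1) over n observations.
w = rng.dirichlet(np.ones(n), size=reps)
# KL divergence of each random measure from the empirical baseline (1/n weights).
kl = np.sum(w * np.log(np.maximum(n * w, 1e-300)), axis=1)
print(kl.mean(), kl.std())   # Monte Carlo summary of the dispersion
```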

15.
Non-normality and heteroscedasticity are common in applications. For the comparison of two samples in the non-parametric Behrens–Fisher problem, different tests have been proposed, but no single test can be recommended for all situations. Here, we propose combining two tests, the Welch t test based on ranks and the Brunner–Munzel test, within a maximum test. Simulation studies indicate that this maximum test, performed as a permutation test, controls the type I error rate and stabilizes the power. That is, it has good power characteristics for a variety of distributions, and also for unbalanced sample sizes. Compared to the single tests, the maximum test shows acceptable type I error control.
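A permutation maximum test in this spirit can be sketched with SciPy's existing tests; the exact studentization and tie handling of the proposed procedure may differ, so treat the following as an illustration only.

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind, brunnermunzel

def max_test(x, y, n_perm=2000, seed=0):
    """Permutation maximum test combining the Welch t test on ranks and the
    Brunner-Munzel test: take the larger absolute statistic and compare it
    with its permutation distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)

    def stat(z):
        r = rankdata(z)
        t_ranks = abs(ttest_ind(r[:n_x], r[n_x:], equal_var=False).statistic)
        bm = abs(brunnermunzel(z[:n_x], z[n_x:]).statistic)
        return max(t_ranks, bm)

    obs = stat(pooled)
    perm = np.array([stat(rng.permutation(pooled)) for _ in range(n_perm)])
    return (1 + np.sum(perm >= obs)) / (1 + n_perm)   # permutation p-value
```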

16.
This paper proposes the use of the Bernstein–Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein–Dirichlet process prior has so far mainly been used for nonparametric density estimation. Here we modify this approach to allow for an approximation of the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to be asymmetric Laplace distributed which increases the flexibility and robustness of the SIM. To automatically identify truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computations are performed via a Metropolis-Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance by an application to nonparametric modelling of the power consumption in a sewage treatment plant.
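The truncation-based treatment of stick-breaking priors mentioned above can be illustrated with a minimal weight-generation sketch; the Bernstein–Dirichlet link-function construction and the spike-and-slab variable selection are not reproduced here, and the concentration and truncation values are arbitrary.

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking weights for a Dirichlet-process-type prior
    with concentration alpha, truncated at K components."""
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0                                   # close off the truncation
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w

w = stick_breaking_weights(alpha=2.0, K=25, rng=np.random.default_rng(0))
print(w.sum())   # sums to 1 by construction
```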

17.
18.
Maximization of an auto-Gaussian log-likelihood function when spatial autocorrelation is present requires numerical evaluation of an n × n matrix determinant. Griffith and Sone proposed a solution to this problem. This article simplifies and then evaluates an alternative approximation that can also be used with massively large georeferenced data sets based upon a regular square tessellation; this makes it particularly relevant to remotely sensed image analysis. Estimation results reported for five data sets found in the literature confirm the utility of this newer approximation.
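For context, the expensive determinant is the log-Jacobian term log|I - rho W| that appears in auto-Gaussian (spatial autoregressive) log-likelihoods. A standard exact route uses the eigenvalues of the spatial weights matrix W, which is precisely what large-data approximations of the kind discussed here try to avoid; a sketch of that exact route:

```python
import numpy as np

def sar_logdet(rho, W):
    """log|I - rho * W| via the eigenvalues of the spatial weights matrix W
    (the classical eigenvalue route, not the approximation of the article)."""
    lam = np.linalg.eigvals(np.asarray(W, dtype=float)).astype(complex)
    # complex eigenvalues of a real W come in conjugate pairs, so the sum is real
    return np.sum(np.log(1.0 - rho * lam)).real
```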

19.
Consider a sequence x ≡ (x1, …, xn) of n independent observations, in which each observation xi is known to be a realization from one of ki given populations, chosen among k (≥ ki) populations π1, …, πk. Our main objective is to study the problem of selecting the most reliable population πj at a fixed time ξ, when no assumptions about the k populations are made. Some numerical examples are presented.

20.
The standard tensile test is one of the tools most frequently used for the evaluation of the mechanical properties of metals. An empirical model proposed by Ramberg and Osgood fits the tensile test data using a nonlinear model for the strain in terms of the stress. It is an Error-In-Variables (EIV) model because of the uncertainty affecting both the strain and stress measurement instruments. SIMEX, a simulation-based method for the estimation of model parameters, is a powerful tool for reducing the bias due to measurement error in EIV models. The plan of this article is as follows. In Sec. 2, we introduce the Ramberg–Osgood model and another reparametrization based on different assumptions on the independent variable. In Sec. 3, we summarize the SIMEX method for the case at hand. Section 4 compares SIMEX with other estimation methods in order to highlight the peculiarities of the different approaches. The last section contains some concluding remarks.
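As a generic illustration of the SIMEX idea, applied here to a simple linear regression slope rather than the Ramberg–Osgood strain–stress model and with illustrative names throughout: extra measurement noise is added at several levels λ, the naive estimator is recomputed and averaged, and a quadratic in λ is extrapolated back to λ = -1.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """Generic SIMEX sketch for a simple linear regression slope when the
    regressor is measured with known error variance sigma_u**2."""
    rng = np.random.default_rng(seed)
    lam = np.concatenate(([0.0], lambdas))
    est = []
    for l in lam:
        reps = []
        for _ in range(B if l > 0 else 1):
            # add extra noise with variance l * sigma_u**2 and refit naively
            x_l = x_obs + np.sqrt(l) * sigma_u * rng.standard_normal(len(x_obs))
            reps.append(np.polyfit(x_l, y, 1)[0])     # naive OLS slope
        est.append(np.mean(reps))
    coef = np.polyfit(lam, est, 2)                    # quadratic extrapolant in lambda
    return np.polyval(coef, -1.0)                     # extrapolate to lambda = -1
```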
