Similar articles
20 similar articles were found.
1.
Two-step estimation for inhomogeneous spatial point processes
Summary. The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function, and in a second step minimum contrast estimation is applied to the residual clustering parameters. Asymptotic normality of the parameter estimates is established under certain mixing conditions, and we exemplify how the results may be applied in ecological studies of rainforests.
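As a hedged illustration of what the first step amounts to, the sketch below approximates the Poisson likelihood for a log-linear intensity lambda(u) = exp(beta0 + beta1*z(u)) by a Poisson GLM fitted to quadrat counts with a log cell-area offset. The covariate, the coefficients and the simulated pattern are all invented for the example, and the second (minimum contrast, K-function) step is omitted; this is the standard discretization device, not the authors' estimating-function machinery.

```python
# Sketch of step 1 only: Poisson GLM on quadrat counts with a log cell-area offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ngrid = 50                                   # 50 x 50 quadrats on the unit square
xc = (np.arange(ngrid) + 0.5) / ngrid
xx, yy = np.meshgrid(xc, xc)
z = xx + yy                                  # a smooth spatial covariate
beta_true = (4.0, 1.5)
lam = np.exp(beta_true[0] + beta_true[1] * z)

cell_area = 1.0 / ngrid ** 2
counts = rng.poisson(lam * cell_area)        # Poisson counts per quadrat

X = sm.add_constant(z.ravel())
fit = sm.GLM(counts.ravel(), X,
             family=sm.families.Poisson(),
             offset=np.full(counts.size, np.log(cell_area))).fit()
print(fit.params)                            # roughly recovers (beta0, beta1)
```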

2.
We consider the problem of estimating the parameters of the covariance function of a stationary spatial random process. In spatial statistics there are widely used parametric forms for covariance functions, and various methods for estimating their parameters have been proposed in the literature. We develop a method for estimating the parameters of the covariance function that is based on a regression approach. Our method utilizes pairs of observations whose distances are closest to a value h > 0, which is chosen so that the estimated correlation at distance h equals a predetermined value. We demonstrate the effectiveness of our procedure by simulation studies and an application to a water pH data set. Simulation studies show that our method outperforms well-known least squares-based approaches to variogram estimation and is comparable to maximum likelihood estimation of the covariance function parameters. We also show that, under a mixing condition on the random field, the proposed estimator is consistent for standard one-parameter models of stationary correlation functions.
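For context, here is a minimal sketch of the kind of least squares-based variogram fitting that such simulation studies typically compare against: an exponential variogram fitted by ordinary least squares to a binned empirical semivariogram. The exponential model, the simulated Gaussian field and the binning are assumptions of this example, not the authors' regression-at-distance-h estimator.

```python
# Least-squares fit of an exponential variogram to a binned empirical semivariogram.
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0.0, 10.0, size=(n, 2))
D = squareform(pdist(coords))                      # pairwise distance matrix

sigma2_true, phi_true = 2.0, 1.5                   # exponential covariance parameters
C = sigma2_true * np.exp(-D / phi_true)
z = rng.multivariate_normal(np.zeros(n), C)        # one realization of the field

# binned empirical semivariogram: 0.5 * average of (z_i - z_j)^2 per distance bin
iu = np.triu_indices(n, k=1)
h = D[iu]
sqdiff = 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
bins = np.linspace(0.0, 5.0, 16)
mids = 0.5 * (bins[:-1] + bins[1:])
gamma_hat = np.array([sqdiff[(h >= lo) & (h < hi)].mean()
                      for lo, hi in zip(bins[:-1], bins[1:])])

def exp_variogram(h, sigma2, phi):
    return sigma2 * (1.0 - np.exp(-h / phi))

(sig2_ls, phi_ls), _ = curve_fit(exp_variogram, mids, gamma_hat, p0=[1.0, 1.0])
print(sig2_ls, phi_ls)                             # least-squares variogram estimates
```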

3.
Because spatial regression models incorporate spatial geographic information, their parameter estimation becomes complicated; since maximum likelihood is the main approach used, it is commonly believed that least squares has no role in estimating the parameters of spatial regression models. An analysis of the estimation techniques for spatial regression models shows that least squares and maximum likelihood are used to estimate different parameters of the model, and only by combining the two can all of the parameters be estimated quickly and effectively. Mathematical arguments show that the least squares estimators of the regression parameters in a spatial regression model are best linear unbiased estimators. Significance tests for the regression parameters can be carried out under normality of the estimators, whereas the spatial-effect parameters cannot be tested in this way.
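A minimal numerical sketch of this point, assuming a spatial error model y = Xβ + u with u = ρWu + ε and a toy row-standardized nearest-neighbour weight matrix W: for each candidate ρ the regression coefficients are ordinary least squares on the transformed data, while ρ itself is chosen by maximizing the concentrated log-likelihood. All data are simulated; this only illustrates combining least squares with maximum likelihood, not a full spatial econometrics routine.

```python
# OLS for the regression coefficients given rho, ML (grid search) for rho.
import numpy as np

rng = np.random.default_rng(7)
n = 100
coords = rng.uniform(size=(n, 2))

# row-standardized 5-nearest-neighbour spatial weight matrix
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = np.zeros((n, n))
for i in range(n):
    nn = np.argsort(D[i])[1:6]
    W[i, nn] = 1.0 / 5.0

X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, rho_true, sigma = np.array([1.0, 2.0]), 0.5, 1.0
u = np.linalg.solve(np.eye(n) - rho_true * W, rng.normal(0.0, sigma, n))
y = X @ beta_true + u

def concentrated_loglik(rho):
    B = np.eye(n) - rho * W
    By, BX = B @ y, B @ X
    beta = np.linalg.lstsq(BX, By, rcond=None)[0]   # OLS given rho
    resid = By - BX @ beta
    sig2 = resid @ resid / n
    return np.linalg.slogdet(B)[1] - 0.5 * n * np.log(sig2), beta

rhos = np.linspace(-0.9, 0.9, 181)
lls = [concentrated_loglik(r)[0] for r in rhos]
rho_hat = rhos[int(np.argmax(lls))]
beta_hat = concentrated_loglik(rho_hat)[1]
print(rho_hat, beta_hat)
```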

4.
The autologistic model, first introduced by Besag, is a popular tool for analyzing binary data on spatial lattices. However, little previous work appears to have considered modeling binary data clustered in uncorrelated lattices. Owing to the spatial dependency of the responses, exact likelihood estimation of the parameters is not possible. To circumvent this difficulty, many studies have been designed to approximate the likelihood and the related partition function of the model, so traditional and Bayesian estimation methods based on the likelihood function are often time-consuming and require heavy computation and recursive techniques. Some investigators have introduced data augmentation and latent variable models to reduce the computational complications in parameter estimation. In this work, spatially correlated binary data distributed over uncorrelated lattices are modeled using autologistic regression, a Bayesian inference is developed with the help of data augmentation, and the proposed models are applied to caries experience of deciduous teeth.

5.
We suggest locally parametric methods for estimating curves, such as boundaries of density supports or fault lines in response surfaces, in a variety of spatial problems. The methods are based on spatial approximations to the local likelihood that the curve passes through a given point in the plane, as a function of that point. The local likelihood might be a regular likelihood computed locally, with kernel weights (e.g. in the case of support boundary estimation) or a local version of a likelihood ratio statistic (e.g. in fault line estimation). In either case, the local likelihood surface represents a function which is relatively large near the target curve, and relatively small elsewhere. Therefore, the curve may be estimated as a ridge line of the surface; we require only a numerical algorithm for tracking the projection of a ridge into the plane. This approach offers several potential advantages over alternative methods. First, the local (log-)likelihood surface can be graphed, and the degree of 'ridginess' assessed visually, to determine how the level of local smoothing should be varied in different spatial locations in order to emphasize the ridge and hence the curve adequately. Secondly, the local likelihood surface does not need to be computed in anything like its entirety; once we have a reasonable approximation to a point on the curve we may track it by numerically 'walking along' the ridge line. Thirdly, the method is appropriate without change for many different types of spatial explanatory variables—gridded, stochastic or otherwise. Three examples are explored in detail: fault lines in response surfaces and in intensity or density surfaces, and boundaries of supports of probability densities.

6.
A two-step estimation approach is proposed for the fixed-effect parameters, the random effects and their variance σ² of a Poisson mixed model. In the first step, a small-σ² approximate likelihood function of the data is constructed and used to estimate the fixed-effect parameters and σ². In the second step, the random effects are estimated by minimizing their posterior mean squared error. The methods of Waclawiw and Liang (1993), based on so-called Stein-type estimating functions, and of Breslow and Clayton (1993), based on penalized quasi-likelihood, are compared with the proposed likelihood method. The results of a simulation study on the performance of all three approaches are reported.

7.
Penalized likelihood estimators for truncated data
We investigate the performance of linearly penalized likelihood estimators for estimating distributional parameters in the presence of data truncation. Truncation distorts the likelihood surface to create instabilities and high variance in the estimation of these parameters, and the penalty terms help in many cases to decrease estimation error and increase robustness. Approximate methods are provided for choosing a priori good penalty estimators, which are shown to perform well in a series of simulation experiments. The robustness of the methods is explored heuristically using both simulated and real data drawn from an operational risk context.
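By way of illustration, here is a minimal sketch of a penalized truncated likelihood in an operational-risk style setting: lognormal losses recorded only above a threshold, with a simple quadratic penalty shrinking (mu, log sigma) toward prior guesses. The lognormal model, the threshold and the quadratic form of the penalty are assumptions of the example and need not match the penalties studied in the paper.

```python
# Penalized maximum likelihood for left-truncated lognormal data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
mu_true, sigma_true, u = 0.0, 1.0, 1.0      # u is the truncation threshold
x_all = rng.lognormal(mu_true, sigma_true, size=5000)
x = x_all[x_all > u]                         # only losses above u are recorded

def neg_pen_loglik(theta, pen_weight, mu0=0.0, logsig0=0.0):
    mu, logsig = theta
    sig = np.exp(logsig)
    # truncated density: f(x) / P(X > u)
    logf = norm.logpdf(np.log(x), mu, sig) - np.log(x)
    logS_u = norm.logsf((np.log(u) - mu) / sig)
    nll = -(logf.sum() - x.size * logS_u)
    penalty = pen_weight * ((mu - mu0) ** 2 + (logsig - logsig0) ** 2)
    return nll + penalty

mle = minimize(neg_pen_loglik, x0=[0.5, 0.5], args=(0.0,))    # unpenalized
pen = minimize(neg_pen_loglik, x0=[0.5, 0.5], args=(10.0,))   # penalized
print(mle.x, pen.x)                          # (mu, log sigma) estimates
```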

8.
A seasonal fractional ARIMA (ARFISMA) model with infinite-variance innovations is used in the analysis of seasonal long-memory time series with large fluctuations (heavy-tailed distributions). Two methods are proposed to estimate the parameters of the stable ARFISMA model: the empirical characteristic function (ECF) procedure developed by Knight and Yu [The empirical characteristic function in time series estimation. Econometric Theory. 2002;18:691–721], and a two-step method (TSM). The ECF method estimates all the parameters simultaneously, while the TSM uses in its first step the Markov chain Monte Carlo–Whittle approach introduced by Ndongo et al. [Estimation of long-memory parameters for seasonal fractional ARIMA with stable innovations. Stat Methodol. 2010;7:141–151], combined in the second step with the maximum likelihood estimation method developed by Alvarez and Olivares [Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique. 2005;1(4):23–54]. Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques.

9.
This paper synthesizes a global approach to both Bayesian and likelihood treatments of the estimation of the parameters of a hidden Markov model in the cases of normal and Poisson distributions. The first step of this global method is to construct a non-informative prior based on a reparameterization of the model; this prior is to be considered as a penalizing and bounding factor from a likelihood point of view. The second step takes advantage of the special structure of the posterior distribution to build up a simple Gibbs algorithm. The maximum likelihood estimator is then obtained by an iterative procedure replicating the original sample until the corresponding Bayes posterior expectation stabilizes on a local maximum of the original likelihood function.
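To make the idea of a "simple Gibbs algorithm" concrete, here is a minimal single-site Gibbs sampler for a two-state Poisson hidden Markov model, exploiting the conjugate structure of the posterior (Gamma updates for the rates, Dirichlet updates for the transition rows, and a discrete full conditional for each hidden state). The priors, the two-state restriction and the single-site state updates are choices made for this sketch; the paper's reparameterized prior and its sample-replication device for obtaining the MLE are not reproduced.

```python
# Single-site Gibbs sampler for a two-state Poisson HMM (illustrative sketch).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)

# simulate data from a 2-state Poisson HMM
T = 200
A_true = np.array([[0.95, 0.05], [0.10, 0.90]])
lam_true = np.array([2.0, 9.0])
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=A_true[s[t - 1]])
y = rng.poisson(lam_true[s])

# Gibbs sampler
a0, b0 = 1.0, 0.1                       # Gamma(shape, rate) prior for each rate
lam = np.array([1.0, 5.0])
A = np.full((2, 2), 0.5)
state = rng.integers(0, 2, size=T)

for it in range(500):
    # 1. update each hidden state from its discrete full conditional
    for t in range(T):
        logp = poisson.logpmf(y[t], lam)
        if t > 0:
            logp = logp + np.log(A[state[t - 1], :])
        if t < T - 1:
            logp = logp + np.log(A[:, state[t + 1]])
        p = np.exp(logp - logp.max())
        state[t] = rng.choice(2, p=p / p.sum())
    # 2. conjugate Gamma update of the Poisson rates
    for k in range(2):
        nk = np.sum(state == k)
        lam[k] = rng.gamma(a0 + y[state == k].sum(), 1.0 / (b0 + nk))
    # 3. conjugate Dirichlet update of the transition matrix rows
    for k in range(2):
        counts = np.array([np.sum((state[:-1] == k) & (state[1:] == j)) for j in range(2)])
        A[k] = rng.dirichlet(1.0 + counts)

print(np.sort(lam))                      # final draw of the rates, up to label switching
```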

10.
In this article, we present the performance of the maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiply censored data. Two maximum likelihood estimation methods are considered. One method is based on the observed-data likelihood function, with the maximum likelihood estimates obtained by the quasi-Newton algorithm. The other method is based on the complete-data likelihood function, with the maximum likelihood estimates derived by the expectation-maximization (EM) algorithm. The variance–covariance matrices are derived to construct confidence intervals for the parameters. The performance of the two algorithms is compared by a simulation study. The simulation results show that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of absolute relative bias, bias, root mean square error and coverage rate. Finally, a numerical example is given to illustrate the performance of the proposed methods.
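A minimal sketch of the first of the two methods: maximizing the observed-data likelihood of multiply right-censored Burr XII data with a quasi-Newton routine (scipy's L-BFGS-B here). The constant-stress acceleration structure is omitted, and the simulated data, censoring scheme and starting values are invented for the example.

```python
# Quasi-Newton MLE for Burr XII (shape parameters c, k) under right censoring.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
c_true, k_true, n = 2.0, 1.5, 400
u = rng.uniform(size=n)
t = ((1.0 - u) ** (-1.0 / k_true) - 1.0) ** (1.0 / c_true)   # Burr XII via inverse CDF
cens_time = rng.uniform(0.5, 2.0, size=n)                     # random censoring times
obs = np.minimum(t, cens_time)
delta = (t <= cens_time).astype(float)                        # 1 = failure observed

def neg_loglik(theta):
    c, k = np.exp(theta)                                      # optimize on the log scale
    logf = np.log(c) + np.log(k) + (c - 1) * np.log(obs) - (k + 1) * np.log1p(obs ** c)
    logS = -k * np.log1p(obs ** c)
    return -(delta * logf + (1 - delta) * logS).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="L-BFGS-B")
print(np.exp(fit.x))                                          # estimates of (c, k)
```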

11.
12.
This paper proposes two methods of estimation for the parameters in a Poisson-exponential model. The proposed methods combine the method of moments with a regression method based on the empirical moment generating function. One of the methods is an adaptation of the mixed-moments procedure of Koutrouvelis & Canavos (1999). The asymptotic distribution of the estimator obtained with this method is derived. Finite-sample comparisons are made with the maximum likelihood estimator and the method of moments. The paper concludes with an exploratory-type analysis of real data based on the empirical moment generating function.

13.
This paper deals with the estimation of R = P(Y < X) when Y and X are two independent but not identically distributed Burr-type X random variables. Maximum likelihood, Bayes and empirical Bayes techniques are used for this purpose. A Monte Carlo simulation is carried out to compare the three methods of estimation. Two characterizations of the Burr-type X distribution are also presented. The first characterization is based on recurrence relationships between two successive conditional moments of a certain function of the random variable, whereas the second is given by the conditional variance of that function.
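For the maximum likelihood part, assuming the common parameterization F(x; θ) = (1 − exp(−x²))^θ of the Burr-type X distribution, the shape-parameter MLEs are available in closed form and R = P(Y < X) reduces to α/(α + β) when X and Y have shape parameters α and β. The sketch below uses simulated samples; the Bayes and empirical Bayes estimators of the paper are not reproduced.

```python
# Closed-form MLE of R = P(Y < X) for two Burr-type X samples.
import numpy as np

rng = np.random.default_rng(4)

def rburrx(theta, size, rng):
    """Simulate Burr type X via inversion of F(x) = (1 - exp(-x^2))^theta."""
    u = rng.uniform(size=size)
    return np.sqrt(-np.log1p(-u ** (1.0 / theta)))

def mle_shape(x):
    """Closed-form MLE of the Burr type X shape parameter."""
    return -x.size / np.sum(np.log1p(-np.exp(-x ** 2)))

alpha_true, beta_true = 2.0, 1.0
x = rburrx(alpha_true, 500, rng)   # strength sample
y = rburrx(beta_true, 500, rng)    # stress sample

a_hat, b_hat = mle_shape(x), mle_shape(y)
R_hat = a_hat / (a_hat + b_hat)
print(a_hat, b_hat, R_hat)         # true R = alpha / (alpha + beta) = 2/3
```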

14.
This paper presents a procedure that applies the generalized maximum entropy (GME) estimation method in two steps to quantify the uncertainty of the parameters of the simple linear structural measurement error model. The first step estimates the unknowns from the horizontal line, and the estimates are then used in a second step to estimate the unknowns from the vertical line. The proposed procedure minimizes the number of unknown parameters in formulating the GME system within each step, and hence reduces the variability of the estimates. Analytical and illustrative Monte Carlo simulation comparisons with the maximum likelihood estimators and a one-step GME estimation procedure are presented. The simulation experiments demonstrate that the two-step estimation procedure produces parameter estimates that are more accurate and more efficient than the classical estimation methods. An application of the proposed method is illustrated using a data set gathered from the Centre for Integrated Government Services on Delma Island, UAE, to predict the association between perceived quality and customer satisfaction.

15.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on sample size in each iteration and a stopping time for the algorithm. An important new concept is acquiring absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike the penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.

16.
This paper develops Bayesian analysis in the context of progressively Type II censored data from the compound Rayleigh distribution. The maximum likelihood and Bayes estimates, along with the associated posterior risks, are derived for reliability performance under balanced loss functions, assuming continuous priors for the parameters of the distribution. A practical example is used to illustrate the estimation methods, and a simulation study is carried out to compare the performance of the estimates. The study indicates that Bayesian estimation should be preferred over maximum likelihood estimation, and that in Bayesian estimation the balanced general entropy loss function can be effectively employed for optimal decision-making.

17.
In this paper, we use a particular piecewise deterministic Markov process (PDMP) to model the evolution of a degradation mechanism that may arise in various structural components, namely fatigue crack growth. We first derive some probability results on the stochastic dynamics with the help of Markov renewal theory: a closed-form solution for the transition function of the PDMP is given. Then, we investigate some methods to estimate the parameters of the dynamical system, involving Bogolyubov's averaging principle and maximum likelihood estimation for the infinitesimal generator of the underlying jump Markov process. Numerical applications on a real crack data set are given.

18.
The paper deals with discrete-time regression models to analyze multistate-multiepisode models for event history data or failure time data collected in follow-up studies, retrospective studies, or longitudinal panels. The models are applicable if the events are not dated exactly but only a time interval is recorded. The models include individual-specific parameters to account for unobserved heterogeneity. The explanatory variables may be time-varying and random, with distributions depending on the observed history of the process. Different estimation procedures are considered: estimation of the structural as well as the individual-specific parameters by maximization of a joint likelihood function; estimation of the structural parameters by maximization of a conditional likelihood function, conditioning on a set of sufficient statistics for the individual-specific parameters; and estimation of the structural parameters by maximization of a marginal likelihood function, assuming that the individual-specific parameters follow a distribution. The advantages and limitations of the different approaches are discussed.
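As a sketch of the basic discrete-time building block (without the individual-specific heterogeneity parameters or the conditional and marginal likelihood variants discussed above), interval-recorded event times can be expanded into person-period records and the discrete hazard fitted by maximum likelihood as a logistic regression. The covariate, the constant hazard and the logit link are assumptions of this toy example.

```python
# Discrete-time hazard model fitted as a logistic regression on person-period data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, T = 500, 8
z = rng.normal(size=n)                       # one fixed covariate per subject
gamma0, gamma1 = -2.0, 0.8                   # true intercept and slope

rows, y = [], []
for i in range(n):
    h = 1.0 / (1.0 + np.exp(-(gamma0 + gamma1 * z[i])))   # discrete-time hazard
    for t in range(1, T + 1):
        event = rng.uniform() < h
        rows.append([1.0, t, z[i]])          # intercept, period, covariate
        y.append(1.0 if event else 0.0)
        if event:
            break                            # subject leaves the risk set

X = np.asarray(rows)
fit = sm.Logit(np.asarray(y), X).fit(disp=0)
print(fit.params)                            # roughly (gamma0, 0, gamma1): hazard is flat in t here
```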

19.
Detecting local spatial clusters in count data is an important task in spatial epidemiology. Two broad approaches—moving window and disease mapping methods—have been suggested in the literature for finding clusters. However, the existing methods employ somewhat arbitrarily chosen tuning parameters, and the local clustering results are sensitive to these choices. In this paper, we propose a penalized likelihood method to overcome the limitations of existing local spatial clustering approaches for count data. We start with a Poisson regression model to accommodate any type of covariates, and formulate the clustering problem as a penalized likelihood estimation problem that finds change points of the intercepts in two-dimensional space. The cost of developing a new algorithm is minimized by modifying an existing least absolute shrinkage and selection operator (lasso) algorithm. The computational details of the modifications are shown, and the proposed method is illustrated with Seoul tuberculosis data.

20.
Estimation of scale and index parameters of positive stable laws is considered. Maximum likelihood estimation is known to be efficient, but very difficult to compute, while methods based on the sample characteristic function are computationally easy, but have uncertain efficiency properties.
In this paper an estimation method is presented which is reasonably easy to compute, and which has good efficiency properties, at least when the index α ∈ (0, 0.5). The method is based on an expression for the characteristic function of the logarithm of a positive stable random variable, and is derived by relating the stable estimation problem to that of location/scale estimation in extreme-value-distribution families, for which efficient methods are known.
The proposed method has efficiency which tends to 1 as α → 0; on the other hand, efficiencies deteriorate for α > 0.5 and in fact appear to tend to 0 as α → 1.

