Similar literature
20 similar documents retrieved
1.
A simulation experiment compares the accuracy and precision of three alternative estimation techniques for the parameters of the STARMA model. Maximum likelihood estimation, in most respects the "best" estimation procedure, involves a large amount of computational effort, so two approximate techniques, exact least squares and conditional maximum likelihood, are often proposed for series of moderate length. The experiment compares the accuracy of the three procedures on simulated series of various lengths and discusses their appropriateness as a function of the length of the observed series.
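The trade-off the experiment examines can be illustrated on a much simpler model than STARMA. The sketch below is a hypothetical AR(1) example (not the paper's setup): conditional least squares drops the contribution of the first observation, while the exact Gaussian likelihood includes the stationary density of the initial value.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, sigma=1.0):
    # Stationary AR(1): x[0] drawn from the stationary distribution
    x = np.zeros(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def conditional_ls(x):
    # Conditional least squares: regress x_t on x_{t-1}, ignoring x_0's density
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

def exact_mle(x):
    # Exact Gaussian likelihood, with sigma^2 profiled out
    n = len(x)
    def neg_loglik(phi):
        resid = x[1:] - phi * x[:-1]
        s2 = (x[0]**2 * (1.0 - phi**2) + np.sum(resid**2)) / n
        return 0.5 * (n * np.log(s2) - np.log(1.0 - phi**2))
    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
    return res.x

x = simulate_ar1(0.6, 500)
print(conditional_ls(x), exact_mle(x))
```

For a series of this length the two estimates nearly coincide; the paper's point is how this gap, and the computational cost, evolve as the series shortens.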

2.
This paper considers the problem of estimating the linear parameters of a generalised linear model (GLM) when the explanatory variable is subject to measurement error. In this situation the induced model for dependence on the approximate explanatory variable is not usually of GLM form. However, when the distribution of the measurement error is known, or is estimated from replicated measurements, application of the GLIM iteratively reweighted least squares algorithm with transformed data and weighting is shown to produce maximum quasi-likelihood estimates in many cases. Details of this approach are given for two particular generalised linear models; simulation results illustrate the usefulness of the theory for these models.
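The GLIM algorithm referred to is iteratively reweighted least squares (IRLS). As a point of reference, here is a minimal IRLS sketch for an ordinary Poisson log-link GLM without measurement error; the paper's contribution is the transformed data and weights fed into this loop, which are model-specific and not reproduced here.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    # Iteratively reweighted least squares for a Poisson GLM with log link.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                       # working weights: (dmu/deta)^2 / Var(Y) = mu
        z = eta + (y - mu) / mu      # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # true coefficients (0.5, 0.8)
print(irls_poisson(X, y))
```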

3.
In this work, we present a computational method to approximate the locations of change-points in a time series of independent, normally distributed observations with a common mean and two possible variance values. Series of this type arise in the study of electrical signals associated with rhythmic activity patterns in the nerves and muscles of animals, where the change-points mark the moments at which the electrical activity passes from a phase of silence to one of activity, or vice versa. We test the null hypothesis that the series contains no change-point against the alternative that it contains at least one, using the corresponding likelihood ratio as the test statistic; a computational implementation of the quadratic penalization technique is employed to approximate the log-likelihood ratio associated with this pair of hypotheses. When the null hypothesis is rejected, the method provides estimates of the locations of the change-points. The method also applies post-processing to avoid generating spuriously short periods of silence or activity. We apply it to change-point detection in both experimental and synthetic data sets; in both cases the results are highly satisfactory.
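The likelihood ratio test for a single variance change can be illustrated with a direct grid search (a simplification of the paper's quadratic-penalization approach), assuming a known mean of zero:

```python
import numpy as np

def lr_statistic(x, k):
    # 2*log likelihood ratio for a variance change at position k (mean = 0)
    n = len(x)
    s_all = np.mean(x**2)
    s1 = np.mean(x[:k]**2)
    s2 = np.mean(x[k:]**2)
    return n * np.log(s_all) - k * np.log(s1) - (n - k) * np.log(s2)

def best_changepoint(x, margin=10):
    # Scan all candidate positions, keeping a margin at both ends
    ks = list(range(margin, len(x) - margin))
    stats = [lr_statistic(x, k) for k in ks]
    i = int(np.argmax(stats))
    return ks[i], stats[i]

# Synthetic series: silence (sd 1) for 200 points, activity (sd 3) for 150
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1.0, 200), rng.normal(0, 3.0, 150)])
k_hat, stat = best_changepoint(x)
print(k_hat, stat)
```

A large statistic rejects the no-change null, and the maximizing position estimates the change-point; repeated application on sub-segments extends this to multiple change-points.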

4.
5.
6.
Properties of the Weibull cumulative exposure model
This article investigates properties of the Weibull cumulative exposure model on multiple-step step-stress accelerated life test data. Although the model incorporates a probabilistic version of Miner's rule to express the effect of cumulative fatigue damage, our results show that this alone is not sufficient to describe the degradation of specimens: the shape parameter must also be larger than 1. For a random variable obeying the model, the mean and standard deviation are investigated over various sets of parameter values. In addition, a way of checking the validity of the model is illustrated through an example of maximum likelihood estimation on an actual data set concerning time to breakdown of cross-linked polyethylene-insulated cables.
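The cumulative exposure idea can be sketched as follows: the time spent at each stress level is rescaled by that level's Weibull scale parameter, and the rescaled exposures are summed before applying the Weibull form. The function and parameter values below are illustrative, not taken from the article.

```python
import numpy as np

def cum_exposure_cdf(t, breakpoints, scales, shape):
    # breakpoints: times at which the stress level changes
    # scales: Weibull scale parameter at each stress level (len(breakpoints)+1)
    # Cumulative exposure a(t) adds up elapsed-time/scale over the levels.
    edges = np.concatenate([[0.0], breakpoints, [np.inf]])
    a = 0.0
    for i, eta in enumerate(scales):
        lo, hi = edges[i], edges[i + 1]
        if t <= lo:
            break
        a += (min(t, hi) - lo) / eta
    return 1.0 - np.exp(-a**shape)

# Two stress levels, switching at t = 100; higher stress = smaller scale
print(cum_exposure_cdf(50, [100.0], [200.0, 50.0], 2.0))
print(cum_exposure_cdf(150, [100.0], [200.0, 50.0], 2.0))
```

After the switch to the harsher level, the exposure accumulates five times faster, so the failure probability rises sharply.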

7.
For a general univariate "errors-in-variables" model, the maximum likelihood estimate of the parameter vector (assuming normality of the errors), as described in the literature, can be expressed in an alternative form. In this form the estimate is computationally simpler, and deeper investigation of its properties is facilitated. In particular, we demonstrate that, under conditions a good deal less restrictive than those previously assumed, the estimate is weakly consistent.
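One classical special case of this model, with equal error variances in both coordinates, has a well-known computationally simple form: the maximum likelihood line is the orthogonal (total least squares) fit, obtainable from a singular value decomposition. A sketch with simulated data (the setup is illustrative, not the paper's general model):

```python
import numpy as np

def orthogonal_regression(x, y):
    # ML fit of y = a + b*x when both variables carry equal-variance normal
    # errors: the line minimizing total perpendicular distance.
    xc, yc = x - x.mean(), y - y.mean()
    # The right singular vector for the smallest singular value of the
    # centred data matrix is the normal direction of the best-fit line.
    _, _, vt = np.linalg.svd(np.column_stack([xc, yc]))
    nx, ny = vt[-1]
    b = -nx / ny
    a = y.mean() - b * x.mean()
    return a, b

rng = np.random.default_rng(3)
t = rng.uniform(0, 10, 300)                 # true latent values
x = t + rng.normal(0, 0.5, 300)             # observed with error
y = 1.0 + 2.0 * t + rng.normal(0, 0.5, 300) # true line: intercept 1, slope 2
print(orthogonal_regression(x, y))
```

Unlike ordinary least squares, which is attenuated toward zero when x is noisy, this estimate remains consistent under the equal-variance assumption.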

8.
In this paper, we study inference in a heteroscedastic measurement error model with known error variances. Instead of assuming normal distributions for the random components, we develop a model with a skew-t distribution for the true covariate and a centred Student's t distribution for the error terms. The proposed model accommodates skewness and heavy tails in the data, and the degrees of freedom of the two distributions may differ. Maximum likelihood estimates are computed via an EM-type algorithm. The behaviour of the estimators is assessed in a simulation study. Finally, the approach is illustrated with a real data set from a methods-comparison study in analytical chemistry.

9.
Pseudo maximum likelihood estimation (PML) for the Dirichlet-multinomial distribution is proposed and examined in this paper. The procedure is compared with the method of moments (MM) in terms of asymptotic relative efficiency (ARE) relative to the maximum likelihood estimate (ML). It is found that PML, requiring much less computational effort than ML and possessing considerably higher ARE than MM, constitutes a good compromise between ML and MM. PML also has very high ARE when an estimate of the scale parameter of the Dirichlet-multinomial distribution is all that is needed.

10.
11.
12.
One important goal of experimentation in quality improvement is to minimize the variability of a product or process around a target mean value. Factors that affect the variance, as well as factors that affect the mean, can be identified through the analysis of mean and dispersion. Box and Meyer (1986b) proposed a method of model identification and maximum likelihood estimation for mean and dispersion effects from unreplicated designs. In this article, we address two problems associated with these MLEs. First, the asymptotic variance of the MLEs of dispersion effects, which is used to judge the significance of factors, can be misleading; a possible explanation is provided, and simulation results indicate that the asymptotic variance underestimates the true sampling variance.

13.
The statistic needed to construct 100(1 − α)% contour semi-ellipses of the distribution surface corresponding to the singly truncated bivariate normal distribution is derived, and its percentage points are tabulated. An approximate goodness-of-fit test that uses the derived statistic is indicated, and an example is given.

14.
Hypertension is a highly prevalent cardiovascular disease and a considerable cost factor for many national health systems. Despite its prevalence, regional disease distributions are often unknown and must be estimated from survey data. However, health surveys frequently lack regional observations due to limited resources, so the resulting prevalence estimates suffer from unacceptably large sampling variances and are not reliable. Small area estimation solves this problem by linking auxiliary data from multiple regions in suitable regression models. Typically, either unit-level or area-level observations are considered for this purpose, but for hypertension both levels should be used: hypertension has characteristic comorbidities and is strongly related to lifestyle features, which are unit-level information, and it is also correlated with socioeconomic indicators that are usually measured at the area level. Combining the levels is challenging, however, as it requires estimating multi-level model parameters from small samples. We use a multi-level small area model with level-specific penalization to overcome this issue. Model parameter estimation is performed via stochastic coordinate gradient descent, and a jackknife estimator of the mean squared error is presented. The methodology is applied to combine health survey data and administrative records to estimate regional hypertension prevalence in Germany.

15.
In earlier work [Kirchner, An estimation procedure for the Hawkes process. Quant Financ. 2017;17(4):571–595], we introduced a nonparametric estimation method for the Hawkes point process. In this paper, we present a simulation study comparing this nonparametric method with maximum likelihood estimation (MLE). We find that the standard deviations of both estimators decrease as power laws in the sample size, and that they are proportional to each other. For example, for a specific Hawkes model, the standard deviation of the branching-coefficient estimate is roughly 20% larger than for MLE over all sample sizes considered; this factor becomes smaller as the true underlying branching coefficient becomes larger. In terms of runtime, our method clearly outperforms MLE. The bias present in our method can be well explained and controlled. As an incidental finding, MLE estimates also appear to be significantly biased when the underlying Hawkes model is near criticality, which calls for a more rigorous analysis of the Hawkes likelihood and its optimization.
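For the exponential-kernel Hawkes process, the log-likelihood admits a well-known recursive form, which makes MLE practical. The sketch below simulates a Hawkes process by Ogata's thinning algorithm and recovers the parameters by numerical MLE; the parameter values are illustrative, not those of the cited study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def intensity(t, history, mu, alpha, beta):
    # Conditional intensity mu + sum of alpha*exp(-beta*(t - t_i)) over past events
    h = np.asarray(history, dtype=float)
    return mu + (alpha * np.exp(-beta * (t - h))).sum()

def simulate_hawkes(mu, alpha, beta, T, rng):
    # Ogata's thinning: between events the intensity only decays, so the
    # intensity at the current time is a valid upper bound for proposals.
    times, t = [], 0.0
    while True:
        lam_bar = intensity(t, times, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(times)
        if rng.uniform() * lam_bar <= intensity(t, times, mu, alpha, beta):
            times.append(t)

def neg_loglik(params, times, T):
    mu, alpha, beta = params
    n = len(times)
    A = np.zeros(n)                 # recursion: A_i = e^{-beta dt}(1 + A_{i-1})
    for i in range(1, n):
        A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
    ll = np.log(mu + alpha * A).sum()
    ll -= mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - times))).sum()
    return -ll

mu, alpha, beta, T = 0.5, 0.8, 1.2, 200.0   # branching coefficient alpha/beta = 2/3
times = simulate_hawkes(mu, alpha, beta, T, rng)
res = minimize(neg_loglik, x0=[1.0, 1.0, 2.0], args=(times, T),
               bounds=[(1e-4, None)] * 3, method="L-BFGS-B")
print(len(times), res.x, alpha / beta)
```

The recursion keeps the likelihood evaluation linear in the number of events; it is this repeated numerical optimization that the nonparametric method avoids.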

16.
17.
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for a ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken by different laboratories are nine times more variable than measurements taken within laboratories, then 90% of the variance in the responses is due to variability amongst laboratories and 10% to variability within laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, simulation studies are conducted to evaluate the performance of such confidence intervals under non-normal distributional assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is favoured over the other two procedures in the unbalanced one-way random effects model.
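For the balanced normal-theory case, the pivotal quantity method gives a closed-form interval for the ratio of the between-group to within-group variance components, based on the F distribution of the mean-square ratio. A sketch (balanced design for simplicity, although the paper's focus is the unbalanced case):

```python
import numpy as np
from scipy.stats import f

def variance_ratio_ci(data, alpha=0.05):
    # data: 2-D array, rows = groups (balanced one-way random effects model)
    # Pivot: (MSA/MSE) / (1 + n*ratio) ~ F(a-1, a*(n-1))
    a, n = data.shape
    grand = data.mean()
    msa = n * ((data.mean(axis=1) - grand)**2).sum() / (a - 1)
    mse = ((data - data.mean(axis=1, keepdims=True))**2).sum() / (a * (n - 1))
    F = msa / mse
    lo = (F / f.ppf(1 - alpha / 2, a - 1, a * (n - 1)) - 1) / n
    hi = (F / f.ppf(alpha / 2, a - 1, a * (n - 1)) - 1) / n
    return max(lo, 0.0), max(hi, 0.0), F

rng = np.random.default_rng(4)
a, n = 30, 5
lab_effects = rng.normal(0, 2.0, size=(a, 1))          # sigma_a = 2
data = lab_effects + rng.normal(0, 1.0, size=(a, n))   # sigma_e = 1
print(variance_ratio_ci(data))   # true ratio sigma_a^2 / sigma_e^2 = 4
```

The simulation studies in the paper replace the normal draws above with skewed or heavy-tailed alternatives and check how the coverage of such intervals degrades.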

18.
In this paper, by considering an appropriate transformation of the Lindley distribution, we propose the unit-Lindley distribution and investigate some of its statistical properties. An important feature of this new distribution is that an analytical expression for the bias correction of the maximum likelihood estimator can be obtained. Moreover, it belongs to the exponential family. The distribution allows covariates to be incorporated directly in the mean, so their influence on the average of the response variable can be quantified. Finally, a practical application shows that our model fits the data much better than beta regression.

19.
In this article, we obtain point and interval estimates for the multicomponent stress-strength reliability of an s-out-of-j system using classical and Bayesian approaches, assuming that both the stress and strength variables follow a Chen distribution with a common shape parameter that may be known or unknown. The uniformly minimum variance unbiased estimator of the reliability is obtained analytically when the common parameter is known. The behaviour of the proposed reliability estimates is studied via their estimated risks through Monte Carlo simulations, and conclusions are drawn. Finally, a data set is analysed for illustrative purposes.
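A Monte Carlo sketch of the multicomponent reliability R = P(at least s of j independent strengths exceed a common random stress), using the Chen distribution F(x) = 1 − exp{λ(1 − e^(x^β))} with a common shape β. The parameter values are illustrative, not taken from the article.

```python
import numpy as np

def rchen(lam, beta, size, rng):
    # Inverse-transform sampling from the Chen distribution,
    # F(x) = 1 - exp(lam * (1 - exp(x**beta)))
    u = rng.uniform(size=size)
    return np.log(1.0 - np.log(1.0 - u) / lam) ** (1.0 / beta)

def reliability_s_out_of_j(s, j, lam_x, lam_y, beta, n_mc=200_000, seed=5):
    # Estimate P(at least s of j strengths X_1..X_j exceed the stress Y)
    rng = np.random.default_rng(seed)
    y = rchen(lam_y, beta, n_mc, rng)            # stress
    x = rchen(lam_x, beta, (n_mc, j), rng)       # j strength components
    return np.mean((x > y[:, None]).sum(axis=1) >= s)

print(reliability_s_out_of_j(2, 3, lam_x=0.8, lam_y=1.5, beta=1.0))
```

With a common shape, the transformation T = e^(X^β) − 1 is exponential with rate λ, so this particular configuration also has a closed form (about 0.682), which the Monte Carlo estimate reproduces.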

20.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete-data model corresponding to a reference value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method for latent variable models with independent observations, in contrast with previous work on the topic, which considered only conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
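The importance-sampling identity behind MCML can be sketched on a toy Gaussian latent variable model, where the exact MLE (the sample mean) is available for comparison; the model and reference value below are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Toy model: latent z_i ~ N(0, 1), observation y_i | z_i ~ N(z_i + theta, 1),
# so marginally y_i ~ N(theta, 2) and the exact MLE of theta is the sample mean.
theta_true = 1.0
y = rng.normal(theta_true, np.sqrt(2.0), size=50)

psi = 0.0        # reference parameter value for the importance sampler
m = 2000         # number of simulations per observation
# Simulate z from p(z | y, psi): here the posterior is N((y - psi)/2, 1/2).
z = rng.normal((y - psi) / 2.0, np.sqrt(0.5), size=(m, len(y)))

def mcml_loglik_ratio(theta):
    # Approximates l(theta) - l(psi) by per-observation Monte Carlo averages
    # of the complete-data importance ratios p(y, z; theta) / p(y, z; psi);
    # the latent density p(z) cancels, leaving a ratio of conditionals.
    log_ratio = norm.logpdf(y - z, theta, 1.0) - norm.logpdf(y - z, psi, 1.0)
    return np.log(np.exp(log_ratio).mean(axis=0)).sum()

grid = np.linspace(-1.0, 3.0, 81)
theta_hat = grid[np.argmax([mcml_loglik_ratio(t) for t in grid])]
print(theta_hat, y.mean())   # MCML estimate vs exact MLE
```

The approximation is accurate near the reference value and degrades as theta moves away from psi, which is exactly the phenomenon behind the paper's result on how fast the number of simulations must grow.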


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)