Found 20 similar documents; search took 15 ms
1.
Carmen Fernández Peter J. Green 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2002,64(4):805-826
Summary. The paper develops mixture models for spatially indexed data. We confine attention to the case of finite, typically irregular, patterns of points or regions with prescribed spatial relationships, and to problems where it is only the weights in the mixture that vary from one location to another. Our specific focus is on Poisson-distributed data, and applications in disease mapping. We work in a Bayesian framework, with the Poisson parameters drawn from gamma priors, and an unknown number of components. We propose two alternative models for spatially dependent weights, based on transformations of autoregressive Gaussian processes: in one (the logistic normal model), the mixture component labels are exchangeable; in the other (the grouped continuous model), they are ordered. Reversible jump Markov chain Monte Carlo algorithms for posterior inference are developed. Finally, the performances of both of these formulations are examined on synthetic data and real data on mortality from a rare disease.
2.
Maria Iannario 《Communications in Statistics - Theory and Methods》2014,43(4):771-786
In this article we introduce a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion, possibly present in ordinal data surveys. The choice of the components of the new model is motivated by a study on the data generating process. Inferential issues concerning the maximum likelihood estimates and the validation steps are presented; then, some empirical analyses are given to support the usefulness of the approach. Discussion on further extensions of the model ends the article.
3.
Identification of long memory in GARCH models
Abstract: This work extends the analysis of Baillie, Bollerslev and Mikkelsen (1996) and Bollerslev and Mikkelsen (1996) on the estimation and identification problems of the Fractionally Integrated Generalized Autoregressive Conditional Heteroskedastic (FIGARCH) model. We assess the power of different information criteria and tests in identifying the presence of long memory in the conditional variances. The analysis is performed with a Monte Carlo simulation study. In detail, we focus on the Akaike, Hannan-Quinn, Shibata and Schwarz information criteria and on the Jarque-Bera test for normality, the Box-Pierce test for residual correlation and the Engle test for ARCH effects. This study verifies that the information criteria clearly detect the presence of long memory, while the tests do not reveal any difference between the fitted long and short memory models. An empirical application is provided; it analyses, on a high frequency dataset, the returns of the FIB30, the future on the MIB30, the Italian stock market index of highly capitalized firms. Massimiliano Caporin: mcaporin@unive.it. This paper was presented at the SIS 2002 Conference (Italian Statistical Society annual meeting) held in Milan, University Bicocca, 5-7 June 2002. A short version of this work can be found in the proceedings of the conference.
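As a minimal sketch of the model-selection step this abstract describes, the four information criteria can be written as functions of the maximized log-likelihood ll, the number of parameters k, and the sample size n. The exact penalty forms vary across software; the parameterizations below are common textbook ones, and the fitted values in the example are hypothetical, not from the paper.

```python
import math

# Information criteria in "-2 log-likelihood + penalty" form.
# The candidate model (e.g. GARCH vs FIGARCH) with the smallest
# criterion value is preferred.

def aic(ll, k, n):
    """Akaike information criterion."""
    return -2 * ll + 2 * k

def bic(ll, k, n):
    """Schwarz (Bayesian) information criterion."""
    return -2 * ll + k * math.log(n)

def hq(ll, k, n):
    """Hannan-Quinn information criterion."""
    return -2 * ll + 2 * k * math.log(math.log(n))

def shibata(ll, k, n):
    """Shibata criterion (one common parameterization)."""
    return -2 * ll + n * math.log((n + 2 * k) / n)

# Hypothetical fits: a short-memory GARCH(1,1) with 4 parameters vs a
# FIGARCH(1,d,1) with 5 parameters, on n = 1000 observations.
for name, f in [("AIC", aic), ("BIC", bic), ("HQ", hq), ("Shibata", shibata)]:
    print(name, f(-1500.0, 4, 1000), f(-1480.0, 5, 1000))
```

Note that the BIC penalizes extra parameters most heavily, so it is the criterion least prone to spuriously selecting the richer long-memory model.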
4.
In this paper we have considered the problem of finding admissible estimates for a fairly general class of parametric functions in the so-called "non-regular" type of densities. The admissibility of generalized Bayes and Pitman estimates of functions of parameters has been established under the entropy loss function.
5.
In the context of ACD models for ultra-high frequency data, different specifications are available to estimate the conditional mean of intertrade durations, while quantile estimation has been completely neglected in the literature, even though for trading purposes it can be more informative. The main problem arising with quantile estimation is the correct specification of the durations' probability law: the usual assumption of Exponentially distributed residuals is very robust for the estimation of the parameters of the conditional mean, but dramatically fails the distributional fit. In this paper a semiparametric approach is formalized and compared with the parametric one derived from the Exponential assumption. Empirical evidence for a stock of the Italian financial market strongly supports the former approach. Paola Zuccolotto: The author wishes to thank Prof. A. Mazzali, Dott. G. De Luca, Dott. M. Sandri for valuable comments.
6.
Consider the problem of obtaining a confidence interval for some function g(θ) of an unknown parameter θ, for which a (1−α)-confidence interval is given. If g(θ) is one-to-one the solution is immediate. However, if g is not one-to-one the problem is more complex and depends on the structure of g. In this note the situation where g is a nonmonotone convex function is considered. Based on some inequality, a confidence interval for g(θ) with confidence level at least 1−α is obtained from the given (1−α) confidence interval on θ. Such a result is then applied to the N(μ, σ²) distribution with σ known. It is shown that the coverage probability of the resulting confidence interval, while being greater than 1−α, has in addition an upper bound which does not exceed Φ(3z_{1−α/2}) − α/2.
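To illustrate the basic idea (not the paper's exact inequality-based construction): for a nonmonotone convex g such as g(θ) = θ², the image of a (1−α) interval [lo, hi] for θ is a conservative confidence interval for g(θ), because a convex function attains its maximum on an interval at an endpoint and its minimum either at an endpoint or at its interior minimizer.

```python
# Illustrative sketch: map a confidence interval for theta to a
# conservative confidence interval for g(theta) = theta**2.

def ci_for_square(lo, hi):
    """Given a (1 - alpha) CI [lo, hi] for theta, return a CI for theta**2
    with coverage at least 1 - alpha.

    The maximum of t**2 on [lo, hi] sits at an endpoint; the minimum is 0
    whenever the interval straddles the minimizer t = 0.
    """
    upper = max(lo ** 2, hi ** 2)
    lower = 0.0 if lo <= 0.0 <= hi else min(lo ** 2, hi ** 2)
    return lower, upper

# If theta lies in [-1.0, 2.5] with confidence 1 - alpha, then theta**2
# lies in [0.0, 6.25] with confidence at least 1 - alpha.
print(ci_for_square(-1.0, 2.5))
```

The resulting interval over-covers, which is exactly why the abstract also reports an upper bound on the coverage probability.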
7.
K. H. Loesgen 《Statistical Papers》1990,31(1):147-154
Pliskin (1987) and Trenkler (1988) compared ridge-type estimators with good prior means. From a Bayesian viewpoint, these estimators are special cases of Bayes estimators, and the mean square error matrix comparisons can be made in the more general case.
8.
9.
Eshetu Wencheko 《Statistical Papers》2000,41(3):327-343
In the present paper estimators of the signal-to-noise ratio are given. A simulation study is conducted in order to see how the proposed estimators perform relative to the naive estimator by way of scalar risk comparison. The results favour our suggested estimators.
10.
The conditional likelihood is widely used in logistic regression models with stratified binary data. In particular, it leads to accurate inference for the parameters of interest, which are common to all strata, eliminating stratum-specific nuisance parameters. The modified profile likelihood is an accurate approximation to the conditional likelihood, but has the advantage of being available for general parametric models. Here, we propose the modified profile likelihood as an ideal extension of the conditional likelihood in generalized linear models for binary data, with generic link function. An important feature is that for the implementation we only need standard outputs of routines for generalized linear models. The accuracy of the method is supported by theoretical properties and is confirmed by simulation results. This research was supported by MIUR COFIN 2001-2003.
11.
In this paper we consider the problem of maximum likelihood (ML) estimation in the classical AR(1) model with i.i.d. symmetric stable innovations with known characteristic exponent and unknown scale parameter. We present an approach that allows us to investigate the properties of ML estimators without making use of numerical procedures. Finally, we introduce a generalization to the multivariate case.
12.
In this paper, we derive the prediction distribution of future response(s) from the normal distribution assuming a generalized inverse Gaussian (GIG) prior density for the variance. The GIG includes as special cases the inverse Gaussian, the inverted chi-squared and gamma distributions. The results lead to Bessel-type prediction distributions, which are in contrast with the Student-t distributions usually obtained using the inverted chi-squared prior density for the variance. Further, the general structure of the GIG provides us with new flexible prediction distributions which include as special cases most of the earlier results obtained under normal-inverted chi-squared or vague priors.
13.
This paper deals with the construction of optimum partitions for a clustering criterion which is based on a convex function of the class centroids, as a generalization of the classical SSQ clustering criterion for n data points. We formulate a dual optimality problem involving two sets of variables and derive a maximum-support-plane (MSP) algorithm for constructing a (sub-)optimum partition as a generalized k-means algorithm. We present various modifications of the basic criterion and describe the corresponding MSP algorithm. It is shown that the method can also be used for solving optimality problems in classical statistics (maximizing Csiszár's φ-divergence) and for the simultaneous classification of the rows and columns of a contingency table.
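The classical special case being generalized here is SSQ k-means (Lloyd's algorithm), which alternates a nearest-centroid assignment step with a centroid-update step. A one-dimensional toy sketch (not the paper's MSP algorithm, which replaces the SSQ objective by a general convex function of the centroids):

```python
# Classical SSQ k-means (Lloyd's algorithm) on 1-D data.

def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
            clusters[j].append(x)
        # Update step: each centroid becomes its cluster's mean
        # (empty clusters keep their previous centroid).
        centers = [sum(c) / len(c) if c else m
                   for c, m in zip(clusters, centers)]
    return centers

# Two well-separated groups around 1 and 9.
print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0]))
```

Each iteration can only decrease the SSQ criterion, which is why the procedure terminates at a (sub-)optimum partition rather than a guaranteed global optimum.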
14.
The maximum likelihood estimation of the critical points of the failure rate and the mean residual life function is presented in the case of the mixture inverse Gaussian model. Several important data sets are analyzed from this point of view. For each of the data sets, bootstrapping is used to construct confidence intervals for the critical points.
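A generic percentile-bootstrap confidence interval, as a sketch of the resampling scheme this abstract relies on; the statistic shown is the sample mean for simplicity, whereas the paper bootstraps estimated critical points of the failure rate and mean residual life.

```python
import random

def bootstrap_ci(data, statistic, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take the empirical alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        reps.append(statistic(resample))
    reps.sort()
    alpha = 1.0 - level
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative data; the statistic here is the sample mean.
data = [1.2, 0.8, 1.5, 2.1, 0.9, 1.7, 1.1, 1.4]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_ci(data, mean))
```

Swapping in an estimator of a failure-rate critical point for `mean` yields the kind of interval the abstract describes.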
15.
Starting from the theory of the Nonparametric Combination of Dependent Permutation Tests (Pesarin, 1992, 2001), Marozzi (2002a, b) proposed two bi-aspect nonparametric tests for the two-sample and the multi-sample location problems. These tests are shown by simulation to be remarkably more powerful than the traditional parametric and permutation competitors (which can be seen as uni-aspect tests) under heavy-tailed and skewed distributions. After a brief presentation of the bi-aspect approach to location testing problems, three actual applications are discussed. The first one is a problem of business statistics and deals with the analysis of times for service calls. The second one is in medical statistics and deals with the analysis of the effect of cigarette smoking on maternal airway function during pregnancy. The third one is in industrial statistics and deals with the analysis of the setting of machines that produce steel ball bearings. The bi-aspect testing allows us to draw deeper and more informative inferences than those allowed by traditional competitors. Marco Marozzi: Part of the research was done when the author was in Dipartimento di Scienze Statistiche, Università di Bologna, Italy.
16.
Characterizations of an optimal vector estimator and an optimal matrix estimator are obtained. In each case appropriate convex loss functions are considered. The results are illustrated through the problems of simultaneous unbiased estimation, simultaneous equivariant estimation and simultaneous unbiased prediction. Further, an optimality criterion is proposed for matrix unbiased estimation, and it is shown that the matrix unbiased estimation of a matrix parametric function and the minimum variance unbiased estimation of its components are equivalent.
17.
18.
Understanding patterns in the frequency of extreme natural events, such as earthquakes, is important as it helps in the prediction of their future occurrence and hence provides better civil protection. Distributions describing these events are known to be heavy tailed and positively skewed, making standard distributions unsuitable for modelling the frequency of such events. The Birnbaum–Saunders distribution and its extreme value version have been widely studied and applied due to their attractive properties. We derive L-moment equations for these distributions and propose novel methods for parameter estimation, goodness-of-fit assessment and model selection. A simulation study is conducted to evaluate the performance of the L-moment estimators, which is compared to that of the maximum likelihood estimators, demonstrating the superiority of the proposed methods. To illustrate these methods in a practical application, a data analysis of real-world earthquake magnitudes, obtained from the global centroid moment tensor catalogue during 1962–2015, is carried out. This application identifies the extreme value Birnbaum–Saunders distribution as a better model than classic extreme value distributions for describing seismic events.
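The building blocks of any L-moment method are the sample L-moments themselves. A minimal sketch of the first two (Hosking's unbiased estimators from probability-weighted moments); the paper's L-moment equations for the Birnbaum–Saunders distribution are then solved by matching these sample quantities to their theoretical counterparts:

```python
# First two sample L-moments via probability-weighted moments b0, b1.

def l_moments_12(sample):
    x = sorted(sample)          # order statistics x[0] <= ... <= x[n-1]
    n = len(x)
    b0 = sum(x) / n
    # b1 weights the i-th order statistic (0-based) by i / (n - 1).
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n
    l1 = b0                     # L-location (the sample mean)
    l2 = 2 * b1 - b0            # L-scale (half the Gini mean difference)
    return l1, l2

print(l_moments_12([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]))
```

Because L-moments are linear in the order statistics, these estimators stay well behaved under the heavy tails noted in the abstract, where ordinary moment estimators become unstable.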
19.
Thomas Nittner 《Statistical Methods and Applications》2003,12(2):195-210
The additive model is considered when some observations on x are missing at random but corresponding observations on y are available. Especially for this model, missing at random is an interesting case because the complete case analysis is expected to be no longer suitable. A simulation experiment is reported and the different methods are compared based on their superiority with respect to the sample mean squared error. Some focus is also given to the sample variance and the estimated bias. In detail, the complete case analysis, a kind of stochastic mean imputation, a single imputation and the nearest neighbor imputation are discussed.
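Two of the compared methods can be sketched in a few lines: mean imputation of the missing x values, and nearest-neighbor imputation where the donor is chosen by closeness in the always-observed y. The data and distance choice below are illustrative, not the paper's simulation design.

```python
# Missing x values are represented by None; y is fully observed.

def impute_mean(xs):
    """Replace each missing x by the mean of the observed x values."""
    observed = [x for x in xs if x is not None]
    m = sum(observed) / len(observed)
    return [m if x is None else x for x in xs]

def impute_nearest_neighbor(xs, ys):
    """Replace each missing x by the x of the donor whose y is closest."""
    donors = [(x, y) for x, y in zip(xs, ys) if x is not None]
    out = []
    for x, y in zip(xs, ys):
        if x is None:
            x = min(donors, key=lambda d: abs(d[1] - y))[0]
        out.append(x)
    return out

xs = [1.0, None, 3.0, 4.0]
ys = [2.0, 6.1, 6.0, 8.0]
print(impute_mean(xs))
print(impute_nearest_neighbor(xs, ys))
```

Nearest-neighbor imputation exploits the x-y relationship that mean imputation ignores, which is one reason the complete case analysis and the simpler fill-ins can lose out in mean squared error.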