Similar Articles (20 results found)
1.
In a Bayesian analysis of finite mixture models, parameter estimation and clustering are sometimes less straightforward than might be expected. In particular, the common practice of estimating parameters by their posterior mean, and summarizing joint posterior distributions by marginal distributions, often leads to nonsensical answers. This is due to the so-called 'label switching' problem, which is caused by symmetry in the likelihood of the model parameters. A frequent response to this problem is to remove the symmetry by using artificial identifiability constraints. We demonstrate that this fails in general to solve the problem, and we describe an alternative class of approaches, relabelling algorithms, which arise from attempting to minimize the posterior expected loss under a class of loss functions. We describe in detail one particularly simple and general relabelling algorithm and illustrate its success in dealing with the label switching problem in two examples.
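To make the loss-based idea concrete, here is a minimal Python sketch of one simple relabelling scheme: each MCMC draw of the component means is permuted to best match a fixed reference draw under squared-error loss. The function name and data layout are hypothetical, and this is a generic pivot-style sketch rather than the specific algorithm of the paper.

```python
import numpy as np
from itertools import permutations

def relabel_draws(mu_draws):
    """Relabel MCMC draws for a K-component mixture (small K assumed).

    mu_draws : array of shape (n_iter, K) holding sampled component means.
    Each row is permuted to minimise squared distance to the first draw,
    one simple member of the loss-based relabelling family.
    """
    n_iter, K = mu_draws.shape
    relabelled = np.empty_like(mu_draws)
    reference = mu_draws[0]
    for t in range(n_iter):
        best = min(permutations(range(K)),
                   key=lambda p: np.sum((mu_draws[t, list(p)] - reference) ** 2))
        relabelled[t] = mu_draws[t, list(best)]
    return relabelled
```

Once the draws are relabelled in this way, posterior means and marginal summaries of component-specific parameters become meaningful again.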

2.
A range of procedures in both robustness and diagnostics require optimisation of a target functional over all subsamples of a given size. Whereas such combinatorial problems are extremely difficult to solve exactly, something less than the global optimum can be 'good enough' for many practical purposes, as shown by example. Moreover, a relaxation strategy embeds these discrete, high-dimensional problems in continuous, low-dimensional ones. Overall, nonlinear optimisation methods can be exploited to provide a single, reasonably fast algorithm to handle a wide variety of problems of this kind, thereby providing a certain unity. Four running examples illustrate the approach: on the robustness side, algorithmic approximations to minimum covariance determinant (MCD) and least trimmed squares (LTS) estimation; on the diagnostic side, detection of multiple multivariate outliers and global diagnostic use of the likelihood displacement function. This last is developed here as a global complement to Cook's (J. R. Stat. Soc. 48:133–169, 1986) local analysis. Appropriate convergence of each branch of the algorithm is guaranteed for any target functional whose relaxed form is 'gravitational', in a natural generalisation of concavity introduced here. Moreover, its descent strategy can downweight contaminating cases in the starting position to zero. A simulation study shows that, although not optimised for the LTS problem, our general algorithm holds its own against algorithms that are so optimised. An adapted algorithm relaxes the gravitational condition itself.

3.
Summary: The paper first provides a short review of the most common microeconometric models, including logit, probit, discrete choice, duration models, models for count data and Tobit-type models. In the second part we consider the situation in which the micro data have undergone some anonymization procedure, which has become an important issue since otherwise confidentiality could not be guaranteed. We briefly describe the most important approaches to data protection, which can also be seen as deliberately introducing measurement error. We also consider the possibility of correcting the estimation procedure by taking the anonymization procedure into account. We illustrate this for the case of binary data which are anonymized by 'post-randomization' and which are used in a probit model. We show the effect of 'naive' estimation, i.e. estimation that disregards the anonymization procedure. We also show that a 'corrected' estimate is available which is satisfactory in statistical terms. This remains true if parameters of the anonymization procedure have to be estimated as well. Research in this paper is related to the project “Faktische Anonymisierung wirtschaftsstatistischer Einzeldaten”, financed by the German Ministry of Research and Technology.

4.
Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms can be efficiently applied in Bayesian inference for hidden Markov models (HMMs) when the number of latent regimes is unknown. As for finite mixture models, when priors are invariant to the relabelling of the regimes, HMMs are unidentifiable in data fitting, because multiple ways of labelling the regimes can alternate during the MCMC iterations; this is the so-called label switching problem. HMMs with an unknown number of regimes are considered here, and the goal of this paper is the comparison, both applied and theoretical, of five methods for tackling label switching within an RJMCMC algorithm: post-processing, partial reordering, permutation sampling, sampling from a Markov prior, and rejection sampling. The five strategies we compare have been proposed mostly in the literature on finite mixture models, and only two of them, rejection sampling and partial reordering, have been presented in RJMCMC algorithms for HMMs. We consider RJMCMC algorithms in which the parameters are updated by Gibbs sampling and the dimension of the model changes in split-and-merge and birth-and-death moves. Finally, an example illustrates and compares the five methodologies.

5.
Markov chain Monte Carlo (MCMC) algorithms have been shown to be useful for estimation of complex item response theory (IRT) models. Although an MCMC algorithm can be very useful, it also requires care in use and in the interpretation of results. In particular, MCMC algorithms generally make extensive use of priors on model parameters. In this paper, MCMC estimation is illustrated using a simple mixture IRT model, a mixture Rasch model (MRM), to demonstrate how the algorithm operates and how results may be affected by some commonly used priors. Priors on the mixture probabilities, label switching, model selection, metric anchoring, and implementation of the MCMC algorithm using WinBUGS are described, and their effects on parameter recovery in practical testing situations are illustrated. In addition, an example is presented in which an MRM is fitted to a set of educational test data using the MCMC algorithm, and a comparison with results from three existing maximum likelihood estimation methods is illustrated.

6.
The present study deals with methods of estimating the parameters of a k-component load-sharing parallel system model in which each component's failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share parameters, with their standard errors, are obtained. (1 − γ)100% joint, Bonferroni simultaneous and two bootstrap confidence intervals for the parameters are constructed. Further, recognizing the fact that life testing experiments are time consuming, it seems realistic to consider the load-share parameters to be random variables. Therefore, Bayes estimates of the parameters, along with their standard errors, are obtained by assuming Jeffreys' invariant and gamma priors for the unknown parameters. Since the Bayes estimators cannot be found in closed form, Tierney and Kadane's approximation method is used to compute the Bayes estimates and standard errors of the parameters. A Markov chain Monte Carlo technique, the Gibbs sampler, is also used to obtain Bayes estimates and highest posterior density credible intervals of the load-share parameters. The Metropolis–Hastings algorithm is used to generate samples from the posterior distributions of the unknown parameters.
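As an illustration of the last step, here is a minimal Python sketch of a random-walk Metropolis–Hastings sampler for a single geometric failure-probability parameter under a flat prior on (0, 1); the data layout and tuning constants are illustrative assumptions, not the paper's actual load-sharing model.

```python
import numpy as np

def geometric_loglik(p, failures):
    # failures: NumPy array of failure times t = 1, 2, ...
    # with P(T = t) = (1 - p)**(t - 1) * p
    return np.sum((failures - 1) * np.log1p(-p) + np.log(p))

def metropolis_hastings(failures, n_iter=10_000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    p, draws = 0.5, []
    for _ in range(n_iter):
        prop = p + step * rng.standard_normal()
        # Flat prior on (0, 1): proposals outside have zero posterior density.
        if 0.0 < prop < 1.0:
            log_ratio = geometric_loglik(prop, failures) - geometric_loglik(p, failures)
            if np.log(rng.uniform()) < log_ratio:
                p = prop
        draws.append(p)
    return np.array(draws)
```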

7.
Label switching is a well-known and fundamental problem in Bayesian estimation of finite mixture models. It arises when exploring complex posterior distributions by Markov chain Monte Carlo (MCMC) algorithms, because the likelihood of the model is invariant to the relabelling of mixture components. If the MCMC sampler randomly switches labels, it is unsuitable for exploring the posterior distributions of component-related parameters. In this paper, a new procedure based on post-MCMC relabelling of the chains is proposed. The main idea of the method is to apply a clustering technique to the similarity matrix, obtained from the MCMC sample, whose elements are the probabilities that any two units in the observed sample are drawn from the same component. Although it cannot be generalized to every situation, it may be handy in many applications because of its simplicity and very low computational burden.
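A minimal Python sketch of the two ingredients described above: the co-clustering similarity matrix computed from sampled allocations, followed by an off-the-shelf hierarchical clustering of its complement. The helper names and the choice of average linkage are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coclustering_matrix(alloc_draws):
    """Probability that units i and j share a component across the draws.

    alloc_draws : integer array of shape (n_iter, n_units) of allocations.
    """
    n_iter, n = alloc_draws.shape
    sim = np.zeros((n, n))
    for z in alloc_draws:
        sim += (z[:, None] == z[None, :])
    return sim / n_iter

def cluster_units(sim, k):
    # Average-linkage clustering on the dissimilarity 1 - sim, one concrete
    # choice for the clustering step; the diagonal of 1 - sim is zero.
    dist = squareform(1.0 - sim, checks=False)
    return fcluster(linkage(dist, method="average"), k, criterion="maxclust")
```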

8.
The problems of estimating the reliability function and P = Pr(X > Y) are considered for generalized life distributions. Uniformly minimum variance unbiased estimators (UMVUEs) of the powers of the parameter involved in the probabilistic model and of the probability density function (pdf) at a specified point are derived. The UMVUE of the pdf is utilized to obtain the UMVUEs of the reliability function and 'P'. Our method of obtaining these estimators is simpler than the traditional approaches. A theoretical method for studying the behaviour of the hazard rate is provided.

9.
To adjust contingency tables to prescribed marginal frequencies, Deming and Stephan (1940) minimize a chi-square expression. Asymptotically equivalently, Ireland and Kullback (1968) minimize a Kullback–Leibler divergence, though the probabilistic arguments for both methods remain vague. Here we deduce a probabilistic model based on observed contingency tables. It shows that the two methods above and the maximum likelihood approach of Smith (1947) asymptotically yield the 'most probable' adjustment under prescribed marginal frequencies. The fundamental hypothesis of statistical mechanics relates observations to 'most probable' realizations; 'most probable' is used here in the sense of large deviations. The proposed adjustment has a significant product form and will be generalized to contingency tables with infinitely many cells.
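The Deming–Stephan adjustment is classically computed by iterative proportional fitting: rows and columns are rescaled in turn until both sets of prescribed margins are matched. A minimal Python sketch, assuming a strictly positive starting table so every ratio is well defined:

```python
import numpy as np

def ipf(table, row_margins, col_margins, tol=1e-10, max_iter=1000):
    """Adjust a (strictly positive) contingency table to prescribed margins."""
    x = table.astype(float).copy()
    for _ in range(max_iter):
        x *= (row_margins / x.sum(axis=1))[:, None]   # match row margins
        x *= (col_margins / x.sum(axis=0))[None, :]   # match column margins
        if np.allclose(x.sum(axis=1), row_margins, atol=tol):
            break
    return x
```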

10.
Copula, marginal distributions and model selection: a Bayesian note
Copula functions and marginal distributions are combined to produce multivariate distributions. We show the advantages of estimating all parameters of these models using the Bayesian approach, which can be done with standard Markov chain Monte Carlo algorithms. Deviance-based model selection criteria are also discussed when applied to copula models, since they are invariant under monotone increasing transformations of the marginals. We focus on the deviance information criterion. In our chosen model selection criteria, joint estimation takes into account the full dependence structure of the parameters' posterior distributions. Two Monte Carlo studies are conducted to show that model identification improves when the model parameters are estimated jointly. We study the Bayesian estimation of all unknown quantities at once, considering bivariate copula functions and three known marginal distributions.
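For reference, the deviance information criterion is computed from MCMC output as DIC = D̄ + pD with pD = D̄ − D(θ̄). A generic Python sketch with hypothetical argument names:

```python
import numpy as np

def dic(deviance_draws, deviance_at_posterior_mean):
    """DIC from MCMC output.

    deviance_draws : D(theta_t) = -2 log L(theta_t) over the iterations.
    deviance_at_posterior_mean : D(theta_bar) at the posterior mean.
    """
    d_bar = np.mean(deviance_draws)
    p_d = d_bar - deviance_at_posterior_mean  # effective number of parameters
    return d_bar + p_d
```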

11.
We describe a novel spatial-temporal algorithm for generating packing structures of disks and spheres, which not only incorporates all the attractive features of existing algorithms but is also more flexible in defining spatial interactions and other control parameters. The advantage of this approach lies in the ability of marks to exploit the space available to them to best advantage by changing their size in response to the interaction pressure of their neighbours. Allowing particles to move in response to such pressure results in high-intensity packing. Indeed, since particles may temporarily overlap, even under hard-packing scenarios, they possess a greater potential for rearranging themselves, and can thereby create even higher packing intensities than exist under other strategies. Non-overlapping pattern structures are achieved simply by allowing the process to 'burn out' at the end of its development period. A variety of different growth-interaction regimes are explored, both symmetric and asymmetric, and the convergence issues that they raise are examined. We conjecture not only that this algorithm may easily be generalised to cover a large variety of situations across a wide range of disciplines, but that appropriately targeted generalisations may well include established packing algorithms as special cases.
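A toy Python sketch of a growth-interaction step in the spirit described above: every disk grows at a constant rate and shrinks in proportion to its total overlap with neighbours. All names and constants are hypothetical, and a real implementation would also move the centres and let the process 'burn out' to eliminate overlaps.

```python
import numpy as np

def grow_and_interact(centres, n_steps=500, growth=0.01, push=0.5):
    """Iterate radius updates for disks at fixed centres (shape (n, 2))."""
    n = len(centres)
    radii = np.full(n, 0.01)
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    for _ in range(n_steps):
        overlap = np.maximum(radii[:, None] + radii[None, :] - d, 0.0)
        np.fill_diagonal(overlap, 0.0)
        pressure = overlap.sum(axis=1)   # interaction pressure on each disk
        radii = np.maximum(radii + growth - push * pressure, 0.0)
    return radii
```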

12.
This paper proposes a new probabilistic classification algorithm using a Markov random field approach. The joint distribution of class labels is explicitly modelled using the distances between feature vectors. Intuitively, a class label should depend more on class labels that are closer in the feature space than on those further away. Our approach builds on previous work by Holmes and Adams (J. R. Stat. Soc. Ser. B 64:295–306, 2002; Biometrika 90:99–112, 2003) and Cucala et al. (J. Am. Stat. Assoc. 104:263–273, 2009), and shares many of the advantages of those approaches in providing a probabilistic basis for the statistical inference. In comparison with previous work, we present a more efficient computational algorithm to overcome the intractability of the Markov random field model. The results of our algorithm are encouraging in comparison with the k-nearest-neighbour algorithm.
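The intuition in the second sentence can be sketched with a simple distance-weighted class-probability rule. This is only a caricature of the full Markov random field model, with hypothetical names and an assumed exponential decay of influence with distance.

```python
import numpy as np

def class_probabilities(x_new, X, y, beta=1.0):
    """Class probabilities for x_new: nearer training points weigh more."""
    d = np.linalg.norm(X - x_new, axis=1)
    w = np.exp(-beta * d)                  # influence decays with distance
    classes = np.unique(y)
    probs = np.array([w[y == c].sum() for c in classes])
    return classes, probs / probs.sum()
```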

13.
Classical nondecimated wavelet transforms are attractive for many applications. When the data come from complex or irregular designs, the use of second-generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second-generation wavelet transform is not obvious. In this paper we propose a new 'nondecimated' lifting transform, based on the lifting algorithm that removes one coefficient at a time, and explore its behavior. Our approach also allows for embedding adaptivity in the transform, i.e. wavelet functions can be constructed so that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method outperforms competing techniques that can work on irregular data. Our construction also opens avenues for generating a 'best' representation, which we shall explore.

14.
In this paper we present a methodology for the study of multi-dimensional aspects of poverty and deprivation. The conventional poor/non-poor dichotomy is replaced by defining poverty as a matter of degree, determined by the place of the individual in the income distribution. The fuzzy poverty measure proposed is in fact also expressible in terms of the generalised Gini measure. The same methodology facilitates the inclusion of other dimensions of deprivation into the analysis: by appropriately weighting indicators of deprivation to reflect their dispersion and correlation, we can construct measures of non-monetary deprivation in its various dimensions. These indicators illuminate the extent to which purely monetary indicators are insufficient in themselves for capturing the prevalence of deprivation. An important contribution of the paper is to identify rules for the aggregation of fuzzy sets appropriate to the study of poverty and deprivation. In particular, we define a 'composite' fuzzy set operator which takes into account whether the sets being aggregated are of a 'similar' or a 'dissimilar' type. These rules allow us to meaningfully combine income and the diverse non-income deprivation indices at the micro level and to construct what we have termed 'intensive' and 'extensive' indicators of deprivation. We note that mathematically the same approach carries over to the study of the persistence of poverty and deprivation over time.
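A minimal Python sketch of 'poverty as a matter of degree': a membership value in [0, 1] driven by the individual's place in the income distribution. This rank-based membership is only one simple choice for illustration, not the paper's exact fuzzy measure.

```python
import numpy as np

def poverty_degree(income):
    """Degree of poverty from empirical income ranks: the poorest unit
    gets membership 1, the richest 0 (n > 1 assumed; ties broken arbitrarily)."""
    income = np.asarray(income, dtype=float)
    ranks = np.argsort(np.argsort(income))
    return 1.0 - ranks / (len(income) - 1)
```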

15.
Billari (2001) introduced a new type of single-spell parametric transition-rate model: transition-rate models with a starting threshold. In such models the transition-rate function is composed of two additive terms: the first is a constant that holds at any duration; the second is a 'traditional' transition-rate function with the threshold as its time origin, added after the threshold point. The possibility of allowing for the presence of long-term survivors in the social process has not yet been dealt with, and it is of specific interest in several domains of application. In this paper, we develop the specific case of the sickle model. We discuss its features, its implementation as a starting-threshold model, and the estimation of its parameters. The sickle model with starting threshold is then applied to the union formation of Italian men and women, using Fertility and Family Survey data.
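A Python sketch of the sickle transition rate and, following the description above, a starting-threshold variant; the threshold form written here is our assumption for illustration. Because the cumulative sickle hazard is finite, a positive fraction of the population never experiences the event, which is exactly the long-term-survivor feature mentioned.

```python
import numpy as np

def sickle_hazard(t, c, lam):
    """Sickle rate r(t) = c * t * exp(-t / lam): rises, peaks at t = lam,
    then falls; its cumulative hazard is finite (c * lam**2), so the
    fraction exp(-c * lam**2) never experiences the event."""
    t = np.asarray(t, dtype=float)
    return c * t * np.exp(-t / lam)

def sickle_with_threshold(t, a, c, lam, tau):
    """Constant rate a at all durations, plus a sickle term that switches
    on after the threshold tau (an illustrative assumption)."""
    t = np.asarray(t, dtype=float)
    extra = np.where(t > tau, c * (t - tau) * np.exp(-(t - tau) / lam), 0.0)
    return a + extra
```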

16.
Recent 'marginal' methods for the regression analysis of multivariate failure time data have mostly assumed Cox (1972) model hazard functions in which the members of the cluster have distinct baseline hazard functions. In some important applications, including sibling family studies in genetic epidemiology and group-randomized intervention trials, a common baseline hazard assumption is more natural. Here we consider a weighted partial likelihood score equation for the estimation of regression parameters under a common baseline hazard model, and provide the corresponding asymptotic distribution theory. An extensive series of simulation studies is used to examine the adequacy of the asymptotic distributional approximations, and especially the efficiency gain due to weighting, as a function of the strength of within-cluster dependency and of cluster size.

17.
P-spline regression provides a flexible smoothing tool. In this paper we consider difference-type penalties in the context of nonparametric generalized linear models, and investigate the impact of the order of the differencing operator. Minimizing Akaike's information criterion, we search for a best data-driven value of the differencing order. Theoretical derivations are established for the normal model and provide insight into an 'optimal' choice of the differencing order and its interrelation with other parameters. Applications of the selection procedure to non-normal models, such as Poisson models, are given. Simulation studies investigate the performance of the selection procedure, and we illustrate its use on real data examples.
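A minimal Python sketch of the selection idea: fit a ridge-type smoother with a d-th order difference penalty (a simplified stand-in for a full P-spline basis), compute an AIC from the residual sum of squares and the effective degrees of freedom, and pick the differencing order that minimises it. The names, the fixed smoothing parameter, and the AIC variant are illustrative assumptions.

```python
import numpy as np

def fit_difference_penalized(y, lam, d):
    """Smoother minimising ||y - f||^2 + lam * ||D_d f||^2, with D_d the
    d-th order differencing operator; returns the fit and its edf."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)
    H = np.linalg.solve(np.eye(n) + lam * D.T @ D, np.eye(n))  # hat matrix
    return H @ y, np.trace(H)

def best_difference_order(y, lam=10.0, orders=(1, 2, 3, 4), sigma2=1.0):
    """Pick d minimising AIC = RSS / sigma2 + 2 * edf (one common form)."""
    aic = {}
    for d in orders:
        fit, edf = fit_difference_penalized(y, lam, d)
        aic[d] = np.sum((y - fit) ** 2) / sigma2 + 2.0 * edf
    return min(aic, key=aic.get), aic
```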

18.
The current literature views Simpson's paradox as a probabilistic conundrum by taking the premises (probabilities/parameters/frequencies) as known....

19.
Summary: Nonsymmetric correspondence analysis is a model for analysing dependence in a two-way contingency table, and is an alternative to correspondence analysis. Correspondence analysis is based on the decomposition of Pearson's Φ²-index; nonsymmetric correspondence analysis is based on the decomposition of the Goodman–Kruskal τ-index for predictability. In this paper, we approach nonsymmetric correspondence analysis as a statistical model based on a probability distribution. We provide algorithms for maximum likelihood and least-squares estimation with linear constraints on the model parameters. The nonsymmetric correspondence analysis model has many properties that can be useful for prediction analysis in contingency tables. Predictability measures are introduced to identify the categories of the response variable that can be best predicted, as well as the categories of the explanatory variable having the highest predictive power. We describe the interpretation of the model parameters in two examples. Finally, we discuss the relation of nonsymmetric correspondence analysis to other reduced-rank models.

20.
A probabilistic expert system provides a graphical representation of a joint probability distribution which can be used to simplify and localize calculations. Jensen et al. (1990) introduced a flow-propagation algorithm for calculating marginal and conditional distributions in such a system. This paper analyses that algorithm in detail, and shows how it can be modified to perform other tasks, including maximization of the joint density and simultaneous fast retraction of evidence entered on several variables.
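A minimal Python sketch of a single flow between two neighbouring cliques in a junction tree, in the HUGIN style: the sending clique is marginalised onto the separator, and the receiving clique is multiplied by the ratio of new to old separator potentials. The table shapes and variable layout are illustrative assumptions.

```python
import numpy as np

def pass_flow(clique_a, clique_b, separator):
    """One flow from clique A over separator S into clique B.

    clique_a  : potential over (X, S), shape (|X|, |S|)
    clique_b  : potential over (S, Y), shape (|S|, |Y|)
    separator : current potential over S, shape (|S|,)
    """
    new_sep = clique_a.sum(axis=0)                       # marginalise A onto S
    update = np.divide(new_sep, separator,
                       out=np.zeros_like(new_sep), where=separator != 0)
    return clique_a, clique_b * update[:, None], new_sep
```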
