Similar articles
 20 similar articles found (search took 31 ms)
1.
The bipolar Watson distribution is frequently used for modeling axial data. We extend the one-way analysis of variance based on this distribution to a two-way layout, and illustrate the method with directional data in three dimensions.

2.
In this article, a non-iterative posterior sampling algorithm for the linear quantile regression model based on the asymmetric Laplace distribution is proposed. The algorithm combines the inverse Bayes formulae, sampling/importance resampling, and the expectation–maximization (EM) algorithm to obtain approximately independent and identically distributed samples from the observed posterior distribution, which eliminates the convergence problems of iterative Gibbs sampling and overcomes the difficulty of evaluating standard errors in the EM algorithm. Numerical results from simulations and an application to the classical Engel data show that the non-iterative sampling algorithm is more effective than Gibbs sampling and the EM algorithm.
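As an illustration of the sampling/importance resampling (SIR) step this abstract mentions — a minimal sketch, not the authors' full algorithm; the function and its parameters are my own naming:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_sample(log_target, proposal_draw, log_proposal, n_prop=5000, n_keep=500):
    """Sampling/importance resampling: draw from a proposal, then
    resample with weights proportional to target/proposal."""
    x = proposal_draw(n_prop)                  # draws from the proposal
    log_w = log_target(x) - log_proposal(x)    # log importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_prop, size=n_keep, replace=True, p=w)
    return x[idx]

# Toy check: target N(2, 1), proposal N(0, 2^2).
log_target = lambda x: -0.5 * (x - 2.0) ** 2
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
draws = sir_sample(log_target,
                   lambda n: rng.normal(0.0, 2.0, size=n),
                   log_proposal)
```

The resampled `draws` are approximately distributed as the target; the approximation improves as the proposal pool grows relative to the kept sample.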

3.
The Watson distribution is frequently used for modeling axial data. We propose the two-way analysis of variance for a concentrated Watson distribution defined on the hypersphere in the girdle or bipolar form. We illustrate this technique with spherical data.

4.
This paper is primarily concerned with sampling from the Fisher–Bingham distribution, for which we describe a slice sampling algorithm. A by-product of this work is an infinite mixture representation of the Fisher–Bingham distribution, with mixing distributions based on the Dirichlet distribution. Finite numerical approximations are considered, and a sampling algorithm based on a finite mixture approximation is compared with the slice sampling algorithm.
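A generic univariate slice sampler (stepping-out plus shrinkage, in the style of Neal 2003) conveys the idea; this is a sketch on a toy target, not the paper's Fisher–Bingham sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

def slice_sample(logf, x0, n, w=1.0):
    """Univariate slice sampler: sample a level under the density,
    bracket the slice by stepping out, then shrink until accepted."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        logy = logf(x) + np.log(rng.random())  # auxiliary vertical level
        L = x - w * rng.random()               # randomly placed initial bracket
        R = L + w
        while logf(L) > logy:                  # step out left
            L -= w
        while logf(R) > logy:                  # step out right
            R += w
        while True:                            # shrinkage
            x1 = rng.uniform(L, R)
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        xs[i] = x
    return xs

# Toy target: standard normal (log-density up to a constant).
draws = slice_sample(lambda x: -0.5 * x ** 2, 0.0, 5000)
```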

5.
COM-Poisson regression is an increasingly popular model for count data. Its main advantage is that it models the mean and the variance of the counts separately, allowing the same covariate to affect the average level and the variability of the response variable in different ways. A key factor limiting the use of the COM-Poisson distribution is the calculation of the normalisation constant: its accurate evaluation can be time-consuming and is not always feasible. We circumvent this problem, in the context of estimating a Bayesian COM-Poisson regression, by resorting to the exchange algorithm, an MCMC method applicable to situations where the sampling model (likelihood) can only be computed up to a normalisation constant. The algorithm requires drawing from the sampling model, which in the case of the COM-Poisson distribution can be done efficiently using rejection sampling. We illustrate the method, and the benefits of using a Bayesian COM-Poisson regression model, through a simulation and two real-world data sets with different levels of dispersion.
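To see why the normalisation constant is the bottleneck, here is a sketch of its truncated-series evaluation, Z(λ, ν) = Σ_j λ^j/(j!)^ν; the function name and truncation rule are my own, not from the paper:

```python
import math

def com_poisson_logZ(lam, nu, tol=1e-12, max_terms=10000):
    """Truncated series for the COM-Poisson normalising constant
    Z(lam, nu) = sum_j lam^j / (j!)^nu, in log space for stability."""
    log_terms = []
    log_t = 0.0                                   # j = 0 term equals 1
    for j in range(1, max_terms):
        log_t += math.log(lam) - nu * math.log(j) # log-ratio of consecutive terms
        log_terms.append(log_t)
        if log_t < math.log(tol) and j > lam:     # terms decreasing and negligible
            break
    m = max(0.0, max(log_terms))                  # log-sum-exp over all terms
    return m + math.log(sum(math.exp(lt - m) for lt in [0.0] + log_terms))

# Sanity check: with nu = 1 the COM-Poisson reduces to Poisson, so log Z = lam.
print(com_poisson_logZ(3.0, 1.0))  # ≈ 3.0
```

For strongly underdispersed or overdispersed fits this series must be re-evaluated at every likelihood call, which is what the exchange algorithm avoids.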

6.
One method of controlling the quality of incoming lots is through attribute sampling. To simultaneously control several (possibly dependent) attributes, properly chosen single attribute sampling plans can be merged into a multiple attribute sampling plan. The general form of such a plan is given and various alternatives are discussed. The multinomial distribution is used to develop formulae necessary for an analysis of a multiple attribute plan. Due to the lengthy nature of the calculations involved, a computer algorithm is outlined.
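A minimal sketch of the kind of multinomial calculation involved — the acceptance probability of a plan that accepts a lot only if every defect-type count stays within its acceptance number. This is my own illustration, not the article's algorithm:

```python
from math import comb, prod
from itertools import product

def accept_prob(n, p, c):
    """P(accept) for a multiple attribute sampling plan: n items are
    classified into defect types with probabilities p[i] (and 'good'
    with the remainder); accept iff each count d_i <= c[i].
    Summed directly over the multinomial distribution."""
    p_good = 1.0 - sum(p)
    total = 0.0
    for d in product(*(range(ci + 1) for ci in c)):  # all accepting outcomes
        m = sum(d)
        if m > n:
            continue
        coef = 1                                     # multinomial coefficient
        rem = n
        for di in d:
            coef *= comb(rem, di)
            rem -= di
        total += coef * prod(pi ** di for pi, di in zip(p, d)) * p_good ** (n - m)
    return total

# One defect type reduces to the binomial single sampling plan.
print(accept_prob(20, [0.05], [1]))
```

The enumeration grows with the product of the acceptance numbers, which is why the article notes the calculations are lengthy enough to warrant a computer algorithm.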

7.
Layer Sampling     
Layer sampling is an algorithm for generating variates from a non-normalized multidimensional distribution p(·). It empirically constructs a majorizing function for p(·) from a sequence of layers. The method first selects a layer based on the previous variate. Next, a sample is drawn from the selected layer, using a method such as rejection sampling. Layer sampling is regenerative. At regeneration times, the layers may be adapted to increase mixing of the Markov chain. Layer sampling may also be used to estimate arbitrary integrals, including normalizing constants.
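The rejection-sampling primitive used within each layer looks like this — a generic sketch with my own names, not the layer-sampling algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def rejection_sample(target, majorizer_draw, majorizer_pdf, c, n):
    """Accept a proposal x with probability target(x) / (c * majorizer_pdf(x)),
    valid whenever c * majorizer_pdf dominates target everywhere."""
    out = []
    while len(out) < n:
        x = majorizer_draw()
        if rng.random() * c * majorizer_pdf(x) <= target(x):
            out.append(x)
    return np.array(out)

# Toy example: Beta(2, 2) density 6x(1-x) (max 1.5) under a uniform majorizer.
target = lambda x: 6.0 * x * (1.0 - x)
draws = rejection_sample(target,
                         lambda: rng.uniform(0.0, 1.0),
                         lambda x: 1.0,
                         c=1.5, n=4000)
```

Layer sampling's contribution is constructing the majorizer empirically, layer by layer, rather than requiring it in closed form.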

8.
From the exact distribution of the maximum likelihood estimator of the average lifetime based on a progressive hybrid exponential censored sample, we derive an explicit expression for the Bayes risk of a sampling plan when a quadratic loss function is used. The simulated annealing algorithm is then used to determine the optimal sampling plan. Some optimal Bayes solutions under progressive hybrid and ordinary hybrid censoring schemes are presented to illustrate the effectiveness of the proposed method.

9.
The Watson distribution is one of the most widely used distributions for modeling axial data. In some situations, it is important to investigate whether several Watson populations differ significantly. In this paper, we develop likelihood ratio tests and the ANOVA for testing the hypothesis of equality of the directional parameters of several Watson distributions with different concentrations. We also determine the empirical power of the ANOVA and LR tests for some dimensions of the sphere.

10.
An algorithm for sampling from non-log-concave multivariate distributions is proposed; it improves the adaptive rejection Metropolis sampling (ARMS) algorithm by incorporating hit-and-run sampling. It is not rare for ARMS to become trapped away from a subspace carrying significant probability in the support of the multivariate distribution. Whereas ARMS updates samples only in directions parallel to the coordinate axes, the proposed method, hit-and-run ARMS (HARARMS), updates samples in arbitrary directions determined by the hit-and-run algorithm, which makes it almost impossible to become trapped in any isolated subspace. HARARMS performs the same as ARMS in a single dimension but is more reliable in multidimensional spaces. Its performance is illustrated by a Bayesian free-knot spline regression example, where it overcomes the well-known 'lethargy' property and decisively finds the globally optimal number and locations of the knots of the spline function.
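The hit-and-run idea — propose along a uniformly random direction rather than a coordinate axis — can be sketched with a simple Metropolis accept step standing in for the ARMS update; this is an illustration of the direction mechanism only, not HARARMS:

```python
import numpy as np

rng = np.random.default_rng(3)

def hit_and_run_metropolis(logf, x0, n, step=1.0):
    """Hit-and-run proposals: a uniformly random direction on the sphere,
    a Gaussian step along it, and a symmetric Metropolis accept/reject."""
    x = np.asarray(x0, dtype=float)
    out = np.empty((n, x.size))
    for i in range(n):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # uniform direction on the unit sphere
        prop = x + step * rng.normal() * d   # move along that direction
        if np.log(rng.random()) < logf(prop) - logf(x):
            x = prop
        out[i] = x
    return out

# Target: standard bivariate normal, started away from the mode.
draws = hit_and_run_metropolis(lambda z: -0.5 * z @ z, [3.0, -3.0], 8000)
```

Because the direction is re-drawn each iteration, no isolated region aligned with (or away from) the axes stays unreachable, which is the property the abstract emphasises.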

11.
In this paper, we discuss fully Bayesian quantile inference using the Markov chain Monte Carlo (MCMC) method for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th level. We overcome the current computational limitations using two approaches: the general MCMC technique with the Metropolis–Hastings algorithm, and Gibbs sampling from the full conditional distributions. These two methods outperform traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse mouse growth data and obtain some conclusions that differ from those in the literature.

12.
As the Watson distribution is frequently used for modeling axial data, it is important to investigate the existence of possible outliers in samples from this distribution. We develop, for the bipolar Watson distribution defined on the hypersphere, tests of discordancy of one outlier or of several outliers en bloc, based on the likelihood ratio under an alternative contamination model of slippage type. We evaluate the performance of these tests of discordancy for a single outlier and compare them with other discordancy tests available for this distribution.

13.
R. Martínez & M. Mota, Statistics, 2013, 47(4): 367–378
For a controlled branching process (CBP) with offspring distribution belonging to the power series family, the asymptotic normality of the posterior distribution of the basic parameter and the offspring mean is proved. As practical applications, we calculate asymptotic high probability density credibility sets for the offspring mean and we provide a rule to make inference about the value of this parameter. Moreover, the asymptotic posterior normality of the respective parameters of two classical branching models, namely the standard Galton–Watson process and the Galton–Watson process with immigration, is derived as particular cases of the CBP.

14.
The Hastings–Metropolis algorithm is a general MCMC method for sampling from a density known only up to a constant. Geometric convergence of this algorithm has been proved under conditions on the instrumental (or proposal) distribution. We present an inhomogeneous Hastings–Metropolis algorithm in which the proposal density approximates the target density as the number of iterations increases. The proposal density at the n-th step is a non-parametric estimate of the density of the algorithm, and uses an increasing number of i.i.d. copies of the Markov chain. The resulting algorithm converges (in n) geometrically faster than a Hastings–Metropolis algorithm with any fixed proposal distribution. The case of a strictly positive density with compact support is presented first; an extension to more general densities follows. We conclude by proposing a practical way of implementing the algorithm, and illustrate it on simulated examples.
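The baseline being improved here is the independence Hastings–Metropolis sampler with a fixed proposal; a minimal sketch (the paper's adaptive, non-parametric proposal is not implemented — this uses a fixed Gaussian proposal, with names of my own):

```python
import numpy as np

rng = np.random.default_rng(4)

def independence_mh(log_target, prop_draw, log_prop, x0, n):
    """Independence Hastings-Metropolis: proposals are i.i.d. from a fixed
    density; the acceptance ratio corrects for the target/proposal mismatch."""
    x = x0
    lx = log_target(x) - log_prop(x)   # log importance weight of current state
    out = np.empty(n)
    for i in range(n):
        y = prop_draw()
        ly = log_target(y) - log_prop(y)
        if np.log(rng.random()) < ly - lx:
            x, lx = y, ly
        out[i] = x
    return out

# Target N(1, 1); proposal N(0, 2^2), wide enough to cover the target well.
log_t = lambda x: -0.5 * (x - 1.0) ** 2
log_p = lambda x: -0.5 * (x / 2.0) ** 2
draws = independence_mh(log_t, lambda: rng.normal(0.0, 2.0), log_p, 0.0, 8000)
```

The closer the proposal is to the target, the higher the acceptance rate — which is exactly why letting the proposal adapt toward the target, as in the paper, speeds up convergence.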

15.
Gibbs sampler as a computer-intensive algorithm is an important statistical tool both in application and in theoretical work. This algorithm, in many cases, is time-consuming; this paper extends the concept of using the steady-state ranked simulated sampling approach, utilized in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach provides unbiased estimators, in the case of estimating the means and the distribution function, and substantially improves the performance of the Gibbs sampling algorithm and convergence, which results in a significant reduction in the costs and time required to attain a certain level of accuracy. Similar to Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method using the same illustrations.
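For reference, the plain Gibbs sampler that such work sets out to improve, on the classic bivariate-normal illustration (the same kind of example used by Casella and George); a sketch with my own function name:

```python
import numpy as np

rng = np.random.default_rng(5)

def gibbs_bivariate_normal(rho, n, burn=500):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    alternate draws from the full conditionals
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)."""
    s = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    out = np.empty((n, 2))
    for i in range(burn + n):
        x = rng.normal(rho * y, s)
        y = rng.normal(rho * x, s)
        if i >= burn:
            out[i - burn] = x, y
    return out

draws = gibbs_bivariate_normal(0.8, 8000)
```

The stronger the correlation, the slower the chain mixes, which is the cost that variance-reduction schemes like ranked simulated sampling aim to cut.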

16.
This paper considers the statistical analysis of a competing risks model under Type-I progressively hybrid censoring from a Weibull distribution. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters, and use the bootstrap method to construct confidence intervals. Based on a non-informative prior, a sampling algorithm using the acceptance–rejection method is presented to obtain the Bayes estimates, and the Monte Carlo method is employed to construct the highest posterior density credible intervals. Simulation results are provided to show the effectiveness of all the methods discussed, and one data set is analyzed.

17.
Consider the exchangeable Bayesian hierarchical model where observations yi are independently distributed from sampling densities with unknown means, the means µi are a random sample from a distribution g, and the parameters of g are assigned a known distribution h. A simple algorithm is presented for summarizing the posterior distribution based on Gibbs sampling and the Metropolis algorithm. The software program Matlab is used to implement the algorithm and provide a graphical output analysis. A binomial example is used to illustrate the flexibility of modeling possible with this algorithm. Methods of model checking and extensions to hierarchical regression modeling are discussed.

18.
The likelihood ratio method is used to construct a confidence interval for a population mean when sampling from a population with certain characteristics found in many applications, such as auditing. Specifically, a sample taken from this type of population usually consists of a very large number of zero values, plus a small number of nonzero values that follow some continuous distribution. In this situation, the traditional confidence interval constructed for the population mean is known to be unreliable. This article derives confidence intervals based on the likelihood-ratio-test approach by assuming (1) a normal distribution (normal algorithm) and (2) an exponential distribution (exponential algorithm). Because the error population distribution is usually unknown, it is important to study the robustness of the proposed procedures. We perform an extensive simulation study to compare the percentage of confidence intervals containing the true population mean using the two proposed algorithms with the percentage obtained from the traditional method based on the central limit theorem. It is shown that the normal algorithm is the most robust procedure against many different distributional error assumptions.

19.
The generalized inverse Gaussian distribution has become quite popular in financial engineering. The most popular random variate generator is due to Dagpunar (Commun. Stat., Simul. Comput. 18:703–710, 1989). It is an acceptance–rejection algorithm based on the ratio-of-uniforms method. However, it is not uniformly fast, as it has a prohibitively large rejection constant when the distribution is close to the gamma distribution. Recently some papers have discussed universal methods that are suitable for this distribution; however, these methods require an expensive setup and are therefore not suitable for the varying-parameter case which occurs in, e.g., Gibbs sampling. In this paper we analyze the performance of Dagpunar's algorithm and combine it with a new rejection method which ensures a uniformly fast generator. As its setup is rather short, it is particularly suitable for the varying-parameter case.
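A sketch of the underlying ratio-of-uniforms method, shown on the standard normal rather than the generalized inverse Gaussian (the GIG bounding constants are what Dagpunar's algorithm works out; the names here are mine):

```python
import numpy as np

rng = np.random.default_rng(6)

def ratio_of_uniforms(f, u_max, v_min, v_max, n):
    """Ratio-of-uniforms: if (U, V) is uniform on
    A = {(u, v): 0 < u <= sqrt(f(v/u))}, then X = V/U has density
    proportional to f. A is sampled by rejection from a bounding box."""
    out = []
    while len(out) < n:
        u = rng.uniform(0.0, u_max)
        v = rng.uniform(v_min, v_max)
        if u * u <= f(v / u):          # accept iff (u, v) lies in A
            out.append(v / u)
    return np.array(out)

# Standard normal kernel f(x) = exp(-x^2/2):
# u_max = sup sqrt(f) = 1 and |v| <= sup |x| sqrt(f(x)) = sqrt(2/e).
f = lambda x: np.exp(-0.5 * x * x)
b = np.sqrt(2.0 / np.e)
draws = ratio_of_uniforms(f, 1.0, -b, b, 4000)
```

The "rejection constant" the abstract refers to is the ratio of the bounding region's area to that of A; when it blows up, as for near-gamma GIG shapes, almost every box draw is rejected.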

20.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on sample size in each iteration and a stopping time for the algorithm. An important new concept is acquiring absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike the penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.
