Similar Literature
20 similar articles found (search time: 656 ms)
1.
The Tweedie compound Poisson distribution is a subclass of the exponential dispersion family with a power variance function, in which the value of the power index lies in the interval (1,2). It is well known that the Tweedie compound Poisson density function is not analytically tractable, and numerical procedures that allow the density to be evaluated accurately and quickly did not appear until fairly recently. Unsurprisingly, there has been little statistical literature devoted to full maximum likelihood inference for Tweedie compound Poisson mixed models. To date, the focus has been on estimation methods in the quasi-likelihood framework. Further, Tweedie compound Poisson mixed models involve an unknown variance function, which has a significant impact on hypothesis tests and predictive uncertainty measures. The estimation of the unknown variance function is thus of independent interest in many applications. However, quasi-likelihood-based methods are not well suited to this task. This paper presents several likelihood-based inferential methods for the Tweedie compound Poisson mixed model that enable estimation of the variance function from the data. These algorithms include the likelihood approximation method, in which both the integral over the random effects and the compound Poisson density function are evaluated numerically, and the latent variable approach, in which maximum likelihood estimation is carried out via the Monte Carlo EM algorithm, without the need for approximating the density function. In addition, we derive the corresponding Markov chain Monte Carlo algorithm for a Bayesian formulation of the mixed model. We demonstrate the use of the various methods through a numerical example, and conduct an array of simulation studies to evaluate the statistical properties of the proposed estimators.
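As a minimal sketch of the latent-variable representation that the Monte Carlo EM approach exploits, the snippet below simulates Tweedie compound Poisson draws as a Poisson number of gamma summands. The parameterization (Poisson rate and gamma shape/scale as functions of the mean mu, dispersion phi, and power index p) is the standard compound Poisson form of the Tweedie family; the function name `rtweedie_cp` is our own.

```python
import numpy as np

def rtweedie_cp(n, mu, phi, p, rng=None):
    """Draw n Tweedie compound Poisson variates (1 < p < 2) via the latent
    representation Y = X_1 + ... + X_N, N ~ Poisson, X_i ~ Gamma."""
    assert 1 < p < 2
    rng = np.random.default_rng(rng)
    lam = mu ** (2 - p) / (phi * (2 - p))     # Poisson rate of the latent count N
    shape = (2 - p) / (p - 1)                 # gamma shape of each summand
    scale = phi * (p - 1) * mu ** (p - 1)     # gamma scale of each summand
    counts = rng.poisson(lam, size=n)         # latent counts
    return np.array([rng.gamma(shape, scale, k).sum() if k else 0.0
                     for k in counts])

# Sanity check: sample mean close to mu, exact zeros with probability exp(-lam).
y = rtweedie_cp(100_000, mu=2.0, phi=1.5, p=1.4, rng=42)
print(y.mean(), (y == 0).mean())
```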

2.
In this paper, two probability distributions are analyzed which are formed by compounding the inverse Weibull distribution with the zero-truncated Poisson and geometric distributions. The distributions can be used to model the lifetime of a series system in which component lifetimes follow an inverse Weibull distribution and the randomly sized subgroup follows either a geometric or a zero-truncated Poisson distribution. Some of the important statistical and reliability properties of each of the distributions are derived. The distributions are found to exhibit both monotone and non-monotone failure rates. The parameters of the distributions are estimated using the expectation-maximization algorithm and the method of minimum distance estimation. The potential of the distributions is explored through three real-life data sets and compared with similar compounded distributions, viz. the Weibull-geometric, Weibull-Poisson, exponential-geometric and exponential-Poisson distributions.
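A hedged sketch of the series-system construction: the lifetime is the minimum of a geometric number of i.i.d. inverse Weibull components. We assume the inverse Weibull CDF F(x) = exp(-(beta/x)^alpha) and simulate by inversion; the function name and parameterization are ours, not the paper's.

```python
import numpy as np

def r_iw_geo(n, alpha, beta, theta, rng=None):
    """Series-system lifetimes: minimum of M i.i.d. inverse Weibull
    components, where M ~ Geometric(theta) on {1, 2, ...}."""
    rng = np.random.default_rng(rng)
    m = rng.geometric(theta, size=n)  # random subgroup sizes, m >= 1
    # Inverse Weibull by inversion of F(x) = exp(-(beta/x)**alpha):
    draw = lambda k: beta * (-np.log(rng.random(k))) ** (-1.0 / alpha)
    return np.array([draw(k).min() for k in m])
```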

3.
Frailty models are often used to model heterogeneity in survival analysis. The most common frailty model has an individual intensity which is a product of a random factor and a basic intensity common to all individuals. This paper uses the compound Poisson distribution as the random factor. It allows some individuals to be non-susceptible, which can be useful in many settings. In some diseases, one may suppose that a number of families have an increased susceptibility due to genetic circumstances. It is then logical to use a frailty model where individuals within each family share a common factor, while individuals in different families have different factors. This can be attained by randomizing the Poisson parameter in the compound Poisson distribution. To our knowledge, this is a new distribution. The power variance function distributions are used for the Poisson parameter. The resulting distributions are studied in some detail, with regard to both their form and various statistical properties. An application to infant mortality data from the Medical Birth Registry of Norway is included, where the model is compared to more traditional shared frailty models.
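A minimal sketch of the basic compound Poisson frailty (without the randomized Poisson parameter layer the paper adds): Z is a Poisson sum of gamma variables, so P(Z = 0) = exp(-rho) is the non-susceptible fraction. The function name and the choice of gamma summands here are illustrative assumptions.

```python
import numpy as np

def r_cp_frailty(n, rho, nu, scale, rng=None):
    """Compound Poisson frailties Z = X_1 + ... + X_N with N ~ Poisson(rho)
    and X_i ~ Gamma(nu, scale); P(Z = 0) = exp(-rho) gives non-susceptibles."""
    rng = np.random.default_rng(rng)
    counts = rng.poisson(rho, size=n)
    return np.array([rng.gamma(nu, scale, k).sum() if k else 0.0
                     for k in counts])

z = r_cp_frailty(100_000, rho=1.2, nu=0.8, scale=1.0, rng=0)
print((z == 0).mean(), np.exp(-1.2))  # empirical vs. theoretical mass at zero
```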

4.
X) and the overall sampling size (M). If the latter is of Poisson type with parameter λ, a sequence of M = m Bernoulli trials gives rise to a compound binomial-Poisson random variable. The estimator of the proportion p is studied within this framework, and a numerical approximation can be obtained for its sampling distribution for any sample size.
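One useful fact about this setup, by the thinning property of the Poisson distribution: if M ~ Poisson(λ) and S | M ~ Binomial(M, p), then S ~ Poisson(λp). The short check below is our own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, n = 7.0, 0.3, 200_000

m = rng.poisson(lam, size=n)   # random overall sampling size M
s = rng.binomial(m, p)         # successes in M Bernoulli trials
print(s.mean(), s.var())       # both close to lam * p = 2.1 (Poisson thinning)
```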

5.
We propose a novel alternative to case-control sampling for the estimation of individual-level risk in spatial epidemiology. Our approach uses weighted estimating equations to estimate regression parameters in the intensity function of an inhomogeneous spatial point process, when information on risk factors is available at the individual level for cases, but only at a spatially aggregated level for the population at risk. We develop data-driven methods to select the weights used in the estimating equations and show through simulation that the choice of weights can have a major impact on the efficiency of estimation. We develop a formal test to detect non-Poisson behavior in the underlying point process and assess the performance of the test using simulations of Poisson and Poisson cluster point processes. We apply our methods to data on the spatial distribution of childhood meningococcal disease cases in Merseyside, U.K. between 1981 and 2007.

6.
This paper is concerned with the analysis of repeated measures count data that are overdispersed relative to a Poisson distribution, with the overdispersion possibly heterogeneous. To accommodate the overdispersion, the Poisson random variable is compounded with a gamma random variable, and both the mean of the Poisson and the variance of the gamma are modelled using log-linear models. Maximum likelihood estimates (MLEs) are then obtained. The paper also gives extended quasi-likelihood estimates for a more general class of compounding distributions, which are shown to be approximations to the MLEs obtained for the gamma case. The theory is illustrated by modelling the determination of asbestos fibre intensity on membrane filters mounted on microscope slides.
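To see the overdispersion mechanism concretely: compounding a Poisson with a mean-one Gamma(nu, 1/nu) heterogeneity term yields a negative binomial count with Var(Y) = mu + mu^2/nu > mu. The snippet below is a generic numerical illustration of this, not the paper's asbestos model.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, nu, n = 4.0, 2.0, 200_000

g = rng.gamma(nu, 1.0 / nu, size=n)        # mean-one gamma heterogeneity
y = rng.poisson(mu * g)                    # gamma-compounded Poisson counts
print(y.mean(), y.var(), mu + mu**2 / nu)  # variance exceeds the mean
```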

7.
A representation of the innovation random variable for a gamma-distributed first-order autoregressive process was found by Lawrance (1982) in the form of a compound Poisson distribution connected with a shot-noise process. In this note we simplify Lawrance's representation by providing a direct representation in terms of density functions.

8.
Stationary renewal point processes are defined by the probability distribution of the distances between successive points (lifetimes), which are independent and identically distributed random variables. For some applications it is also of interest to characterize a renewal process by its renewal density. There are well-known expressions for this density in terms of the probability density of the lifetimes. It is more difficult to solve the inverse problem, namely determining the density of the lifetimes from the renewal density. Theoretical relationships between their Laplace transforms are available, but these transforms are often very difficult to invert in closed form. We show that this is possible for renewal processes with a dead-time property, characterized by the fact that the renewal density is zero in an interval including the origin. We present the principle of a recursive method for solving this problem and apply the method to some processes with input dead time. Computer simulations of Poisson and Erlang(2) processes show quite good agreement between theoretical calculations and experimental measurements on simulated data.
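One possible recursion of this flavor, sketched under the renewal equation h = f + f * h (convolution on [0, t]): discretizing on a uniform grid lets f be recovered point by point from h, and a dead time (h near zero at the origin) keeps the recursion well behaved. This discretization is our own illustration; the paper's recursive scheme may differ in detail.

```python
import numpy as np

def lifetime_density_from_renewal(h, dt):
    """Recover the lifetime density f from the renewal density h on a uniform
    grid, by recursively unwinding the discretized renewal equation
    h[i] = f[i] + dt * sum_{k=1}^{i-1} f[k] * h[i-k]."""
    f = np.zeros_like(h)
    for i in range(len(h)):
        conv = dt * sum(f[k] * h[i - k] for k in range(1, i))
        f[i] = h[i] - conv
    return f
```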

9.
Computer simulations of point processes are important either to verify the results of theoretical calculations that can be very awkward to carry out, or to obtain practical results when these calculations become almost impossible. One of the most common methods for the simulation of nonstationary Poisson processes is random thinning. Its extension to the case where the intensity itself is random (doubly stochastic Poisson processes) depends on the structure of this intensity. If the random intensity takes only discrete values, which is a common situation in many physical problems where quantum mechanics introduces discrete states, it is shown that the thinning method can be applied without error. We study in particular the case of a binary intensity and present the kinds of theoretical calculations that then become possible. The results of various experiments on simulated data show fairly good agreement with the theoretical calculations.
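For reference, here is the standard random-thinning construction (Lewis-Shedler style): generate a homogeneous Poisson process at a dominating rate and accept each point with probability lam(t)/lam_max. The binary-intensity example at the end is a toy assumption of ours, switching between two discrete levels.

```python
import numpy as np

def thin_poisson(lam_fn, lam_max, T, rng=None):
    """Simulate an inhomogeneous Poisson process on [0, T] by random thinning
    of a homogeneous process with rate lam_max >= lam_fn(t) for all t."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(lam_max * T)                # candidate point count
    t = np.sort(rng.uniform(0.0, T, size=n))    # candidate points
    keep = rng.random(n) < lam_fn(t) / lam_max  # accept with prob lam(t)/lam_max
    return t[keep]

# Toy binary intensity switching between two discrete levels:
events = thin_poisson(lambda t: np.where(t % 2 < 1, 5.0, 1.0), 5.0, 10.0, rng=3)
```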

10.
We investigate the transition law between consecutive observations of Ornstein–Uhlenbeck processes of infinite variation with tempered stable stationary distribution. Thanks to the Markov autoregressive structure, the transition law can be written exactly as a convolution of three random components: a compound Poisson distribution and two independent tempered stable distributions, one with stability index in (0, 1) and the other with index in (1, 2). We discuss simulation techniques for these three random elements. With the exact transition law and the proposed simulation techniques, sample path simulation proves significantly more efficient than the known approximate technique based on the infinite shot-noise series representation of tempered stable Lévy processes.

11.
This paper concerns the calculation of Bayes estimators of ratios of outcome proportions generated by the replication of an arbitrary tree-structured compound Bernoulli experiment under a multinomial-type sampling scheme. Here the compound Bernoulli experiment is treated as a collection of linear sequences of independent generalized Bernoulli trials having Dirichlet type 1 prior probability distributions. A method of obtaining a closed-form expression for the cumulative distribution function of the ratio of proportions, from its Meijer G-function representation, is described. Bayes point and interval estimators are directly obtained from the properties of the distribution function as well as its related probability density function. In addition, the density function is used to derive the probability mass function of the predictive distribution of any two associated outcome categories of the experiment under an inverse multinomial-type sampling scheme. An illustrative numerical example concerning a Bayesian analysis of a simple tree-structured mortality model for medical patients who have suffered an acute myocardial infarction (heart attack) is also included.

12.
The linear calibration problem is considered. An exact formula for the mean squared error of the inverse estimator, involving expectations of functions of a Poisson random variable, is derived. The formula may be expressed in closed form if the number of observations in the calibration experiment is odd; for an even number of observations, the numerical evaluation of a simple integral or the use of a standard table of the confluent hypergeometric function is required. Previous expressions for the mean squared error have either been asymptotic expansions or estimates obtained by simulation.
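For context, the inverse estimator analyzed here is the classical one obtained by regressing x on y and reading off the x predicted at the new response y0. The sketch below only defines that estimator; it does not reproduce the paper's exact mean squared error formula.

```python
import numpy as np

def inverse_estimator(x, y, y0):
    """Inverse estimator in linear calibration: regress x on y and
    predict the unknown x at a newly observed response y0."""
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    syy = np.sum((y - y.mean()) ** 2)
    return x.mean() + (sxy / syy) * (y0 - y.mean())
```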

13.
Many probability distributions can be represented as compound distributions. Consider some parameter vector as random. The compound distribution is the expected distribution of the variable of interest given the random parameters. Our idea is to define a partition of the domain of definition of the random parameters, so that we can represent the expected density of the variable of interest as a finite mixture of conditional densities. We then model the mixture probabilities of the conditional densities using information on population categories, thus modifying the original overall model. We thereby obtain specific models for sub-populations that stem from the overall model. The distribution of a sub-population of interest is thus completely specified in terms of mixing probabilities. All characteristics of interest can be derived from this distribution, and the comparison between sub-populations proceeds easily from the comparison of the mixing probabilities. A real example based on EU-SILC data is given, and the methodology is then investigated through simulation.

14.
In many settings it is useful to have bounds on the total variation distance between some random variable Z and its shifted version Z + 1. For example, such quantities are often needed when applying Stein's method for probability approximation. This note considers one way in which such bounds can be derived, in cases where Z either has the equilibrium distribution of some birth-death process or a mixture of such distributions. Applications of these bounds are given to translated Poisson and compound Poisson approximations for Poisson mixtures and the Pólya distribution.
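As a concrete instance of the quantity being bounded: for Z ~ Poisson(lam), d_TV(Z, Z + 1) = (1/2) * sum_k |P(Z = k) - P(Z = k - 1)|, which by unimodality of the Poisson pmf equals its modal probability, roughly 1/sqrt(2*pi*lam). The snippet below computes it numerically (our own illustration; the truncation point kmax is an assumption).

```python
import numpy as np
from scipy.stats import poisson

def tv_shift(lam, kmax=None):
    """Total variation distance between Z ~ Poisson(lam) and Z + 1,
    computed as (1/2) * sum_k |P(Z = k) - P(Z = k - 1)|."""
    kmax = kmax or int(lam + 12 * np.sqrt(lam) + 20)  # truncation of the sum
    pmf = poisson.pmf(np.arange(kmax + 1), lam)
    shifted = np.concatenate(([0.0], pmf[:-1]))       # pmf of Z + 1
    return 0.5 * np.abs(pmf - shifted).sum()

print(tv_shift(10.0))  # close to 1/sqrt(2*pi*10) ~ 0.126
```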

15.
In many industrial quality control experiments and destructive stress testing, the only available data are successive minima (or maxima), i.e., record-breaking data. There are two sampling schemes used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined and the sample size is thus a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if any, is more efficient. Since the two schemes are asymptotically equivalent, the simulations were carried out for small to moderately sized record-breaking samples. Simulated biases and mean squared errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, it was found that if the estimators were well behaved, there was no significant difference between the mean squared errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent, whereas the estimates obtained from inverse sampling were always consistent. Moreover, for moderately sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
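The two sampling schemes are easy to mimic in code. Below is a minimal sketch of both: extracting the (random number of) records from a fixed-size sample, versus drawing until a fixed number of records is observed. The function names and the generator-based `draw` interface are our own conventions.

```python
import numpy as np

def records_random_sampling(sample):
    """Record-breaking (successive minima) values from a fixed-size sample;
    the number of records is random."""
    recs, cur = [], np.inf
    for x in sample:
        if x < cur:
            recs.append(x); cur = x
    return np.array(recs)

def records_inverse_sampling(draw, r, rng=None):
    """Draw observations until r records are observed;
    the total sample size n is random."""
    rng = np.random.default_rng(rng)
    recs, cur, n = [], np.inf, 0
    while len(recs) < r:
        x, n = draw(rng), n + 1
        if x < cur:
            recs.append(x); cur = x
    return np.array(recs), n

# Example: inverse sampling of 5 records from a standard Weibull(2) stream.
recs, n_used = records_inverse_sampling(lambda g: g.weibull(2.0), r=5, rng=7)
```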

16.
We extend the problem of obtaining an estimator of the finite population mean that incorporates complete auxiliary information through calibration estimation in survey sampling to a functional data framework. The functional calibration sampling weights of the estimator are obtained by matching the calibration estimation problem with the maximum entropy on the mean (MEM) principle. In particular, the calibration estimation is viewed as an infinite-dimensional linear inverse problem following the structure of the MEM approach. We give a precise theoretical setting and estimate the functional calibration weights assuming, as prior measures, the centred Gaussian and compound Poisson random measures. Additionally, through a simple simulation study, we show that the proposed functional calibration estimator is more accurate than the Horvitz–Thompson estimator.

17.
Using a direct resampling process, a Bayesian approach is developed for the analysis of the shift-point problem. In many problems it is straightforward to isolate the marginal posterior distribution of the shift-point parameter and the conditional distribution of some of the parameters given the shift point and the other remaining parameters. When this is possible, a direct sampling approach is easily implemented whereby standard random number generators can be used to generate samples from the joint posterior distribution of all the parameters in the model. This technique is illustrated with examples involving one shift for Poisson processes and regression models.
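A minimal sketch of the Poisson one-shift case with conjugate Gamma(a, b) priors on the two rates: after integrating out the rates analytically, the marginal posterior over the shift point is available in closed form and can be sampled directly. The paper's direct resampling covers more general models; the prior values and function name here are our own assumptions.

```python
import numpy as np
from scipy.special import gammaln

def shiftpoint_posterior(y, a=1.0, b=1.0):
    """Marginal posterior over the shift point k for Poisson counts y, with
    independent Gamma(a, b) priors on the pre- and post-shift rates
    (rates integrated out analytically)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.cumsum(y)
    total = csum[-1]
    logw = np.empty(n - 1)
    for k in range(1, n):                 # shift occurs after observation k
        s1, s2 = csum[k - 1], total - csum[k - 1]
        logw[k - 1] = (gammaln(a + s1) - (a + s1) * np.log(b + k)
                       + gammaln(a + s2) - (a + s2) * np.log(b + n - k))
    w = np.exp(logw - logw.max())
    return w / w.sum()                    # P(shift = k | data), k = 1..n-1
```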

18.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation: fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are supposed to have different lifetimes, the rate of degradation is assumed to be a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for the train wheel degradation data.
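To make the time-to-failure step concrete, a sketch under the exponential degradation model: if D(t) = d0 * exp(theta * t) and failure occurs when D(t) reaches the critical level dc, then T = log(dc/d0)/theta, so an exponential rate theta induces an inverse (reciprocal) exponential law for T. The parameter values and function name below are illustrative assumptions.

```python
import numpy as np

def sim_failure_times(n, d0, dc, eta, rng=None):
    """Failure times under exponential degradation D(t) = d0 * exp(theta * t)
    with critical level dc and random rate theta ~ Exponential(rate eta)."""
    rng = np.random.default_rng(rng)
    theta = rng.exponential(1.0 / eta, size=n)  # random degradation rates
    return np.log(dc / d0) / theta              # T solves D(T) = dc

t = sim_failure_times(100_000, d0=1.0, dc=5.0, eta=2.0, rng=4)
# The induced CDF is F_T(t) = exp(-eta * log(dc/d0) / t) for t > 0.
```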

19.
In count data models, overdispersion of the dependent variable can be incorporated into the model by adding a heterogeneity term to the mean parameter of the Poisson distribution. We use a nonparametric estimate of the heterogeneity density based on a squared Kth-order polynomial expansion, which we generalize to panel data. A numerical illustration using an insurance dataset is discussed. Although some statistical analyses showed no clear differences between these new models and the standard Poisson model with gamma random effects, we show that the choice of the random effects distribution has a significant influence on the interpretation of our results.

20.
In outcome-dependent sampling, the continuous or binary outcome variable in a regression model is available in advance to guide the selection of a sample on which explanatory variables are then measured. Selection probabilities may either be a smooth function of the outcome variable or be based on a stratification of the outcome. In many cases, only data from the final sample are accessible to the analyst. A maximum likelihood approach for this data configuration is developed here for the first time. The likelihood for fully general outcome-dependent designs is stated, and the special case of Poisson sampling is then examined in more detail. The maximum likelihood estimator differs from the well-known maximum sample likelihood estimator, and an information bound result shows that the former is asymptotically more efficient. A simulation study suggests that the efficiency difference is generally small. Maximum sample likelihood estimation is therefore recommended in practice when only sample data are available. Some new smooth sample designs show considerable promise.
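A minimal sketch of the Poisson sampling design discussed here: each unit enters the sample independently, with an inclusion probability that is a smooth function of its outcome. The logistic design in the example is a hypothetical choice of ours, not one of the paper's designs.

```python
import numpy as np

def poisson_sample(y, pi_fn, rng=None):
    """Outcome-dependent Poisson sampling: each unit is selected
    independently with probability pi_fn(y_i) based on its outcome."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y)
    keep = rng.random(len(y)) < pi_fn(y)
    return np.flatnonzero(keep)  # indices of the selected units

# Example: oversample large outcomes via a smooth logistic selection function.
outcomes = np.random.default_rng(5).normal(size=1000)
idx = poisson_sample(outcomes, lambda y: 1.0 / (1.0 + np.exp(-2.0 * y)), rng=6)
```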
