Similar Documents
20 similar documents retrieved.
1.
A hierarchical model for extreme wind speeds
Summary.  A typical extreme value analysis is often carried out on the basis of simplistic inferential procedures, though the data being analysed may be structurally complex. Here we develop a hierarchical model for hourly gust maximum wind speed data, which attempts to identify site and seasonal effects for the marginal densities of hourly maxima, as well as for the serial dependence at each location. A Gaussian model for the random effects exploits the meteorological structure in the data, enabling increased precision for inferences at individual sites and in individual seasons. The Bayesian framework that is adopted is also exploited to obtain predictive return level estimates at each site, which incorporate uncertainty due to model estimation, as well as the randomness that is inherent in the processes that are involved.
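As a concrete illustration of the basic building block that this hierarchical approach extends, the sketch below fits a generalized extreme value (GEV) distribution to simulated block maxima with scipy and reads off a return level; the data, parameter values and 50-block return period are assumptions for illustration, not quantities from the paper.

```python
# Minimal sketch (not the paper's hierarchical model): fit a GEV to simulated
# block maxima of gust speed and compute an m-block return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical seasonal gust maxima in m/s (simulated, not observed data).
maxima = genextreme.rvs(c=-0.1, loc=25.0, scale=4.0, size=200, random_state=rng)

# Maximum-likelihood fit; scipy's shape parameter c corresponds to -xi
# in the usual GEV parameterization.
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)

# Level exceeded on average once every m blocks: the (1 - 1/m) quantile.
m = 50
return_level = genextreme.ppf(1.0 - 1.0 / m, c_hat, loc=loc_hat, scale=scale_hat)
print(f"estimated {m}-block return level: {return_level:.1f} m/s")
```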

2.
Estimates of the largest wind gust that will occur at a given location over a specified period are required by civil engineers. Estimation is usually based on models which are derived from the limiting distributions of maxima of stationary time series and which are fitted to data on extreme gusts. In this paper we develop a model for maximum gusts which also incorporates data on hourly mean speeds through a distributional relationship between maxima and means. This joint model is closely linked to the physical processes which generate the most extreme values and thus provides a mechanism by which data on means can augment those on gusts. It is argued that this increases the credibility of extrapolation in estimates of long period return gusts. The model is shown to provide a good fit to data obtained at a location in northern England and is compared with a more traditional modelling approach, which also performs well for this site.

3.
Estimates of extreme winds are essential for engineering design, but in preparing such estimates major statistical issues are encountered. In this case study, the analysts were provided with hourly readings on wind speed, wind direction, and barometric pressure at five Canadian stations for observation periods ranging over several recent decades. Their assignment was to calculate point and interval estimates of 10-, 20-, 50-, and 100-year return values (i.e., upper fractiles) for the wind speeds at these stations.
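For reference, one standard route to such return values (not necessarily the exact approach taken in this case study) is to fit a GEV distribution with parameters mu, sigma, xi to annual maxima and invert its distribution function; the T-year return value is then the (1 - 1/T) quantile of the fitted distribution:

```latex
% T-year return value implied by a GEV(mu, sigma, xi) fit to annual maxima.
z_T = \mu - \frac{\sigma}{\xi}\left[\,1 - \left\{-\log\!\left(1 - \tfrac{1}{T}\right)\right\}^{-\xi}\right], \qquad \xi \neq 0,
\qquad\text{and}\qquad
z_T = \mu - \sigma\,\log\!\left\{-\log\!\left(1 - \tfrac{1}{T}\right)\right\}, \qquad \xi = 0 .
```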

4.
In this article, we propose Bayesian methodology to obtain parameter estimates of mixtures of distributions belonging to the normal and biparametric Weibull families, modeling the mean and the variance parameters. Simulation studies and applications show the performance of the proposed models.

5.
Modelling count data with overdispersion and spatial effects
In this paper we consider regression models for count data allowing for overdispersion in a Bayesian framework. We account for unobserved heterogeneity in the data in two ways. On the one hand, we consider more flexible models than a common Poisson model allowing for overdispersion in different ways. In particular, the negative binomial and the generalized Poisson (GP) distribution are addressed where overdispersion is modelled by an additional model parameter. Further, zero-inflated models in which overdispersion is assumed to be caused by an excessive number of zeros are discussed. On the other hand, extra spatial variability in the data is taken into account by adding correlated spatial random effects to the models. This approach allows for an underlying spatial dependency structure which is modelled using a conditional autoregressive prior based on Pettitt et al. in Stat Comput 12(4):353–367, (2002). In an application the presented models are used to analyse the number of invasive meningococcal disease cases in Germany in the year 2004. Models are compared according to the deviance information criterion (DIC) suggested by Spiegelhalter et al. in J R Stat Soc B64(4):583–640, (2002) and using proper scoring rules, see for example Gneiting and Raftery in Technical Report no. 463, University of Washington, (2004). We observe a rather high degree of overdispersion in the data which is captured best by the GP model when spatial effects are neglected. While the addition of spatial effects to the models allowing for overdispersion gives no or only little improvement, spatial Poisson models with spatially correlated or uncorrelated random effects are to be preferred over all other models according to the considered criteria.
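The sketch below illustrates only the overdispersion ingredient: a negative binomial regression with a log link fitted by maximum likelihood to simulated counts and compared with the Poisson log-likelihood at the same fitted means. The data, covariate and dispersion value are made up, and the paper's spatial CAR random effects, generalized Poisson and zero-inflated variants are not reproduced.

```python
# Sketch (simulated data, not the meningococcal counts): maximum-likelihood fit
# of a negative binomial regression with log link, capturing overdispersion.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)                      # hypothetical covariate
mu_true = np.exp(0.5 + 0.8 * x)             # mean depends on the covariate
r_true = 2.0                                # dispersion: Var = mu + mu**2 / r
y = nbinom.rvs(r_true, r_true / (r_true + mu_true), random_state=rng)

def negbin_nll(params):
    """Negative log-likelihood of a negative binomial regression with log link."""
    b0, b1, log_r = params
    mu = np.exp(b0 + b1 * x)
    r = np.exp(log_r)
    return -nbinom.logpmf(y, r, r / (r + mu)).sum()

fit = minimize(negbin_nll, x0=np.zeros(3), method="Nelder-Mead",
               options={"maxiter": 5000})
b0_hat, b1_hat, log_r_hat = fit.x
mu_hat = np.exp(b0_hat + b1_hat * x)
print(f"Poisson log-likelihood at fitted means: {poisson.logpmf(y, mu_hat).sum():.1f}")
print(f"NegBin  log-likelihood: {-fit.fun:.1f}   dispersion r: {np.exp(log_r_hat):.2f}")
```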

6.
7.
Let a group G act on the sample space. This paper gives another proof of a theorem of Stein relating a group invariant family of posterior Bayesian probability regions to classical confidence regions when an appropriate prior is used. The example of the central multivariate normal distribution is discussed.

8.
The authors consider the issue of map positional error, or the difference between location as represented in a spatial database (i.e., a map) and the corresponding unobservable true location. They propose a fully model‐based approach that incorporates aspects of the map registration process commonly performed by users of geographic information systems, including rubber‐sheeting. They explain how estimates of positional error can be obtained, hence estimates of true location. They show that with multiple maps of varying accuracy along with ground truthing data, suitable model averaging offers a strategy for using all of the maps to learn about true location.

9.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model this corresponds to estimation of a function of the model parameters. This paper is concerned with the case that the computer model implements multiple, individually-estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.
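As a loose illustration of the kind of correction involved (not the paper's formula), the sketch below approximates a posterior mean E[g(theta)] using only the posterior mean and covariance of theta via a second-order expansion; the function g and the posterior summaries are made up.

```python
# Sketch: second-order approximation to a posterior mean E[g(theta)] from the
# posterior mean m and covariance Sigma of theta (illustrative only):
#   E[g(theta)] ~= g(m) + 0.5 * trace(H(m) @ Sigma)
import numpy as np

def g(theta):
    # Hypothetical risk function of two parameters.
    return np.exp(theta[0]) / (1.0 + np.exp(theta[0] - 2.0 * theta[1]))

def hessian(f, x, eps=1e-4):
    """Numerical Hessian by central differences."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

m = np.array([0.3, 0.1])                        # posterior means (made up)
Sigma = np.array([[0.04, 0.01], [0.01, 0.02]])  # posterior covariance (made up)

approx = g(m) + 0.5 * np.trace(hessian(g, m) @ Sigma)
print(f"first-order: {g(m):.4f}   second-order corrected: {approx:.4f}")
```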

10.
In this paper we present Bayesian analysis of finite mixtures of multivariate Poisson distributions with an unknown number of components. The multivariate Poisson distribution can be regarded as the discrete counterpart of the multivariate normal distribution, which is suitable for modelling multivariate count data. Mixtures of multivariate Poisson distributions allow for overdispersion and for negative correlations between variables. To perform Bayesian analysis of these models we adopt a reversible jump Markov chain Monte Carlo (MCMC) algorithm with birth and death moves for updating the number of components. We present results obtained from applying our modelling approach to simulated and real data. Furthermore, we apply our approach to a problem in multivariate disease mapping, namely joint modelling of diseases with correlated counts.
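A minimal sketch of the common-shock construction that underlies the multivariate (here bivariate) Poisson distribution is given below; the rates are illustrative, and the mixture machinery of the paper, which is what allows overdispersion and negative correlation, is not shown.

```python
# Sketch of the common-shock (trivariate reduction) construction of a bivariate
# Poisson: Y1 = X1 + X0, Y2 = X2 + X0 with independent Poisson X0, X1, X2.
import numpy as np

rng = np.random.default_rng(2)
lam0, lam1, lam2 = 1.5, 2.0, 3.0          # shared and component-specific rates (made up)
x0 = rng.poisson(lam0, size=100_000)
y1 = rng.poisson(lam1, size=100_000) + x0
y2 = rng.poisson(lam2, size=100_000) + x0

print("means:", y1.mean().round(2), y2.mean().round(2))   # ~ lam1+lam0, lam2+lam0
print("cov:  ", np.cov(y1, y2)[0, 1].round(2))            # ~ lam0 (positive only)
```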

11.
If at least one of two serial machines that produce a specific product in a manufacturing environment malfunctions, nonconforming items will be produced. Determining the optimal time for the machines' maintenance is one of the major concerns. While a convenient common practice for this kind of problem is to fit a single probability distribution to the combined defect data, this does not adequately capture the fact that there are two different underlying causes of failure. A better approach is to view the defects as arising from a mixture population: one part due to failures of the first machine and the other due to failures of the second. In this article, a mixture model, together with Bayesian inference and stochastic dynamic programming, is used to find the multi-stage optimal replacement strategy. Using the posterior probability that the machines are in state λ1 or λ2 (the failure rates of defective items produced by machines 1 and 2, respectively), we first formulate the problem as a stochastic dynamic programming model. Then, we derive some properties of the optimal value of the objective function and propose a solution algorithm. Finally, the application of the proposed methodology is demonstrated with a numerical example, and an error analysis is performed to evaluate the performance of the proposed procedure. The results of this analysis show that the proposed method performs satisfactorily when different numbers of observations on the times between the production of defective items are available.
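The sketch below shows only the Bayesian ingredient of such a scheme: the posterior probability that observed times between defective items were generated under failure rate λ1 rather than λ2, assuming exponential inter-defect times. The rates, prior and data are made up, and the dynamic programming recursion is not reproduced.

```python
# Sketch of the Bayes update only (not the dynamic program): posterior probability
# that the observed inter-defect times come from failure rate lam1 rather than lam2.
import numpy as np

lam1, lam2 = 0.5, 2.0                     # candidate failure rates (defects per hour, made up)
prior1 = 0.5                              # prior probability of the lam1 state
times = np.array([1.8, 2.4, 1.1, 3.0])    # hypothetical times between defects

def exp_loglik(lam, t):
    """Log-likelihood of exponential inter-defect times with rate lam."""
    return len(t) * np.log(lam) - lam * t.sum()

log_post1 = np.log(prior1) + exp_loglik(lam1, times)
log_post2 = np.log(1 - prior1) + exp_loglik(lam2, times)
post1 = 1.0 / (1.0 + np.exp(log_post2 - log_post1))
print(f"P(state lam1 | data) = {post1:.3f}")
```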

12.
In this article, it is shown that many intractable problems of Bayesian inference can be cast in a form called “artificial augmenting regression” in which application of Markov Chain Monte Carlo techniques, especially Gibbs sampling with data augmentation, is rather convenient. The new techniques are illustrated using several challenging statistical problems and numerical results are presented.

13.
It is well known that long-term exposure to high levels of pollution is hazardous to human health. Therefore, it is important to study and understand the behavior of pollutants in general. In this work, we study the occurrence of a pollutant concentration's surpassing a given threshold (an exceedance) as well as the length of time that the concentration stays above it. A general N(t)/D/1 queueing model is considered to jointly analyze those problems. A non-homogeneous Poisson process is used to model the arrivals of clusters of exceedances. Geometric and generalized negative binomial distributions are used to model the amount of time (cluster size) that the pollutant concentration stays above the threshold. A mixture model is also used for the cluster size distribution. The rate function of the non-homogeneous Poisson process is assumed to be of either the Weibull or the Musa–Okumoto type. The selection of the model that best fits the data is performed using the Bayes discrimination method and the sum of absolute differences as well as using a graphical criterion. Results are applied to the daily maximum ozone measurements provided by the monitoring network of the Metropolitan Area of Mexico City.
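As an illustration of the arrival-process component, the sketch below simulates cluster arrival times from a non-homogeneous Poisson process with a Weibull-type rate by Lewis-Shedler thinning; the shape, scale and horizon are assumed values, not estimates from the ozone data.

```python
# Sketch: simulate arrival times of exceedance clusters from a non-homogeneous
# Poisson process with Weibull-type rate lambda(t) = (b/s) * (t/s)**(b-1),
# using Lewis-Shedler thinning. Parameters are illustrative, not fitted values.
import numpy as np

rng = np.random.default_rng(3)
b, s, T = 1.5, 30.0, 365.0                 # shape, scale (days), horizon

def rate(t):
    return (b / s) * (t / s) ** (b - 1)

lam_max = rate(T)                          # rate is increasing on [0, T] for b > 1
arrivals, t = [], 0.0
while True:
    t += rng.exponential(1.0 / lam_max)    # candidate from the dominating homogeneous process
    if t > T:
        break
    if rng.uniform() < rate(t) / lam_max:  # accept with probability lambda(t) / lam_max
        arrivals.append(t)

print(f"{len(arrivals)} clusters simulated; expected number = {(T / s) ** b:.1f}")
```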

14.
Sow farm management requires appropriate methods to forecast the evolution of the sow population structure. We describe two models for this purpose. The first is a semi-Markov process model, used for long-term predictions and strategic management. The second is a state-space model for continuous proportions, used for short-term predictions and operational management.

15.
We consider Dirichlet process mixture models in which the observed clusters in any particular dataset are not viewed as belonging to a finite set of possible clusters but rather as representatives of a latent structure in which objects belong to one of a potentially infinite number of clusters. As more information is revealed the number of inferred clusters is allowed to grow. The precision parameter of the Dirichlet process is a crucial parameter that controls the number of clusters. We develop a framework for the specification of the hyperparameters associated with the prior for the precision parameter that can be used both in the presence and in the absence of subjective prior information about the level of clustering. Our approach is illustrated in an analysis of clustering brands at the magazine Which?. The results are compared with the approach of Dorazio (2009) via a simulation study.
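A small numerical illustration of why the precision parameter matters is given below: under a Dirichlet process prior with precision alpha, the prior expected number of clusters among n observations is E[K_n] = sum over i = 1..n of alpha / (alpha + i - 1). The values of alpha and n are illustrative.

```python
# Sketch: how the Dirichlet process precision parameter alpha translates into the
# prior expected number of clusters among n observations.
import numpy as np

def expected_clusters(alpha, n):
    i = np.arange(1, n + 1)
    return np.sum(alpha / (alpha + i - 1))

n = 500  # illustrative sample size
for alpha in (0.1, 1.0, 5.0, 20.0):
    print(f"alpha = {alpha:5.1f}  ->  E[number of clusters] = {expected_clusters(alpha, n):6.1f}")
```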

16.
We will pursue a Bayesian nonparametric approach in the hierarchical mixture modelling of lifetime data in two situations: density estimation, when the distribution is a mixture of parametric densities with a nonparametric mixing measure, and accelerated failure time (AFT) regression modelling, when the same type of mixture is used for the distribution of the error term. The Dirichlet process is a popular choice for the mixing measure, yielding a Dirichlet process mixture model for the error; as an alternative, we also allow the mixing measure to be equal to a normalized inverse-Gaussian prior, built from normalized inverse-Gaussian finite dimensional distributions, as recently proposed in the literature. Markov chain Monte Carlo techniques will be used to estimate the predictive distribution of the survival time, along with the posterior distribution of the regression parameters. A comparison between the two models will be carried out on the grounds of their predictive power and their ability to identify the number of components in a given mixture density.

17.
Summary.  Longitudinal modelling of lung function in Duchenne's muscular dystrophy is complicated by a mixture of both growth and decline in lung function within each subject, an unknown point of separation between these phases and significant heterogeneity between individual trajectories. Linear mixed effects models can be used, assuming a single changepoint for all cases; however, this assumption may be incorrect. The paper describes an extension of linear mixed effects modelling in which random changepoints are integrated into the model as parameters and estimated by using a stochastic EM algorithm. We find that use of this 'mixture modelling' approach improves the fit significantly.
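The sketch below fits only the within-subject building block, a broken-stick curve with a single changepoint (growth then decline), to one simulated trajectory by nonlinear least squares; the variable names, parameter values and noise level are assumptions, and the random-changepoint mixed model and stochastic EM algorithm of the paper are not shown.

```python
# Sketch of the within-subject building block: a continuous piecewise-linear
# (broken-stick) trajectory with one changepoint, fitted to one simulated subject.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(age, intercept, slope_up, slope_down, changepoint):
    return intercept + slope_up * np.minimum(age, changepoint) \
                     + slope_down * np.maximum(age - changepoint, 0.0)

rng = np.random.default_rng(4)
age = np.linspace(6, 20, 30)
# Simulated lung-function measurements: growth until the changepoint, then decline.
lung = broken_stick(age, 1.0, 0.25, -0.15, 13.0) + rng.normal(0, 0.1, age.size)

params, _ = curve_fit(broken_stick, age, lung, p0=[1.0, 0.2, -0.1, 12.0])
print("estimated changepoint (years):", round(params[3], 2))
```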

18.
In this article we propose mixtures of distributions belonging to the biparametric exponential family, considering joint modeling of the mean and variance (or dispersion) parameters. As special cases we consider mixtures of normal and gamma distributions. A novel Bayesian methodology, using Markov chain Monte Carlo (MCMC) methods, is proposed to obtain the posterior summaries of interest. We include simulations and real data examples to illustrate the performance of the proposal.

19.
The evaluation of DNA evidence in pedigrees requiring population inference
Summary. The evaluation of nuclear DNA evidence for identification purposes is performed here taking account of the uncertainty about population parameters. Graphical models are used to detail the hypotheses being debated in a trial with the aim of obtaining a directed acyclic graph. Graphs also clarify the set of evidence that contributes to population inferences and describe the conditional independence structure of DNA evidence. Numerical illustrations are provided by re-examining three case-studies taken from the literature. Our calculations of the weight of evidence differ from those given by the authors of the case-studies in that they reveal more conservative values.

20.
The label-switching problem is one of the fundamental problems in Bayesian mixture analysis. Using all the Markov chain Monte Carlo samples as initial values for the expectation-maximization (EM) algorithm, we propose to label the samples based on the modes they converge to. Our method is based on the assumption that samples which converge to the same mode have the same labels. If a relatively noninformative prior is used or the sample size is large, the posterior will be close to the likelihood, and the posterior modes can then be located approximately by applying the EM algorithm to the mixture likelihood, without assuming that a closed form of the posterior is available. To speed up the computation of this labeling method, we also propose to first cluster the samples by K-means with a large number of clusters K. Then, by assuming that the samples within each cluster have the same labels, we only need to find one converged mode for each cluster. Using a Monte Carlo simulation study and a real dataset, we demonstrate the success of our new method in dealing with the label-switching problem.
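The sketch below illustrates only the speed-up step: stacking simulated, label-switched posterior draws and clustering them with K-means (a deliberately large K) so that a mode-finding EM run is needed once per cluster rather than once per draw; the draws, dimensions and value of K are made up.

```python
# Sketch of the speed-up step only: cluster stacked MCMC draws of mixture
# parameters with K-means so that EM mode-finding runs once per cluster.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(5)
# Pretend these are 5000 posterior draws of a 4-dimensional parameter vector
# (e.g. two component means and two log standard deviations), label-switched.
draws = np.vstack([
    rng.normal([0.0, 3.0, -1.0, -0.5], 0.1, size=(2500, 4)),
    rng.normal([3.0, 0.0, -0.5, -1.0], 0.1, size=(2500, 4)),   # swapped labels
])

K = 50                                          # deliberately large number of clusters
centroids, assignment = kmeans2(draws, K, minit="++")
print("cluster sizes (first 10):", np.bincount(assignment, minlength=K)[:10])
# Next step (not shown): start EM from each of the K centroids, record which
# posterior mode it converges to, and relabel all draws in that cluster accordingly.
```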
