Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
The paper develops some objective priors for the common mean in the one-way random effects model with heterogeneous error variances. We derive the first- and second-order matching priors and reference priors. It turns out that the second-order matching prior matches the alternative coverage probabilities up to the second order, and is also an HPD matching prior. However, the derived reference priors satisfy only a first-order matching criterion. Our simulation studies indicate that the second-order matching prior performs better than the reference prior and the Jeffreys prior in terms of matching the target coverage probabilities in a frequentist sense. We also illustrate our results using real data.

2.
Non-parametric Bayesian Estimation of a Spatial Poisson Intensity
A method introduced by Arjas & Gasbarra (1994) and later modified by Arjas & Heikkinen (1997) for the non-parametric Bayesian estimation of an intensity on the real line is generalized to cover spatial processes. The method is based on a model approximation where the approximating intensities have the structure of a piecewise constant function. Random step functions on the plane are generated using Voronoi tessellations of random point patterns. Smoothing between nearby intensity values is applied by means of a Markov random field prior in the spirit of Bayesian image analysis. The performance of the method is illustrated in examples with both real and simulated data.
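As a rough illustration of the piecewise-constant approximation described above, the sketch below (Python, with hypothetical generator locations and intensity levels) assigns each location the level of its nearest generating point, which is exactly the Voronoi-cell structure; the Markov random field smoothing prior and the posterior sampling are not reproduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Hypothetical generating points of the tessellation and one intensity level per cell.
    generators = rng.uniform(0.0, 1.0, size=(25, 2))       # random point pattern on the unit square
    levels = rng.gamma(shape=2.0, scale=10.0, size=25)     # constant intensity within each Voronoi cell

    tree = cKDTree(generators)

    def step_intensity(locations):
        """Piecewise-constant intensity: each location inherits the level of the
        nearest generator, i.e. of the Voronoi cell that contains it."""
        _, cell = tree.query(locations)
        return levels[cell]

    # Evaluate the step function on a regular grid of the unit square.
    xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    intensity_surface = step_intensity(grid).reshape(100, 100)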

3.
We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion that is common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relationship between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relationship gives insight into the meaning of the Besov space parameters. Moreover, the relationship established makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be difficult to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set that was collected in an anaesthesiological study.
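The paper's rule is the posterior median under a sparsity-inducing prior on the wavelet coefficients; as a loose stand-in, the sketch below applies ordinary soft thresholding with PyWavelets (the wavelet, decomposition level, and universal threshold are illustrative assumptions, not the Bayesian rule of the paper).

    import numpy as np
    import pywt

    rng = np.random.default_rng(1)

    # Noisy observations of an unknown function on a dyadic grid.
    n = 1024
    x = np.linspace(0.0, 1.0, n)
    signal = np.sin(4 * np.pi * x) + (x > 0.5)           # smooth part plus a jump
    y = signal + 0.3 * rng.standard_normal(n)

    # Decompose, shrink the detail coefficients, and reconstruct.
    coeffs = pywt.wavedec(y, "db4", level=6)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from the finest level
    thr = sigma * np.sqrt(2.0 * np.log(n))               # universal threshold (illustrative choice)
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    estimate = pywt.waverec(shrunk, "db4")[:n]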

4.
In this article, we propose a new method to select a discrete model f(x; θ), based on the conditional density of a sample given the value of a sufficient statistic for θ. The main idea is to work with a broad family of discrete distributions, called the family of power series distributions, for which there is a common sufficient statistic for the parameter of interest. The proposed method uses the maximum conditional density in order to select the best model.

We compare our proposal with the usual methodology based on Bayes factors. We provide several examples that show that our proposal works well in most instances. Bayes factors are strongly dependent on the prior information about the parameters. Since our method does not require the specification of a prior distribution, it provides a useful alternative to Bayes factors.

5.
In this paper, we consider some noninformative priors for the common mean in a bivariate normal population. We develop the first-order and second-order matching priors and reference priors. We find that the second-order matching prior is also an HPD matching prior, and matches the alternative coverage probabilities up to the second order. It turns out that the derived reference priors do not satisfy a second-order matching criterion. Our simulation study indicates that the second-order matching prior performs better than the reference priors in terms of matching the target coverage probabilities in a frequentist sense. We also illustrate our results using real data.

6.
Statistical meta-analysis is mostly carried out with the help of the random-effects normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta-analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for binomial random variables with several zero responses. Particular attention is paid to the coherence between the prior distributions of the study model parameters and the meta-parameter. Thus, our method introduces a simple way to examine the sensitivity of these quantities to the dependence structure selected for the study. For illustrative purposes, an example with real data is analysed using the proposed Bayesian meta-analysis model for binomial sparse data.

7.
We present a criterion for the optimal aggregation of industries in input-output analysis, when the aggregation is made prior to data collection. This criterion is based on a model assuming that the input-output coefficients are used in some decision process, and that they are unknown (or random) prior to data collection. We show that our criterion, based on decision-theoretic considerations, differs considerably from traditional criteria for good aggregation. Our model is also applied, as an example, to the Netherlands economy.

8.
We discuss the problem of selecting among alternative parametric models within the Bayesian framework. For model selection problems which involve non-nested models, the common objective choice of a prior on the model space is the uniform distribution. The same applies to situations where the models are nested. It is our contention that assigning equal prior probability to each model is overly simplistic. Consequently, we introduce a novel approach to objectively determine model prior probabilities, conditionally on the choice of priors for the parameters of the models. The idea is based on the notion of the worth of having each model within the selection process. At the heart of the procedure is the measure of this worth using the Kullback–Leibler divergence between densities from different models.
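The Kullback–Leibler building block mentioned above is straightforward to approximate by Monte Carlo; the sketch below estimates KL(f || g) for two illustrative non-nested candidate densities (how such divergences are turned into the "worth" of a model and then into prior model probabilities follows the paper and is not reproduced here).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def kl_monte_carlo(logpdf_f, logpdf_g, sample_f, n=100_000):
        """Monte Carlo estimate of KL(f || g) = E_f[log f(X) - log g(X)]."""
        x = sample_f(n)
        return np.mean(logpdf_f(x) - logpdf_g(x))

    # Two illustrative (non-nested) candidate sampling densities.
    f = stats.lognorm(s=0.5)
    g = stats.gamma(a=3.0, scale=0.5)

    kl_fg = kl_monte_carlo(f.logpdf, g.logpdf, lambda n: f.rvs(size=n, random_state=rng))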

9.
Multivariate mixture regression models can be used to investigate the relationships between two or more response variables and a set of predictor variables by taking into consideration unobserved population heterogeneity. It is common to take multivariate normal distributions as mixing components, but this mixing model is sensitive to heavy-tailed errors and outliers. Although normal mixture models can approximate any distribution in principle, the number of components needed to account for heavy-tailed distributions can be very large. Mixture regression models based on the multivariate t distributions can be considered a robust alternative approach. Missing data are inevitable in many situations, and parameter estimates could be biased if the missing values are not handled properly. In this paper, we propose a multivariate t mixture regression model with missing information to model heterogeneity in the regression function in the presence of outliers and missing values. Along with the robust parameter estimation, our proposed method can be used for (i) visualization of the partial correlation between response variables across latent classes and heterogeneous regressions, and (ii) outlier detection and robust clustering even under the presence of missing values. We also propose a multivariate t mixture regression model using MM-estimation with missing information that is robust to high-leverage outliers. The proposed methodologies are illustrated through simulation studies and real data analysis.

10.
We consider the fitting of a Bayesian model to grouped data in which observations are assumed normally distributed around group means that are themselves normally distributed, and consider several alternatives for accommodating the possibility of heteroscedasticity within the data. We consider the case where the underlying distribution of the variances is unknown, and investigate several candidate prior distributions for those variances. In each case, the parameters of the candidate priors (the hyperparameters) are themselves given uninformative priors (hyperpriors). The most mathematically convenient model for the group variances is to assign them inverse gamma priors, the inverse gamma distribution being the conjugate prior for the unknown variance of a normal population. We demonstrate that for a wide class of underlying distributions of the group variances, a model that assigns the variances an inverse gamma prior displays favorable goodness-of-fit properties relative to other candidate priors, and hence may be used as a standard for modeling such data. This allows us to take advantage of the elegant mathematical property of prior conjugacy in a wide variety of contexts without compromising model fitness. We test our findings on nine real-world publicly available datasets from different domains, and on a wide range of artificially generated datasets.
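For reference, the conjugacy that makes the inverse gamma prior mathematically convenient is the standard normal/inverse-gamma update; with the group mean treated as known for clarity, it reads
\[
\sigma^2 \sim \mathrm{InvGamma}(\alpha, \beta), \quad
y_1,\dots,y_n \mid \sigma^2 \overset{\mathrm{iid}}{\sim} \mathcal{N}(\mu, \sigma^2)
\;\Longrightarrow\;
\sigma^2 \mid y \sim \mathrm{InvGamma}\Bigl(\alpha + \tfrac{n}{2},\;
\beta + \tfrac{1}{2}\textstyle\sum_{i=1}^{n}(y_i - \mu)^2\Bigr),
\]
so the posterior stays in the same family and the hyperparameters are updated in closed form.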

11.
The paper develops a systematic estimation and inference procedure for quantile regression models where there may exist a common threshold effect across different quantile indices. We first propose a sup-Wald test for the existence of a threshold effect, and then study the asymptotic properties of the estimators in a threshold quantile regression model under the shrinking threshold effect framework. We consider several tests for the presence of a common threshold value across different quantile indices and obtain their limiting distributions. We apply our methodology to study the pricing strategy for reputation through the use of a data set from Taobao.com. In our economic model, an online seller maximizes the sum of the profit from current sales and the possible future gain from a targeted higher reputation level. We show that the model can predict a jump in optimal pricing behavior, which we refer to as the “reputation effect” in this paper. The use of the threshold quantile regression model allows us to identify and explore the reputation effect and its heterogeneity in the data. We find both reputation effects and common thresholds for a range of quantile indices in sellers' pricing strategies in our application.

12.
An important aspect of paired comparison experiments is the decision of how to form pairs in advance of collecting data. A weakness of typical paired comparison experimental designs is the difficulty in incorporating prior information, which can be particularly relevant for the design of tournament schedules for players of games and sports. Pairing methods that make use of prior information are often ad hoc algorithms with little or no formal basis. The problem of pairing objects can be formalized as a Bayesian optimal design. Assuming a linear paired comparison model for outcomes, we develop a pairing method that maximizes the expected gain in Kullback–Leibler information from the prior to the posterior distribution. The optimal pairing is determined using a combinatorial optimization method commonly used in graph-theoretic contexts. We discuss the properties of our optimal pairing criterion, and demonstrate our method as an adaptive procedure for pairing objects multiple times. We compare the performance of our method on simulated data against random pairings, and against a system that is currently in use in tournament chess.
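The combinatorial step can be sketched as follows (Python). The pairwise weight used here is a hypothetical stand-in for the expected Kullback–Leibler information gain of the paper, which depends on the assumed linear paired comparison model and is not reproduced; only the maximum-weight matching machinery is shown.

    import itertools
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)

    # Placeholder prior beliefs about each player's strength (mean and variance).
    n_players = 8
    prior_mean = rng.normal(size=n_players)
    prior_var = rng.uniform(0.5, 2.0, size=n_players)

    def pair_weight(i, j):
        """Hypothetical stand-in for the expected information gain of pairing i and j:
        favour pairs that are uncertain and close in prior strength."""
        return (prior_var[i] + prior_var[j]) / (1.0 + abs(prior_mean[i] - prior_mean[j]))

    G = nx.Graph()
    for i, j in itertools.combinations(range(n_players), 2):
        G.add_edge(i, j, weight=pair_weight(i, j))

    # A maximum-weight perfect matching on the complete graph gives the round's pairings.
    pairing = nx.max_weight_matching(G, maxcardinality=True)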

13.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.

14.
The mixed effects model, in its various forms, is a common model in applied statistics. A useful strategy for fitting this model implements EM-type algorithms by treating the random effects as missing data. Such implementations, however, can be painfully slow when the variances of the random effects are small relative to the residual variance. In this paper, we apply the 'working parameter' approach to derive alternative EM-type implementations for fitting mixed effects models, which we show empirically can be hundreds of times faster than the common EM-type implementations. In our limited simulations, they also compare well with the routines in S-PLUS® and Stata® in terms of both speed and reliability. The central idea of the working parameter approach is to search for efficient data augmentation schemes for implementing the EM algorithm by minimizing the augmented information over the working parameter, and in the mixed effects setting this leads to a transfer of the mixed effects variances into the regression slope parameters. We also describe a variation for computing the restricted maximum likelihood estimate and an adaptive algorithm that takes advantage of both the standard and the alternative EM-type implementations.
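To make the 'random effects as missing data' idea concrete, here is a minimal EM sketch for a balanced one-way model y_ij = mu + b_i + e_ij (Python). It is the baseline scheme whose slow convergence the paper addresses, not the accelerated working-parameter implementation.

    import numpy as np

    def em_one_way(y, n_iter=200):
        """EM for y[i, j] = mu + b_i + e_ij with b_i ~ N(0, s2_b), e_ij ~ N(0, s2_e),
        treating the random effects b_i as the missing data (balanced design)."""
        m, n = y.shape
        mu, s2_b, s2_e = y.mean(), y.var() / 2.0, y.var() / 2.0   # crude starting values
        for _ in range(n_iter):
            # E-step: posterior mean and variance of each b_i given the data.
            shrink = n * s2_b / (s2_e + n * s2_b)
            b_hat = shrink * (y.mean(axis=1) - mu)
            v_b = s2_b * s2_e / (s2_e + n * s2_b)
            # M-step: update the parameters from the expected complete-data statistics.
            mu = (y - b_hat[:, None]).mean()
            s2_b = np.mean(b_hat ** 2) + v_b
            s2_e = np.mean((y - mu - b_hat[:, None]) ** 2) + v_b
            # Convergence is slow when s2_b is small relative to s2_e, as noted above.
        return mu, s2_b, s2_e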

15.
We develop and apply an approach to the spatial interpolation of a vector-valued random response field. The Bayesian approach we adopt enables uncertainty about the underlying models to be represented when expressing the accuracy of the resulting interpolants. The methodology is particularly relevant in environmetrics, where vector-valued responses are only observed at designated sites at successive time points. The theory allows space-time modelling at the second level of the hierarchical prior model so that uncertainty about the model parameters has been fully expressed at the first level. In this way, we avoid unduly optimistic estimates of inferential accuracy. Moreover, the prior model can be upgraded with any available new data, while past data can be used in a systematic way to fit model parameters. The theory is based on the multivariate normal and related joint distributions. Our hierarchical prior models lead to posterior distributions which are robust with respect to the choice of the prior (hyperparameters). We illustrate our theory with an example involving monitoring stations in southern Ontario, where monthly average levels of ozone, sulphate, and nitrate are available and between-station response triplets are interpolated. In this example we use a recently developed method for interpolating spatial correlation fields.

16.
Spatio-temporal processes are often high-dimensional, exhibiting complicated variability across space and time. Traditional state-space model approaches to such processes in the presence of uncertain data have been shown to be useful. However, estimation of state-space models in this context is often problematic since parameter vectors and matrices are of high dimension and can have complicated dependence structures. We propose a spatio-temporal dynamic model formulation with parameter matrices restricted based on prior scientific knowledge and/or common spatial models. Estimation is carried out via the expectation–maximization (EM) algorithm or general EM algorithm. Several parameterization strategies are proposed and analytical or computational closed form EM update equations are derived for each. We apply the methodology to a model based on an advection–diffusion partial differential equation in a simulation study and also to a dimension-reduced model for a Palmer Drought Severity Index (PDSI) data set.

17.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate 'sieve' and applying the general theory of posterior contraction rates. We apply this general result to several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracies of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method to the Tecator data using functional regression models.

18.
In this article, we consider a new regression model for counting processes under a proportional hazards assumption. This model is motivated by the need to understand the evolution of the booking process of a railway company. The main novelty of the approach consists in assuming that the baseline hazard function is piecewise constant, with unknown times of jump (these times of jump are estimated from the data as model parameters). Hence, the parameters of the model can be separated into two different types: parameters that measure the influence of the covariates, and parameters from a multiple change-point model for the baseline. Cox's semiparametric regression can be seen as a limit case of our model. We develop an iterative procedure to estimate the different parameters, and a test procedure that allows us to perform change-point detection in the baseline. Our technique is supported by simulation studies and a real data analysis, which show that our model can be a reasonable alternative to Cox's regression model, particularly in the presence of tied event times.
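With a piecewise-constant baseline and proportional hazards, the model reduces, for a single (possibly censored) event time per subject, to the familiar piecewise-exponential form; the paper itself works with the more general counting-process formulation, so the display below is only the standard special case:
\[
\lambda(t \mid x_i) \;=\; \exp(x_i^\top \beta) \sum_{k=1}^{K} \lambda_k \,
\mathbf{1}\{\tau_{k-1} \le t < \tau_k\},
\qquad 0 = \tau_0 < \tau_1 < \dots < \tau_K = \infty,
\]
with log-likelihood
\[
\ell(\beta, \lambda, \tau) \;=\; \sum_{i=1}^{n} \Bigl[ \delta_i \bigl( x_i^\top \beta
+ \log \lambda_{k(t_i)} \bigr) \;-\; \exp(x_i^\top \beta) \sum_{k=1}^{K} \lambda_k \,
\Delta_{ik}(\tau) \Bigr],
\]
where \(k(t_i)\) indexes the interval containing \(t_i\), \(\delta_i\) is the event indicator, and \(\Delta_{ik}(\tau)\) is the time subject \(i\) spends in interval \(k\). The change points \(\tau_k\) enter as ordinary parameters, which is what distinguishes the model from a fixed-grid piecewise-exponential fit.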

19.
We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large their size, and that little is gained by incorporating such studies, unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions.

20.
The presence of immune elements (generating a cure fraction) in survival data is common. These cases are usually modeled by the standard mixture model. Here, we use an alternative approach based on defective distributions. Defective distributions are characterized by having density functions that integrate to values less than 1 when the domain of their parameters is different from the usual one. We use the Marshall–Olkin class of distributions to generalize two existing defective distributions, thereby generating two new defective distributions. We illustrate the distributions using three real data sets.
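The mechanism behind a defective distribution is easiest to see in the classic defective Gompertz model (used here purely as an illustration; it is not necessarily one of the two Marshall–Olkin generalizations introduced in the paper):
\[
S(t) \;=\; \exp\Bigl\{ -\tfrac{\beta}{\alpha}\bigl(e^{\alpha t} - 1\bigr) \Bigr\},
\qquad \beta > 0.
\]
For \(\alpha > 0\) this is the usual (proper) Gompertz survival function, but extending the parameter domain to \(\alpha < 0\) gives \(\lim_{t \to \infty} S(t) = \exp(\beta/\alpha) \in (0, 1)\); the density then integrates to less than one, and \(p = \exp(\beta/\alpha)\) plays the role of the cure fraction without introducing an explicit mixture.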
