Similar Documents
20 similar documents found (search time: 750 ms).
1.
We employ a hierarchical Bayesian method with exchangeable prior distributions to estimate and compare similar nondecreasing response curves. A Dirichlet process distribution is assigned to each of the response curves as a first-stage prior. A second-stage prior is then used to model the hyperparameters. We define parameters that will be used to compare the response curves. A Markov chain Monte Carlo method is applied to compute the resulting Bayesian estimates. To illustrate the methodology, we re-examine data from an experiment designed to test whether experimenter observation influences the ultimatum game. A major restriction of the original analysis was a shape constraint, which the present technique allows us to relax considerably. We also consider independent priors and use Bayes factors to compare various models.

2.
We develop Markov chain Monte Carlo methodology for Bayesian inference for non-Gaussian Ornstein–Uhlenbeck stochastic volatility processes. The approach introduced involves expressing the unobserved stochastic volatility process in terms of a suitable marked Poisson process. We introduce two specific classes of Metropolis–Hastings algorithms which correspond to different ways of jointly parameterizing the marked point process and the model parameters. The performance of the methods is investigated for different types of simulated data. The approach is extended to consider the case where the volatility process is expressed as a superposition of Ornstein–Uhlenbeck processes. We apply our methodology to the US dollar–Deutschmark exchange rate.
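The marked-Poisson construction can be sketched numerically: the volatility jumps upward at the event times of a Poisson process (the germs), with exponentially distributed marks as jump sizes, and decays exponentially between jumps. All rates and distributions below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5          # OU decay rate (assumed)
T, n = 10.0, 1000  # time horizon and grid size
t = np.linspace(0.0, T, n)

# Marked Poisson process: event times (germs) and jump sizes (marks).
n_jumps = rng.poisson(2.0 * T)               # jump intensity 2 per unit time (assumed)
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_sizes = rng.exponential(0.3, n_jumps)   # exponential marks (assumed)

# sigma^2(t) = sigma^2(0) e^{-lam t} + sum over jumps t_i <= t of e^{-lam (t - t_i)} jump_i
sigma2 = 0.2 * np.exp(-lam * t)
for ti, ji in zip(jump_times, jump_sizes):
    sigma2 += np.where(t >= ti, np.exp(-lam * (t - ti)) * ji, 0.0)
```

Plotting `t` against `sigma2` shows the characteristic sawtooth path of such a non-Gaussian OU volatility process: instantaneous upward jumps followed by exponential decay.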

3.
We develop a new methodology for determining the location and dynamics of brain activity from combined magnetoencephalography (MEG) and electroencephalography (EEG) data. The resulting inverse problem is ill‐posed and is one of the most difficult problems in neuroimaging data analysis. In our development we propose a solution that combines data from three different modalities: magnetic resonance imaging (MRI), MEG and EEG. We propose a new Bayesian spatial finite mixture model that builds on the mesostate‐space model developed by Daunizeau & Friston [Daunizeau and Friston, NeuroImage 2007; 38, 67–81]. Our new model incorporates two major extensions: (i) we combine EEG and MEG data and formulate a joint model for dealing with the two modalities simultaneously; (ii) we incorporate the Potts model to represent the spatial dependence in an allocation process that partitions the cortical surface into a small number of latent states termed mesostates. The cortical surface is obtained from MRI. We formulate the new spatiotemporal model and derive an efficient procedure for simultaneous point estimation and model selection based on the iterated conditional modes algorithm combined with local polynomial smoothing. The proposed method results in a novel estimator for the number of mixture components and is able to select active brain regions, which correspond to active variables in a high‐dimensional dynamic linear model. The methodology is investigated using synthetic data and simulation studies and then demonstrated on an application examining the neural response to the perception of scrambled faces. R software implementing the methodology, along with several sample datasets, is available at the following GitHub repository: https://github.com/v2south/PottsMix. The Canadian Journal of Statistics 47: 688–711; 2019 © 2019 Statistical Society of Canada

4.
Recently, several methodologies for geostatistical analysis of functional data have been proposed. All of them assume that the spatial functional process considered is stationary. In practice, however, we often have nonstationary functional data because there is an explicit spatial trend in the mean. Here, we propose a methodology that extends kriging predictors for functional data to the case where the mean function is not constant over the region of interest. We consider an approach based on the classical residual kriging method used in univariate geostatistics and propose a three-step procedure. First, a functional regression model is used to detrend the mean. Then we apply kriging methods for functional data to the regression residuals to predict a residual curve at an unsampled location. Finally, the prediction curve is obtained as the sum of the trend and the residual prediction. We apply the methodology to 21 salinity curves recorded at the Ciénaga Grande de Santa Marta estuary on the Caribbean coast of Colombia. A cross-validation analysis was carried out to assess the performance of the proposed methodology.
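As a minimal sketch of the three-step procedure, here is a scalar (non-functional) version of residual kriging in NumPy: detrend by least squares, krige the residuals with an ordinary kriging system, and add the two predictions. The linear trend, exponential covariance and its range parameter are assumptions for illustration, not the salinity application.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: 21 sites with a linear spatial trend plus residual noise.
sites = rng.uniform(0, 10, size=(21, 2))
trend_beta = np.array([1.0, 0.5, -0.3])       # assumed intercept and slopes
X = np.column_stack([np.ones(21), sites])
y = X @ trend_beta + rng.normal(0, 0.2, 21)

# Step 1: detrend the mean via ordinary least squares.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Step 2: ordinary kriging of the residuals at a new site,
# using an exponential covariance with assumed range 3.
def cov(h):
    return np.exp(-h / 3.0)

s0 = np.array([5.0, 5.0])
D = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
d0 = np.linalg.norm(sites - s0, axis=1)
# Kriging system with a Lagrange multiplier enforcing weights that sum to 1.
A = np.block([[cov(D), np.ones((21, 1))], [np.ones((1, 21)), np.zeros((1, 1))]])
b = np.concatenate([cov(d0), [1.0]])
w = np.linalg.solve(A, b)[:21]
resid_pred = w @ resid

# Step 3: final prediction = estimated trend + kriged residual.
y_pred = np.array([1.0, *s0]) @ beta_hat + resid_pred
```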

5.
We investigate simulation methodology for Bayesian inference in Lévy‐driven stochastic volatility (SV) models. Typically, Bayesian inference for such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user‐set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we consider a model consisting of a Heston model with an independent, additive variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fitted in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time‐discretization error, from the Bayesian SV model.

6.
In this paper, we consider the problem of estimating a single changepoint in a parameter‐driven model. The model – an extension of the Poisson regression model – accounts for serial correlation through a latent process incorporated in its mean function. Emphasis is placed on characterizing the changepoint through changes in the parameters of the model. The model is fully implemented within the Bayesian framework. We develop an RJMCMC algorithm for parameter estimation and model determination. The algorithm embeds well‐devised Metropolis–Hastings procedures for estimating, through data augmentation, both the missing values of the latent process and the changepoint. The methodology is illustrated using data on monthly counts of claimants collecting wage loss benefit for injuries in the workplace and an analysis of presidential uses of force in the USA.
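The changepoint idea can be illustrated in a much simpler setting than the paper's parameter-driven model: for independent Poisson counts with conjugate Gamma priors on the two rates, the posterior over the changepoint position is available in closed form, with no RJMCMC needed. The rates, priors and sample size below are assumptions for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, tau_true = 100, 60
# Poisson counts whose rate changes from 1 to 10 at the true changepoint.
y = np.concatenate([rng.poisson(1.0, tau_true), rng.poisson(10.0, n - tau_true)])

a, b = 1.0, 1.0  # Gamma(a, b) prior on each segment's Poisson rate (assumed)

def log_marginal(counts):
    # log marginal likelihood of iid Poisson counts under a Gamma(a, b) prior,
    # up to the y! terms (constant across all changepoint positions).
    s, m = counts.sum(), len(counts)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + m))

# Discrete uniform prior over changepoint positions k = 1, ..., n-1.
logp = np.array([log_marginal(y[:k]) + log_marginal(y[k:]) for k in range(1, n)])
post = np.exp(logp - logp.max())
post /= post.sum()
tau_hat = 1 + int(np.argmax(post))   # posterior mode of the changepoint
```

With so sharp a rate change, the posterior concentrates tightly around the true changepoint position.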

7.
The most common assumption in geostatistical modeling of malaria is stationarity, that is, spatial correlation is a function of the separation vector between locations. However, local factors (environmental or human-related activities) may influence geographical dependence in malaria transmission differently at different locations, introducing non-stationarity. Ignoring this characteristic in malaria spatial modeling may lead to inaccurate estimates of the standard errors for both the covariate effects and the predictions. In this paper, we develop a model based on random Voronoi tessellation that accounts for non-stationarity. In particular, the spatial domain is partitioned into sub-regions (tiles), a stationary spatial process is assumed within each tile, and between-tile correlation is taken into account. The number and configuration of the sub-regions are treated as random parameters in the model, and inference is made using reversible jump Markov chain Monte Carlo simulation. The methodology is applied to analyze malaria survey data from Mali and to produce a country-level smooth map of malaria risk.

8.
Inverse regression estimation for censored data
An inverse regression methodology for assessing predictor performance in the censored data setup is developed, along with inference procedures and a computational algorithm. The technique developed here allows for conditioning on the unobserved failure time along with a weighting mechanism that accounts for the censoring. The implementation is nonparametric and computationally fast. This provides an efficient methodological tool that can be used especially in cases where the usual modeling assumptions are not applicable to the data under consideration. It can also serve as a diagnostic tool in the model selection process. We provide theoretical justification of the consistency and asymptotic normality of the methodology. Simulation studies and two data analyses are provided to illustrate the practical utility of the procedure.
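The flavour of inverse regression can be conveyed by plain sliced inverse regression on uncensored data (the paper's contribution is the weighting mechanism that handles censoring, which is not reproduced here); the data-generating model below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.standard_normal((n, p))          # already standardized by construction
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Sliced inverse regression: slice on y, average X within each slice,
# then take the leading eigenvector of the weighted covariance of slice means.
order = np.argsort(y)
slices = np.array_split(order, 10)
means = np.array([X[s].mean(axis=0) for s in slices])
weights = np.array([len(s) / n for s in slices])
M = (means.T * weights) @ means          # sum_h w_h m_h m_h^T
evals, evecs = np.linalg.eigh(M)
direction = evecs[:, -1]                 # estimated effective-dimension direction

cosine = abs(direction @ beta)           # alignment with the true direction
```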

9.
We propose a competing risks approach to analyse customer behaviour in freemium products and services. The event of interest is when a customer starts to pay for additional features or functionalities. The observation of such an event may be preempted by an event in which the customer quits using the product before paying for and consuming the additional features or functionalities. One such freemium service category is online games. The Fine–Gray regression model is applied to online game player activity data to study how covariates affect the paying hazard. Some covariates are hypothesized to have different discrete effects at multiple change points, and we extend the model to allow for such change points in the analysis.

10.
This paper provides alternative methods for fitting symmetry and diagonal-parameters symmetry models to square tables having ordered categories. We demonstrate the implementation of the class of models discussed in Goodman (1979c) using PROC GENMOD in SAS. We also provide procedures for testing hypotheses involving model parameters. The methodology provided here can readily be used to fit the class of models discussed in Lawal and Upton (1995); if desired, composite models can be fitted. Two data sets, the 4 × 4 unaided distance vision data of 4,746 Japanese students (Tomizawa, 1985) and the 5 × 5 British social mobility data (Glass, 1954), are employed to demonstrate the fitting of these models. The results obtained are consistent with those of Goodman (1972, 1979c, 1986) and Tomizawa (1985, 1987).
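For the symmetry model, the fitted counts have the closed form m_ij = (n_ij + n_ji)/2, so a GENMOD-style fit can be checked directly without any iterative fitting. The table below is hypothetical, not the Tomizawa vision data or the Glass mobility data.

```python
import numpy as np

# Hypothetical 4 x 4 square contingency table with ordered categories.
n = np.array([[50, 10,  5,  2],
              [12, 40,  8,  3],
              [ 6,  9, 30,  7],
              [ 3,  4,  6, 20]], dtype=float)

# Symmetry model: fitted values are the symmetrized counts.
m = (n + n.T) / 2.0

# Pearson chi-square statistic with I(I-1)/2 degrees of freedom.
I = n.shape[0]
chi2 = ((n - m) ** 2 / m).sum()
df = I * (I - 1) // 2
```

A log-linear fit of the symmetry model (e.g. in PROC GENMOD) reproduces exactly these fitted values, so the closed form is a convenient check on the software output.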

11.
We consider data-generating structures that can be represented as a Markov switching nonlinear autoregressive model with skew-symmetric innovations, in which switching between the states is controlled by a hidden Markov chain. We propose semi-parametric estimators for the nonlinear functions of the proposed model based on a maximum likelihood (ML) approach and study sufficient conditions for geometric ergodicity of the process. An Expectation–Maximization-type optimization for obtaining the ML estimators is also presented. A simulation study and a real-world application are performed to illustrate and evaluate the proposed methodology.

12.
We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising, for example, in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced which allow sampling from the posterior distribution of the rate parameters and the Markov jump process, even in data-poor scenarios. The algorithms are illustrated by applying them to rate estimation in a model for prokaryotic auto-regulation and in the stochastic Oregonator, respectively.
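For a fully observed Markov jump process, the maximum likelihood estimator of a rate constant is the number of events of that type divided by the integrated hazard exposure. The sketch below simulates a simple immigration-death process with the Gillespie algorithm and recovers its two rates; the model and rate values are illustrative assumptions, not the paper's prokaryotic auto-regulation example.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_true, delta_true = 4.0, 0.5   # immigration and per-capita death rates (assumed)
t, T, x = 0.0, 200.0, 0
n_imm = n_death = 0
exposure_imm = exposure_death = 0.0  # integrated exposures for the two hazards

# Gillespie simulation: exponential waiting times, event type chosen by hazard ratio.
while True:
    h_imm, h_death = alpha_true, delta_true * x
    h_tot = h_imm + h_death
    dt = rng.exponential(1.0 / h_tot)
    if t + dt > T:                   # truncate the last holding interval at T
        exposure_imm += T - t
        exposure_death += x * (T - t)
        break
    exposure_imm += dt
    exposure_death += x * dt
    t += dt
    if rng.uniform() < h_imm / h_tot:
        x += 1; n_imm += 1
    else:
        x -= 1; n_death += 1

alpha_hat = n_imm / exposure_imm        # MLE: events / exposure
delta_hat = n_death / exposure_death
```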

13.
We propose a methodology to analyse data arising from a curve that, over its domain, switches among J states. We consider a sequence of response variables, where each response y depends on a covariate x according to an unobserved state z. The states form a stochastic process with possible values j = 1, …, J. If z equals j, the expected response of y is one of J unknown smooth functions evaluated at x. We call this model a switching nonparametric regression model. We develop an Expectation–Maximisation algorithm to estimate the parameters of the latent state process and the functions corresponding to the J states. We also obtain standard errors for the parameter estimates of the state process. We conduct simulation studies to analyse the frequentist properties of our estimates. We also apply the proposed methodology to the well-known motorcycle dataset, treating the data as coming from more than one simulated accident run with unobserved run labels.

14.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related with the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.
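A nonhomogeneous Poisson point pattern on a bounded rectangle can be simulated by Lewis-Shedler thinning: generate homogeneous candidates at the maximal intensity and retain each with probability intensity/maximum. The intensity surface below is an assumption for illustration, unrelated to the paper's Beta-mixture model.

```python
import numpy as np

rng = np.random.default_rng(5)

def intensity(x, y):
    # Assumed smooth intensity surface peaking near (0.3, 0.7).
    return 200.0 * np.exp(-8.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))

lam_max = 200.0  # dominating intensity for the thinning step

# Homogeneous candidate points on the unit square at rate lam_max.
n_cand = rng.poisson(lam_max)
cand = rng.uniform(0.0, 1.0, size=(n_cand, 2))

# Thinning: keep each candidate with probability intensity / lam_max.
keep = rng.uniform(0.0, 1.0, n_cand) < intensity(cand[:, 0], cand[:, 1]) / lam_max
points = cand[keep]
```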

15.
Babies born alive weighing under 2,500 g or with a gestational age under 37 weeks are often inadequately developed and have elevated risks of infant mortality, congenital malformations, mental retardation, and other physical and neurological impairments. In this paper, we model birth weight as the first hitting time (FHT) of a birthing boundary by a Wiener process representing fetal development. We associate the parameters of the process and boundary with covariates describing maternal characteristics and the birthing environment using a relatively new regression methodology called threshold regression. Two FHT models for birth weight are developed: one is a mixture model and the other a competing risks model. These models are tested in a case demonstration using a 4% systematic sample of the more than four million live births in the United States in 2002. An extensive data set for these births was provided by the National Center for Health Statistics. The focus of this paper is on the conceptual framework, models and methodology. A full empirical study is deferred to a later occasion.
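The FHT of a fixed boundary by a Wiener process with positive drift follows an inverse Gaussian distribution with mean a/μ, which can be sampled exactly with the Michael-Schucany-Haas transform. The boundary, drift and diffusion values below are illustrative assumptions, not the threshold-regression models of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
a, drift, sigma = 2.0, 1.0, 1.0     # boundary, drift, diffusion coefficient (assumed)
mu_ig = a / drift                    # mean of the inverse Gaussian FHT
lam = a ** 2 / sigma ** 2            # shape parameter of the inverse Gaussian

# Michael-Schucany-Haas sampler for the inverse Gaussian distribution.
n = 20000
y = rng.standard_normal(n) ** 2
x = (mu_ig + mu_ig ** 2 * y / (2 * lam)
     - mu_ig / (2 * lam) * np.sqrt(4 * mu_ig * lam * y + mu_ig ** 2 * y ** 2))
u = rng.uniform(size=n)
fht = np.where(u <= mu_ig / (mu_ig + x), x, mu_ig ** 2 / x)
```

The sample mean of `fht` should be close to a/μ = 2, the theoretical mean first hitting time.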

16.
In the regression analysis of time series of event counts, it is of interest to account for the serial dependence that is likely to be present in such data, as well as for nonlinear effects of the predictors on the expected event counts as functions of some underlying variables. We thus develop a Poisson autoregressive varying-coefficient model, which introduces autocorrelation through a latent process and allows the regression coefficients to vary nonparametrically as functions of the underlying variables. The nonparametric functions for the varying regression coefficients are estimated with data-driven basis selection, thereby avoiding overfitting and adapting to curvature variation. An efficient posterior sampling scheme is devised to analyse the proposed model. The proposed methodology is illustrated using simulated data and daily homicide data from Cali, Colombia.

17.
In this article, we formulate a semiparametric model for counting processes in which the effect of covariates is to transform the time scale for a baseline rate function. We assume an arbitrary dependence structure for the counting process and propose a class of estimating equations for the regression parameters. Asymptotic results for these estimators are derived. In addition, goodness of fit methods for assessing the adequacy of the accelerated rates model are proposed. The finite-sample behavior of the proposed methods is examined in simulation studies, and data from a chronic granulomatous disease study are used to illustrate the methodology.

18.
In this paper, we propose a new methodology for solving stochastic inversion problems through computer experiments, where the stochasticity is driven by a functional random variable. This study is motivated by an automotive application. In this context, the simulator code takes a double set of simulation inputs: deterministic control variables and functional uncertain variables. This framework is characterized by two features. The first is the high computational cost of simulations. The second is that the probability distribution of the functional input is known only through a finite set of realizations. In our context, the inversion problem is formulated by taking the expectation over the functional random variable. We aim to solve this problem by evaluating the model on a design whose adaptive construction combines the stepwise uncertainty reduction methodology with a strategy for efficient expectation estimation. Two greedy strategies are introduced to sequentially estimate the expectation over the functional uncertain variable by adaptively selecting curves from the initial set of realizations. Both strategies use functional principal component analysis (FPCA) as a dimensionality reduction technique, assuming that the realizations of the functional input are independent realizations of the same continuous stochastic process. The first strategy is based on a greedy approach to functional data-driven quantization, while the second is linked to the notion of space-filling design. For each point of the design built in the reduced space, we select the corresponding curve from the sample of available curves, thus guaranteeing the robustness of the procedure to dimension reduction. The whole methodology is illustrated and calibrated on an analytical example. It is then applied to the automotive industrial test case, where we aim to identify the set of control parameters that meets a vehicle's pollutant emission standards.
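The functional-PCA reduction step can be sketched as an SVD of the centered matrix of discretized curves; the sample of curves below (two smooth modes plus small noise) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_curves, n_grid = 30, 100
t = np.linspace(0, 1, n_grid)

# Assumed sample of functional inputs: random combinations of two smooth modes plus noise.
scores = rng.standard_normal((n_curves, 2))
modes = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
curves = scores @ modes + 0.01 * rng.standard_normal((n_curves, n_grid))

# Functional PCA via SVD of the centered curve matrix.
mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()       # proportion of variance per component

# Reduced two-dimensional representation: project each curve onto the leading components.
fpc_scores = (curves - mean_curve) @ Vt[:2].T
```

Design points chosen in this reduced score space can then be mapped back to the nearest available curve, as in the paper's selection step.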

19.
Deterministic computer simulations are often used as replacements for complex physical experiments. Although less expensive than physical experimentation, computer codes can still be time-consuming to run. An effective strategy for exploring the response surface of a deterministic simulator is to use an approximation to the computer code, such as a Gaussian process (GP) model, coupled with a sequential sampling strategy for choosing the design points used to build the GP model. The ultimate goal of such studies is often the estimation of specific features of interest of the simulator output, such as the maximum, minimum, or a level set (contour). Before approximating such features with the GP model, sufficient runs of the computer simulator must be completed. Sequential designs with an expected improvement (EI) design criterion can yield good estimates of the features with a minimal number of runs. The challenge is that the expected improvement function itself is often multimodal and difficult to maximize. We develop branch and bound algorithms for efficiently maximizing the EI function in specific problems, including the simultaneous estimation of a global maximum and minimum, and the estimation of a contour. These branch and bound algorithms outperform other optimization strategies such as genetic algorithms, and can lead to significantly more accurate estimation of the features of interest.
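For minimization, the EI criterion at a point with GP posterior mean μ and standard deviation s has the closed form EI = (f_min − μ)Φ(z) + sφ(z) with z = (f_min − μ)/s. A minimal stdlib sketch of the criterion itself (the paper's branch and bound maximization is not reproduced here):

```python
import math

def expected_improvement(mu, sd, f_min):
    """EI for minimization at a point with GP posterior mean mu and std dev sd."""
    if sd <= 0.0:
        return max(f_min - mu, 0.0)   # degenerate (noise-free, already-observed) case
    z = (f_min - mu) / sd
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return (f_min - mu) * Phi + sd * phi

# A point predicted well below the current best has high EI; one far above has almost none.
ei_promising = expected_improvement(mu=0.0, sd=0.1, f_min=1.0)
ei_poor = expected_improvement(mu=2.0, sd=0.1, f_min=1.0)
```

EI is always nonnegative, and the EI surface inherits the multimodality that motivates the branch and bound approach.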

20.
This is probably the first paper to discuss likelihood inference for a random set using a germ‐grain model in which the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation‐based maximum likelihood inference and the effect of specifying different reference Poisson models.
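For the Boolean (marked Poisson) reference model with fixed-radius discs, the coverage probability of a point is 1 − exp(−λπr²), which a simulation can check. The intensity and radius below are assumptions, and germs are generated on a window enlarged by one radius as a simple edge correction.

```python
import numpy as np

rng = np.random.default_rng(8)
lam, r = 100.0, 0.05      # germ intensity and fixed disc radius (assumed)

# Germs on an enlarged window so discs overlapping [0,1]^2 are not missed at the edges.
lo, hi = -r, 1.0 + r
n_germs = rng.poisson(lam * (hi - lo) ** 2)
germs = rng.uniform(lo, hi, size=(n_germs, 2))

# Estimate the covered area fraction of the unit square on a regular grid.
g = np.linspace(0.0, 1.0, 100)
gx, gy = np.meshgrid(g, g)
pts = np.column_stack([gx.ravel(), gy.ravel()])
d2 = ((pts[:, None, :] - germs[None, :, :]) ** 2).sum(axis=2)
covered = (d2 <= r ** 2).any(axis=1)
frac_hat = covered.mean()

# Boolean-model coverage probability for fixed-radius discs.
frac_theory = 1.0 - np.exp(-lam * np.pi * r ** 2)
```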
