Similar documents (20 results)
1.
We will pursue a Bayesian nonparametric approach in the hierarchical mixture modelling of lifetime data in two situations: density estimation, when the distribution is a mixture of parametric densities with a nonparametric mixing measure, and accelerated failure time (AFT) regression modelling, when the same type of mixture is used for the distribution of the error term. The Dirichlet process is a popular choice for the mixing measure, yielding a Dirichlet process mixture model for the error; as an alternative, we also allow the mixing measure to be equal to a normalized inverse-Gaussian prior, built from normalized inverse-Gaussian finite dimensional distributions, as recently proposed in the literature. Markov chain Monte Carlo techniques will be used to estimate the predictive distribution of the survival time, along with the posterior distribution of the regression parameters. A comparison between the two models will be carried out on the grounds of their predictive power and their ability to identify the number of components in a given mixture density.
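As a minimal sketch of the hierarchical structure described above (our notation, not taken from the paper), the AFT formulation with a nonparametric error mixture can be written as

```latex
\log T_i = x_i^{\top}\beta + \varepsilon_i, \qquad
\varepsilon_i \mid G \;\overset{\text{iid}}{\sim}\; \int k(\,\cdot \mid \theta)\,\mathrm{d}G(\theta), \qquad
G \sim \mathrm{DP}(\alpha, G_0) \ \text{or a normalized inverse-Gaussian prior},
```

where $k(\cdot \mid \theta)$ is a parametric density; density estimation corresponds to dropping the regression term, and MCMC targets the joint posterior of $\beta$ and $G$.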

2.
We analyse MCMC chains focusing on how to find simulation parameters that give good mixing for discrete-time, Harris ergodic Markov chains on a general state space X having invariant distribution π. The analysis uses an upper bound for the variance of the probability estimate. For each simulation parameter set, the bound is estimated from an MCMC chain using recurrence intervals. Recurrence intervals are a generalization of recurrence periods for discrete Markov chains. This makes it easy to compare the mixing properties for different simulation parameters. The paper gives general advice on how to improve the mixing of MCMC chains and a new methodology for finding an optimal acceptance rate for the Metropolis–Hastings algorithm. Several examples, both toy examples and large complex ones, illustrate how to apply the methodology in practice. We find that in some of these examples the optimal acceptance rate is smaller than the general recommendation in the literature.
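One illustrative reading of the recurrence-interval idea (a simplification of ours, not the authors' exact variance bound): record the gaps between successive visits of the chain to a fixed reference set and compare them across tunings, with highly variable gaps indicating poor mixing.

```python
import numpy as np

def recurrence_intervals(chain, in_A):
    """Gaps (in iterations) between successive visits of the chain to a reference set A."""
    visits = np.flatnonzero([in_A(x) for x in chain])
    return np.diff(visits)

def rw_metropolis(logpi, x0, n, step, rng):
    """Random-walk Metropolis sampler for a 1-D log-density."""
    x, lp, out = x0, logpi(x0), np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = logpi(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

rng = np.random.default_rng(0)
logpi = lambda x: -0.5 * x**2                       # standard normal target
for step in (0.1, 2.4):                             # over-cautious vs. reasonable step size
    chain = rw_metropolis(logpi, 0.0, 20_000, step, rng)
    gaps = recurrence_intervals(chain, lambda x: abs(x) < 0.5)
    print(step, gaps.mean(), gaps.var())            # far more variable gaps for the poorly mixing chain
```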

3.
Recurrent event data from a long single realization are widely encountered in point process applications. Modeling and analyzing such data differ from the analysis of independent and identically distributed short sequences, and the development of statistical methods requires careful consideration of the underlying dependence structure of the long single sequence. In this paper, we propose a semiparametric additive rate model for a modulated renewal process, and develop an estimating equation approach for the model parameters. The asymptotic properties of the resulting estimators are established by applying the limit theory for stationary mixing sequences. A block-based bootstrap procedure is presented for the variance estimation. Simulation studies are conducted to assess the finite-sample performance of the proposed estimators. An application to a data set from a cardiovascular mortality study is provided.

4.
A gap in the proof of a non-stationary mixingale invariance principle is identified and fixed by introducing a skipped subsampling of a partial sum process and letting the skipped interval vanish asymptotically at an appropriate rate as the sample size increases. The corrected proof produces a mixingale limit theorem in the form of a mixing convergence in law, occurring jointly with the stable convergence in law relative to the same σ-field with respect to which they are stable and mixing. The applicability of established results to a high-frequency estimation of the quadratic variation of a financial price process is discussed.

5.
Second-order response surfaces are often fitted to the results of designed experiments, and the canonical form of such surfaces can greatly help both in interpreting the results and in deciding what action to take on the process under study. A mixing process on pastry dough is described in which it is desired to simplify the canonical form to make the control of the process more economical, by basing it on only two of the three factors. We give examples where a simplification is possible with minimal loss of accuracy and where it can be seriously misleading, and we outline the features of the response surface that lead to these two situations. A method of improving the simplification by recalculating the constrained canonical axis is proposed. These methods ensure that the mixing process can be controlled by using only two factors without seriously lowering the quality of the pastry.
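For context, the standard canonical analysis behind this (generic response-surface notation, not details specific to the paper) writes the fitted second-order surface and its canonical form as

```latex
\hat{y}(x) = b_0 + x^{\top} b + x^{\top} B x,
\qquad
\hat{y} = \hat{y}_s + \sum_{i=1}^{k} \lambda_i w_i^{2},
```

where $x_s = -\tfrac{1}{2}B^{-1}b$ is the stationary point, the $\lambda_i$ are the eigenvalues of $B$, and the $w_i$ are coordinates along the corresponding eigenvectors. Simplifying control to two factors amounts to ignoring the canonical axis whose eigenvalue contributes least, which is safe only when the surface is nearly flat along that axis over the operating region.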

6.
Independent component analysis (ICA) is a popular blind source separation technique used in many scientific disciplines. Current ICA approaches have focused on developing efficient algorithms under specific ICA models, such as instantaneous or convolutive mixing conditions, intrinsically assuming temporal independence or autocorrelation of the sources. In practice, the true model is not known and different ICA algorithms can produce very different results. Although it is critical to choose an ICA model, there has not been enough research done on evaluating mixing models and assumptions, and how the associated algorithms may perform under different scenarios. In this paper, we investigate the performance of multiple ICA algorithms under various mixing conditions. We also propose a convolutive ICA algorithm for echoic mixing cases. Our simulation studies show that the performance of ICA algorithms is highly dependent on mixing conditions and temporal independence of the sources. Most instantaneous ICA algorithms fail to separate autocorrelated sources, while convolutive ICA algorithms depend highly on the model specification and approximation accuracy of unmixing filters.
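A minimal sketch of the instantaneous-mixing case using scikit-learn's FastICA (toy sources and mixing matrix of our own choosing; the convolutive/echoic case studied in the paper needs unmixing filters that this sketch does not attempt):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 5000
t = np.arange(n)
# Two non-Gaussian, temporally independent sources
S = np.column_stack([np.sign(np.sin(0.02 * t)),   # square-wave-like source
                     rng.laplace(size=n)])        # heavy-tailed noise source
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                        # instantaneous mixing matrix
X = S @ A.T                                       # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                      # recovered sources, up to sign, scale and order
```

With autocorrelated, near-Gaussian sources this instantaneous algorithm would be expected to struggle, which is one of the scenarios the paper evaluates.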

7.
For each of the five Dutch coinage denominations, a transfer-function model is estimated. The output variables are monthly observations of coins in circulation. Two input variables represent transaction flows; all other inputs are step functions, representing the occurrence of interventions. Using the method of cross-correlating the residuals of the individual equations, a multivariate transfer-function model is constructed and estimated. Next, Monte Carlo simulation is applied to derive expectations and variances of the yearly addition to the stock of coins until 1996. Our results shed light on some aspects of a problem faced by the Dutch State Mint.

8.
A Study of the Characteristics of Bidders' Bid Data in Online Auctions and Methods for Their Analysis
In traditional statistical analysis, researchers face three forms of numerical data: cross-sectional data, time series data, and pooled (panel) data. These types of data are discrete, evenly spaced, and uniformly dense, and they are the principal objects of analysis in traditional descriptive and inferential statistics. However, data collected from auction websites, such as bidders' bids, do not share these characteristics and pose a challenge to traditional statistical methods. It is therefore necessary to explain the mechanism generating online auction data and analyse its characteristics in terms of data volume, the mixed nature of the data, unequal spacing, and data density, and to present, using real online auction records, methods and procedures for analysing such data.

9.
The histogram density estimator is intuitive, easy to compute, and widely adopted. Especially in today's big-data environment, computational cost receives more attention, and estimators that are cheap to compute are preferred. Consequently, many scholars have been interested in estimators based on the histogram technique. For a strong mixing process, this article studies the uniform strong consistency of the histogram density estimator and its convergence rate. Our conditions on the mixing coefficient and the bin width are very mild.
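A minimal sketch of the estimator under study, evaluated at a single point (the i.i.d. data and the bin-width rate below are our illustrative choices, not the paper's mixing setup or conditions):

```python
import numpy as np

def histogram_density(x, data, bin_width):
    """Histogram density estimate: (# observations in the bin containing x) / (n * bin_width)."""
    left = np.floor(x / bin_width) * bin_width                # left edge of the bin containing x
    count = np.sum((data >= left) & (data < left + bin_width))
    return count / (len(data) * bin_width)

data = np.random.default_rng(0).standard_normal(10_000)      # i.i.d. stand-in; the paper allows strong mixing
h = len(data) ** (-1 / 3)                                     # a common bin-width rate, h ~ n^(-1/3)
print(histogram_density(0.0, data, h))                        # roughly 1/sqrt(2*pi) ≈ 0.399
```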

10.
We consider the problem of finding a pair of irregular boxes from a set of n boxes using a balance scale. One irregular box is heavier and the other lighter than a regular box, but the total weight of the two irregular boxes is the same as the total weight of two regular boxes. Let N(w) denote the maximum number of boxes w weighings can handle. We give a weighing scheme such that N(2t) ≥ 3^t for t ≥ 2 and N(2t + 1) ≥ 5·3^(t−1) for t ≥ 1. The N(2t) result meets the information-theoretic bound and hence is the best possible.
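A sketch of the standard counting argument behind the information-theoretic bound (our reconstruction, not the paper's proof): each weighing has three outcomes, so 2t weighings distinguish at most $3^{2t}$ cases, while the unknown ordered pair (heavy box, light box) can be any of $n(n-1)$ possibilities; hence

```latex
n(n-1) \le 3^{2t} \;\Longrightarrow\; n \le 3^{t},
```

so $N(2t) \le 3^{t}$, and the scheme above attains this bound.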

11.
Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling
The normal inverse Gaussian distribution is defined as a variance-mean mixture of a normal distribution with the inverse Gaussian as the mixing distribution. The distribution determines a homogeneous Lévy process, and this process is representable through subordination of Brownian motion by the inverse Gaussian process. The canonical, Lévy type, decomposition of the process is determined. As a preparation for developments in the latter part of the paper, the connection of the normal inverse Gaussian distribution to the classes of generalized hyperbolic and inverse Gaussian distributions is briefly reviewed. Then a discussion is begun of the potential of the normal inverse Gaussian distribution and Lévy process for modelling and analysing statistical data, with particular reference to extensive sets of observations from turbulence and from finance. These areas of application imply a need for extending the inverse Gaussian Lévy process so as to accommodate certain, frequently observed, temporal dependence structures. Some extensions, of the stochastic volatility type, are constructed via an observation-driven approach to state space modelling. At the end of the paper generalizations to multivariate settings are indicated.
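A minimal simulation sketch of the variance-mean mixture representation (one common (α, β, μ, δ) parametrization; our code, not the paper's):

```python
import numpy as np

def sample_nig(alpha, beta, mu, delta, size, rng):
    """Normal inverse Gaussian draws via X | V ~ N(mu + beta*V, V), with V inverse Gaussian."""
    gamma = np.sqrt(alpha**2 - beta**2)
    V = rng.wald(mean=delta / gamma, scale=delta**2, size=size)   # inverse Gaussian mixing variable
    return mu + beta * V + np.sqrt(V) * rng.standard_normal(size)

rng = np.random.default_rng(0)
x = sample_nig(alpha=2.0, beta=0.5, mu=0.0, delta=1.0, size=100_000, rng=rng)
# Sample moments should be near the theoretical mean mu + delta*beta/gamma
# and standard deviation sqrt(delta * alpha**2 / gamma**3).
print(x.mean(), x.std())
```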

12.
By studying the deviations between uniform empirical and quantile processes (the so-called Bahadur-Kiefer representations) of a stationary sequence in properly weighted sup-norm metrics, we find a general approach to obtaining weighted results for uniform quantile processes of stationary sequences. Consequently we are able to obtain weak convergence for weighted uniform quantile processes of stationary mixing and associated sequences. Further, by studying the sup-norm distance of a general quantile process from its corresponding uniform quantile process, we find that information at the two end points of the uniform quantile process can be so utilized that this weighted sup-norm distance converges in probability to zero under the so-called Csörgő–Révész conditions. This enables us to obtain weak convergence for weighted general quantile processes of stationary mixing and associated sequences.

13.
Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that they target marginally the correct invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis, which has also been studied but to a lesser extent. The present article provides a further characterisation of the noisy algorithm, with a focus on fundamental stability properties like positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis–Hastings chain are given, as well as convergence of the invariant distribution towards the true target distribution.
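A toy contrast between the two chains discussed above (our illustrative noisy density estimator and random-walk proposal, not the article's examples): the pseudo-marginal chain recycles the density estimate at the current state until a move is accepted, while the noisy chain refreshes it at every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_hat(x, N=5):
    """Unbiased noisy estimate of an unnormalised N(0,1) density (mean-one multiplicative noise)."""
    return np.exp(-0.5 * x**2) * rng.lognormal(-0.5 * 0.3**2, 0.3, size=N).mean()

def mh(n_iter, noisy):
    x, u = 0.0, pi_hat(0.0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        if noisy:                        # noisy algorithm: re-estimate at the current state too
            u = pi_hat(x)
        xp = x + rng.standard_normal()   # symmetric random-walk proposal
        up = pi_hat(xp)
        if rng.random() < up / u:
            x, u = xp, up                # pseudo-marginal: estimate at x is carried forward
        out[i] = x
    return out

chain_pm = mh(20_000, noisy=False)       # exact invariant marginal, but can stick on lucky estimates
chain_noisy = mh(20_000, noisy=True)     # typically mixes better; invariant law only approximates the target
```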

14.
Many epidemic models approximate social contact behavior by assuming random mixing within mixing groups (e.g., homes, schools, and workplaces). The effect of more realistic social network structure on estimates of epidemic parameters is an open area of exploration. We develop a detailed statistical model to estimate the social contact network within a high school using friendship network data and a survey of contact behavior. Our contact network model includes classroom structure, longer durations of contacts to friends than non-friends and more frequent contacts with friends, based on reports in the contact survey. We performed simulation studies to explore which network structures are relevant to influenza transmission. These studies yield two key findings. First, we found that the friendship network structure important to the transmission process can be adequately represented by a dyad-independent exponential random graph model (ERGM). This means that individual-level sampled data is sufficient to characterize the entire friendship network. Second, we found that contact behavior was adequately represented by a static rather than dynamic contact network. We then compare a targeted antiviral prophylaxis intervention strategy and a grade closure intervention strategy under random mixing and network-based mixing. We find that random mixing overestimates the effect of targeted antiviral prophylaxis on the probability of an epidemic when the probability of transmission in 10 minutes of contact is less than 0.004 and underestimates it when this transmission probability is greater than 0.004. We found the same pattern for the final size of an epidemic, with a threshold transmission probability of 0.005. We also find random mixing overestimates the effect of a grade closure intervention on the probability of an epidemic and final size for all transmission probabilities. Our findings have implications for policy recommendations based on models assuming random mixing, and can inform further development of network-based models.

15.
We present a Bayesian method of ion channel analysis and apply it to a simulated data set. An alternating renewal process prior is assigned to the signal, and an autoregressive process is fitted to the noise. After choosing model hyperconstants to yield 'uninformative' priors on the parameters, the joint posterior distribution is computed by using the reversible jump Markov chain Monte Carlo method. A novel form of simulated tempering is used to improve the mixing of the original sampler.

16.
Variance dispersion graphs have become a popular tool in aiding the choice of a response surface design. Often differences in response from some particular point, such as the expected position of the optimum or standard operating conditions, are more important than the response itself. We describe two examples from food technology. In the first, an experiment was conducted to find the levels of three factors which optimized the yield of valuable products enzymatically synthesized from sugars and to discover how the yield changed as the levels of the factors were changed from the optimum. In the second example, an experiment was conducted on a mixing process for pastry dough to discover how three factors affected a number of properties of the pastry, with a view to using these factors to control the process. We introduce the difference variance dispersion graph (DVDG) to help in the choice of a design in these circumstances. The DVDG for blocked designs is developed and the examples are used to show how the DVDG can be used in practice. In both examples a design was chosen by using the DVDG, as well as other properties, and the experiments were conducted and produced results that were useful to the experimenters. In both cases the conclusions were drawn partly by comparing responses at different points on the response surface.

17.
We propose a method for the analysis of a spatial point pattern, which is assumed to arise as a set of observations from a spatial nonhomogeneous Poisson process. The spatial point pattern is observed in a bounded region, which, for most applications, is taken to be a rectangle in the space where the process is defined. The method is based on modeling a density function, defined on this bounded region, that is directly related with the intensity function of the Poisson process. We develop a flexible nonparametric mixture model for this density using a bivariate Beta distribution for the mixture kernel and a Dirichlet process prior for the mixing distribution. Using posterior simulation methods, we obtain full inference for the intensity function and any other functional of the process that might be of interest. We discuss applications to problems where inference for clustering in the spatial point pattern is of interest. Moreover, we consider applications of the methodology to extreme value analysis problems. We illustrate the modeling approach with three previously published data sets. Two of the data sets are from forestry and consist of locations of trees. The third data set consists of extremes from the Dow Jones index over a period of 1303 days.
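A rough sketch of the kind of intensity surface involved (a finite mixture with a product-of-Betas kernel standing in for the paper's bivariate Beta kernel and Dirichlet process mixture; illustrative only, with made-up parameters and no posterior simulation):

```python
import numpy as np
from scipy.stats import beta

def intensity(s, gamma, weights, a1, b1, a2, b2):
    """lambda(s) = gamma * sum_k w_k * Beta(s1; a1_k, b1_k) * Beta(s2; a2_k, b2_k),
    for s in the unit square (the observation rectangle rescaled to [0, 1]^2)."""
    s1, s2 = s
    dens = sum(w * beta.pdf(s1, a, b) * beta.pdf(s2, c, d)
               for w, a, b, c, d in zip(weights, a1, b1, a2, b2))
    return gamma * dens

# Two-component illustration with clusters near (0.25, 0.25) and (0.75, 0.75)
print(intensity((0.3, 0.3), gamma=200.0,
                weights=[0.5, 0.5],
                a1=[5, 15], b1=[15, 5],
                a2=[5, 15], b2=[15, 5]))
```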

18.
The purpose of this study is to approximate and identify infinite scale mixtures of normals (SMN). A new method for approximating any infinite SMN with a known mixing measure by a finite SMN is presented. In the new method, the modulus of continuity of the normal family as a function of the scale is used to discretize the mixing measure. This method is used to approximate univariate and multivariate SMN with mean 0. In the multivariate case, two different methods are used to approximate the infinite SMN. Several results related to SMN are proved and other known ones are presented. For example, SMN are characterized by their corresponding Laplace transforms.
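A minimal sketch of the finite-approximation idea (here the mixing measure is discretized on a plain uniform grid over the variance, not via the modulus-of-continuity construction the paper uses): a Student-t density, which is an infinite SMN with an inverse-gamma mixing measure, is approximated by a finite mixture of normals.

```python
import numpy as np
from scipy.stats import norm, gamma as gamma_dist, t

def finite_smn_pdf(x, scales, weights):
    """Finite scale mixture of normals: sum_j w_j * N(x; 0, scales_j^2)."""
    return sum(w * norm.pdf(x, scale=s) for w, s in zip(weights, scales))

nu = 4                                           # Student-t degrees of freedom
grid = np.linspace(0.05, 40, 800)                # grid of variances discretizing the mixing measure
# Inverse-gamma(nu/2, nu/2) density of the variance, via the gamma density of its reciprocal
mix = gamma_dist.pdf(1 / grid, a=nu / 2, scale=2 / nu) / grid**2
weights = mix / mix.sum()                        # normalized cell masses
scales = np.sqrt(grid)

print(finite_smn_pdf(1.0, scales, weights), t.pdf(1.0, df=nu))   # the two values should nearly agree
```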

19.
Particle MCMC involves using a particle filter within an MCMC algorithm. For inference of a model which involves an unobserved stochastic process, the standard implementation uses the particle filter to propose new values for the stochastic process, and MCMC moves to propose new values for the parameters. We show how particle MCMC can be generalised beyond this. Our key idea is to introduce new latent variables. We then use the MCMC moves to update the latent variables, and the particle filter to propose new values for the parameters and stochastic process given the latent variables. A generic way of defining these latent variables is to model them as pseudo-observations of the parameters or of the stochastic process. By choosing the amount of information these latent variables have about the parameters and the stochastic process we can often improve the mixing of the particle MCMC algorithm by trading off the Monte Carlo error of the particle filter and the mixing of the MCMC moves. We show that using pseudo-observations within particle MCMC can improve its efficiency in certain scenarios: dealing with initialisation problems of the particle filter; speeding up the mixing of particle Gibbs when there is strong dependence between the parameters and the stochastic process; and enabling further MCMC steps to be used within the particle filter.

20.
A central limit theorem is provided for the least squares estimates of the autoregressive parameters in an ARIMA process with a strong mixing moving average part.
