Similar Documents
Found 20 similar documents.
1.
We consider a bootstrap method for Markov chains where the original chain is broken into a (random) number of cycles based on an atom (regeneration point) and the bootstrap scheme resamples from these cycles. We investigate the asymptotic accuracy of this method for the case of a sum (or a sample mean) related to the Markov chain. Under some standard moment conditions, the method is shown to be at least as good as the normal approximation, and better (second-order accurate) in the case of nonlattice summands. We give three examples to illustrate the applicability of our results.
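The cycle-resampling idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' procedure: the two-state chain, the choice of state 0 as the atom, and the stopping rule for the bootstrap series length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def regenerative_bootstrap_mean(chain, atom, n_boot=1000):
    """Bootstrap the sample mean of a Markov chain by resampling
    regeneration cycles delimited by successive visits to `atom`."""
    hits = np.flatnonzero(chain == atom)
    if len(hits) < 2:
        raise ValueError("need at least two visits to the atom")
    # split the observed path into complete cycles between visits
    cycles = [chain[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]
    n = sum(len(c) for c in cycles)
    means = []
    for _ in range(n_boot):
        total, length = 0.0, 0
        # resample whole cycles with replacement until the bootstrap
        # series is at least as long as the original cycle portion
        while length < n:
            c = cycles[rng.integers(len(cycles))]
            total += c.sum()
            length += len(c)
        means.append(total / length)
    return np.asarray(means)

# simulate a two-state chain (state 0 serves as the atom)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
x = np.zeros(2000, dtype=int)
for t in range(1, len(x)):
    x[t] = rng.choice(2, p=P[x[t - 1]])
boot = regenerative_bootstrap_mean(x, atom=0)
print(boot.mean(), boot.std())
```

Because every visit to the atom is a regeneration time, the cycles are i.i.d., which is what justifies resampling them as exchangeable blocks.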

2.
The following paper is dedicated to a special class of stationary Markov chains. The transition probabilities are constructed from bivariate distribution functions of the Morgenstern type. These Markov chains are defined by their stationary distribution and a parameter a controlling the correlation between succeeding values of the chain. Relevant properties of the Markov chain are discussed. Several estimators of the parameter a are studied; the maximum likelihood estimator is compared with a simple estimator.
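A Morgenstern-type (Farlie-Gumbel-Morgenstern copula) Markov chain can be simulated by inverting the conditional copula distribution, which is quadratic in the next value. The sketch below assumes Uniform(0,1) stationary marginals for concreteness; the parameter value a = 0.9 is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fgm_markov_chain(n, a, rng):
    """Stationary Markov chain with Uniform(0,1) marginals whose
    transitions come from the Farlie-Gumbel-Morgenstern copula
    C(u, v) = u*v*(1 + a*(1-u)*(1-v)), |a| <= 1.
    The conditional cdf given U=u is F(v|u) = v + a*(1-2u)*v*(1-v),
    inverted explicitly since it is quadratic in v."""
    u = np.empty(n)
    u[0] = rng.uniform()
    for t in range(1, n):
        b = a * (1.0 - 2.0 * u[t - 1])
        w = rng.uniform()
        if abs(b) < 1e-12:
            u[t] = w  # b = 0 reduces to independent sampling
        else:
            u[t] = ((1 + b) - np.sqrt((1 + b) ** 2 - 4 * b * w)) / (2 * b)
    return u

x = fgm_markov_chain(5000, a=0.9, rng=rng)
# for uniform marginals the lag-1 correlation of an FGM pair is a/3
print(np.corrcoef(x[:-1], x[1:])[0, 1])
```

The bounded correlation a/3 makes explicit why the FGM family only supports weak dependence between succeeding values.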

3.
4.
This paper gives the definition of tree-indexed Markov chains in random environment with discrete state space, and then studies some equivalent theorems of tree-indexed Markov chains in random environment. Finally, we give the equivalence on tree-indexed Markov chains in Markov environment and double Markov chains indexed by a tree.

5.
Let Nn be the number of occurrences in n trials of an event governed by a two-state Markov chain (of first order or second order). We obtain the distribution of Nn and apply it to a problem involving literary text.
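For the first-order case, the distribution of Nn can be computed exactly by dynamic programming over the pair (count so far, current state). The transition matrix and initial distribution below are illustrative, not taken from the paper.

```python
import numpy as np

def count_distribution(n, P, pi0):
    """Exact pmf of N_n, the number of visits to state 1 in n trials of
    a two-state Markov chain with transition matrix P and initial
    distribution pi0. f[k, s] = P(N_t = k, X_t = s), updated stepwise."""
    f = np.zeros((n + 1, 2))
    f[0, 0] = pi0[0]
    f[1, 1] = pi0[1]   # the first trial already counts toward N_n
    for _ in range(n - 1):
        g = np.zeros_like(f)
        for k in range(n + 1):
            for s in (0, 1):
                if f[k, s] == 0.0:
                    continue
                g[k, 0] += f[k, s] * P[s, 0]       # move to state 0, count unchanged
                if k + 1 <= n:
                    g[k + 1, 1] += f[k, s] * P[s, 1]  # move to state 1, count +1
        f = g
    return f.sum(axis=1)   # marginal pmf of N_n

P = np.array([[0.7, 0.3], [0.4, 0.6]])
pmf = count_distribution(10, P, pi0=[0.5, 0.5])
print(pmf)
```

The same recursion extends to a second-order chain by enlarging the state component to the last two states.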

6.
We derive the Berry-Esseen theorem with optimal convergence rate for U-statistics and von Mises statistics associated with a special class of Markov chains occurring in the theory of dependence with complete connections.

7.
In this article, we study a class of small deviation theorems for the random variables associated with mth-order asymptotic circular Markov chains. First, the definition of an mth-order asymptotic circular Markov chain is introduced; then, by applying known limit theorems for mth-order nonhomogeneous Markov chains, the small deviation theorem on the frequencies of occurrence of states for mth-order asymptotic circular Markov chains is established. Next, the strong law of large numbers and the asymptotic equipartition property for these Markov chains are obtained. Finally, some results for mth-order nonhomogeneous Markov chains are given.
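As a toy illustration of frequency-of-occurrence limit theorems of this kind, the sketch below shows empirical state frequencies converging to the stationary distribution for a homogeneous ergodic chain; the mth-order asymptotic circular setting of the paper is more general, and the matrix here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# stationary distribution pi solves pi P = pi; compute it as the
# (normalized) left eigenvector of P for eigenvalue 1
P = np.array([[0.5, 0.5], [0.25, 0.75]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# simulate the chain and compare empirical frequencies with pi
x = np.zeros(20000, dtype=int)
for t in range(1, len(x)):
    x[t] = rng.choice(2, p=P[x[t - 1]])
freq = np.bincount(x, minlength=2) / len(x)
print(pi, freq)
```

For this matrix pi = (1/3, 2/3), and the empirical frequencies settle near those values, which is the homogeneous special case of the law of large numbers on state frequencies.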

8.
9.
A recent advance in the utility of extreme value techniques has been the characterization of the extremal behaviour of Markov chains. This has enabled the application of extreme value models to series whose temporal dependence is Markovian, subject to a limitation that prevents switching between extremely high and extremely low levels. For many applications this is sufficient, but for others, most notably in the field of finance, it is common to find series in which successive values switch between high and low levels. We term such series Markov chains with tail switching potential, and the scope of this paper is to generalize the previous theory to enable the characterization of the extremal properties of series displaying this type of behaviour. In addition to theoretical developments, a modelling procedure is proposed. A simulation study is made to assess the utility of the model in inferring the extremal dependence structure of autoregressive conditional heteroscedastic processes, which fall within the tail switching Markov family, and generalized autoregressive conditional heteroscedastic processes, which do not, being non-Markov in general. Finally, the procedure is applied to model extremal aspects of a financial index extracted from the New York Stock Exchange compendium.

10.
The aim of this note is to investigate the concentration properties of unbounded functions of geometrically ergodic Markov chains. We derive concentration properties of centred functions with respect to the square of the Lyapunov function appearing in the drift condition satisfied by the Markov chain. We apply the new exponential inequalities to derive confidence intervals for Markov chain Monte Carlo algorithms. Quantitative error bounds are provided for the regenerative Metropolis algorithm of Brockwell and Kadane [Identification of regeneration times in MCMC simulation, with application to adaptive schemes. J. Comput. Graphical Stat. 2005;14(2)].

11.
We propose a new model for multivariate Markov chains of order one or higher on the basis of the mixture transition distribution (MTD) model. We call it the MTD-Probit. The proposed model presents two attractive features: it is completely free of constraints, thereby facilitating the estimation procedure, and it is more precise at estimating the transition probabilities of a multivariate or higher-order Markov chain than the standard MTD model.
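The MTD mechanism underlying such models can be sketched as follows. This shows the basic single-matrix MTD chain, not the proposed MTD-Probit variant; the order, weights, and transition matrix are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mtd_step(history, lam, Q, rng):
    """One transition of an order-l Mixture Transition Distribution (MTD)
    chain: the next-state law is a lambda-weighted mixture of one
    transition matrix Q applied to each of the l most recent states."""
    # history[-1] is the most recent state and gets weight lam[0]
    probs = sum(w * Q[s] for w, s in zip(lam, history[::-1]))
    return rng.choice(len(probs), p=probs)

# hypothetical order-2 MTD chain on 3 states
lam = np.array([0.7, 0.3])              # lag weights, sum to 1
Q = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
x = [0, 1]
for _ in range(1000):
    x.append(mtd_step(x[-2:], lam, Q, rng))
print(len(x))
```

Because each row of Q sums to one and the weights sum to one, the mixture is automatically a valid probability vector; the constraint-free parametrization claimed for the MTD-Probit replaces these simplex constraints on the weights.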

12.
In this paper, we study the strong law of large numbers for the generalized sample relative entropy of nonhomogeneous Markov chains taking values in a finite state space. First, we introduce the definitions of generalized sample relative entropy and generalized sample relative entropy rate. Then, using a strong limit theorem for the delayed sums of functions of two variables and a strong law of large numbers for nonhomogeneous Markov chains, we obtain the strong law of large numbers for the generalized sample relative entropy of nonhomogeneous Markov chains. As corollaries, we obtain some important results.

13.
This article is devoted to the strong law of large numbers and the entropy ergodic theorem for nonhomogeneous M-bifurcating Markov chains indexed by an M-branch Cayley tree, generalizing the relevant results for tree-indexed nonhomogeneous bifurcating Markov chains. Our proof is quite different from the traditional method.

14.
Over the last decade the use of trans-dimensional sampling algorithms has become ubiquitous in the statistical literature. Despite their wide application, however, there are few reliable methods to assess whether the underlying Markov chains have reached their stationary distribution. In this article we present a distance-based method for the comparison of trans-dimensional Markov chain sample output for a broad class of models. This diagnostic simultaneously assesses deviations between and within chains. Illustration of the analysis of Markov chain sample paths is presented in simulated examples and in two common modelling situations: a finite mixture analysis and a change-point problem.

15.
Finite memory sources and variable-length Markov chains have recently gained popularity in data compression and mining, in particular, for applications in bioinformatics and language modelling. Here, we consider denser data compression and prediction with a family of sparse Bayesian predictive models for Markov chains in finite state spaces. Our approach lumps transition probabilities into classes composed of invariant probabilities, such that the resulting models need not have a hierarchical structure as in context tree-based approaches. This can lead to a substantially higher rate of data compression, and such non-hierarchical sparse models can be motivated for instance by data dependence structures existing in the bioinformatics context. We describe a Bayesian inference algorithm for learning sparse Markov models through clustering of transition probabilities. Experiments with DNA sequence and protein data show that our approach is competitive in both prediction and classification when compared with several alternative variable-memory-length methods.

16.
Consider longitudinal networks whose edges turn on and off according to a discrete-time Markov chain with exponential-family transition probabilities. We characterize when their joint distributions are also exponential families with the same parameter, improving data reduction. Further, we show that the permutation-uniform subclass of these chains permits interpretation as an independent, identically distributed sequence on the same state space. We then apply these ideas to temporal exponential random graph models, for which permutation uniformity is well suited, and discuss mean-parameter convergence, dyadic independence, and exchangeability. Our framework facilitates our introducing a new network model; simplifies analysis of some network and autoregressive models from the literature, including by permitting closed-form expressions for maximum likelihood estimates for some models; and facilitates applying standard tools to longitudinal-network Markov chains from either asymptotics or single-observation exponential random graph models.

17.
18.
We propose a two-stage algorithm for computing maximum likelihood estimates for a class of spatial models. The algorithm combines Markov chain Monte Carlo methods such as the Metropolis–Hastings–Green algorithm and the Gibbs sampler, and stochastic approximation methods such as the off-line average and adaptive search direction. A new criterion is built into the algorithm so that stopping is automatic once the desired precision has been set. Simulation studies and applications to some real data sets have been conducted with three spatial models. We compared the proposed algorithm with a direct application of the classical Robbins–Monro algorithm using Wiebe's wheat data and found that our procedure is at least 15 times faster.

19.
In many spatial and spatial-temporal models, and more generally in models with complex dependencies, it may be too difficult to carry out full maximum-likelihood (ML) analysis. Remedies include the use of pseudo-likelihood (PL) and quasi-likelihood (QL) (also called the composite likelihood). The present paper studies the ML, PL and QL methods for general Markov chain models, partly motivated by the desire to understand the precise behaviour of the PL and QL methods in settings where this can be analysed. We present limiting normality results and compare performances in different settings. For Markov chain models, the PL and QL methods can be seen as maximum penalized likelihood methods. We find that QL is typically preferable to PL, and that it loses very little to ML, while sometimes gaining in model robustness. It also has appeal and potential as a modelling tool. Our methods are illustrated for consonant-vowel transitions in poetry and for analysis of DNA sequence evolution-type models.
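For a finite-state chain, the ML and PL objectives contrasted above can be written down directly. A minimal sketch with an illustrative two-state chain follows; here the pseudo-likelihood is taken as the product of full conditionals given both neighbours in time, one common convention (the paper's exact definitions may differ).

```python
import numpy as np

def log_ml(x, P):
    """Full Markov-chain log-likelihood, conditional on the first state."""
    return sum(np.log(P[x[t - 1], x[t]]) for t in range(1, len(x)))

def log_pl(x, P):
    """Pseudo-likelihood: product of full conditionals
    p(x_t | x_{t-1}, x_{t+1}) over interior time points."""
    ll = 0.0
    for t in range(1, len(x) - 1):
        num = P[x[t - 1], x[t]] * P[x[t], x[t + 1]]
        den = sum(P[x[t - 1], s] * P[s, x[t + 1]] for s in range(P.shape[0]))
        ll += np.log(num / den)
    return ll

rng = np.random.default_rng(3)
P = np.array([[0.8, 0.2], [0.3, 0.7]])
x = [0]
for _ in range(500):
    x.append(rng.choice(2, p=P[x[-1]]))
print(log_ml(x, P), log_pl(x, P))
```

Maximizing either objective over the entries of P (subject to row-sum constraints) gives the corresponding estimator; composite/QL variants combine such conditional or marginal terms in other ways.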

20.
Hai-Bo Yu, 《随机性模型》 (Stochastic Models), 2017, 33(4): 551-571
Motivated by various applications in queueing theory, this article is devoted to the stochastic monotonicity and comparability of Markov chains with block-monotone transition matrices. First, we introduce the notion of block-increasing convex order for probability vectors, and characterize block-monotone matrices in the sense of the block-increasing order and the block-increasing convex order. Second, we characterize the Markov chain with general transition matrix by martingale and provide a stochastic comparison of two block-monotone Markov chains under the two block-monotone orders. Third, stochastic comparison results are given for the Markov chains corresponding to the discrete-time GI/G/1 queue with different service distributions under the two block-monotone orders, and lower and upper bounds for the Markov chain corresponding to the discrete-time GI/G/1 queue are found in the sense of the block-increasing convex order.
