Similar Literature
20 similar documents found.
1.
In this paper, we first introduce a tree model without a degree-boundedness restriction, namely the generalized controlled tree T, which extends several known tree models such as the homogeneous tree, the uniformly bounded-degree tree, and the controlled tree. We then obtain some limit properties, including a strong law of large numbers, for nonhomogeneous Markov chains indexed by a generalized controlled tree. Finally, we establish some entropy-density properties, the monotonicity of conditional entropy, and entropy properties for generalized controlled tree-indexed Markov chains.
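For orientation, the entropy density studied in this line of work is typically the normalized log-likelihood of the tree-indexed process; the following is a hedged sketch of the standard definition, with notation assumed rather than taken from this abstract.

```latex
% Entropy density of a tree-indexed process (standard form; notation assumed).
% T^{(n)} denotes the subtree of the first n levels and |T^{(n)}| its number of vertices.
\[
  f_n(\omega) \;=\; -\frac{1}{\lvert T^{(n)} \rvert}\,
  \log \mathbb{P}\bigl(X^{T^{(n)}} = x^{T^{(n)}}\bigr).
\]
% Entropy (AEP-type) results assert almost-sure convergence of f_n as n tends to infinity.
```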

2.
This article is devoted to the strong law of large numbers and the entropy ergodic theorem for nonhomogeneous M-bifurcating Markov chains indexed by an M-branch Cayley tree, which generalizes the relevant results for tree-indexed nonhomogeneous bifurcating Markov chains. Moreover, our proof is quite different from the traditional method.

3.
4.
5.
In this article, we study a class of small deviation theorems for random variables associated with mth-order asymptotic circular Markov chains. First, the definition of an mth-order asymptotic circular Markov chain is introduced; then, by applying known limit theorems for mth-order nonhomogeneous Markov chains, we establish a small deviation theorem on the frequencies of occurrence of states for mth-order asymptotic circular Markov chains. Next, the strong law of large numbers and the asymptotic equipartition property for these Markov chains are obtained. Finally, some results for mth-order nonhomogeneous Markov chains are given.
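As a point of reference, the "frequency of occurrence of states" in results of this kind is usually the empirical visit frequency; a hedged sketch of that quantity follows, with notation assumed.

```latex
% Empirical frequency of visits to state i up to time n (standard object; notation assumed).
\[
  \frac{S_n(i)}{n} \;=\; \frac{1}{n}\sum_{k=1}^{n} \mathbf{1}_{\{X_k = i\}}.
\]
% Small deviation theorems bound the almost-sure fluctuation of S_n(i)/n around its limiting value.
```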

6.
In the class of discrete-time Markovian processes, two models are widely used: the Markov chain and the hidden Markov model. A major difference between these two models lies in the relation between successive outputs of the observed variable. In a visible Markov chain, these are directly correlated, while in hidden models they are not. However, in some situations it is possible to observe both a hidden Markov chain and a direct relation between successive observed outputs. Unfortunately, the use of either a visible or a hidden model implies the suppression of one of these hypotheses. This paper presents a Markovian model under a random environment, called the Double Chain Markov Model, which takes into account the main features of both visible and hidden models. Its main purpose is the modeling of non-homogeneous time series. It is very flexible and can be estimated with traditional methods. The model is applied to a sequence of wind speeds, and it appears to model the data more successfully than both the usual Markov chains and hidden Markov models.
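To make the structure concrete, here is a minimal simulation sketch of a double-chain-style model in which a hidden Markov chain selects which transition matrix drives the observed chain. All matrix values and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden environment chain: 2 states, its own transition matrix (illustrative values).
A_hidden = np.array([[0.9, 0.1],
                     [0.2, 0.8]])

# Observed chain: 2 states; its transition matrix depends on the current hidden state.
B_obs = np.array([[[0.7, 0.3],    # used when hidden state = 0
                   [0.4, 0.6]],
                  [[0.2, 0.8],    # used when hidden state = 1
                   [0.5, 0.5]]])

def simulate(n_steps, s0=0, x0=0):
    """Simulate (hidden, observed) paths of a double-chain-style model."""
    s, x = s0, x0
    hidden, observed = [s], [x]
    for _ in range(n_steps):
        s = rng.choice(2, p=A_hidden[s])    # hidden chain evolves on its own
        x = rng.choice(2, p=B_obs[s, x])    # observed transition depends on (s, x)
        hidden.append(s)
        observed.append(x)
    return np.array(hidden), np.array(observed)

hidden_path, observed_path = simulate(1000)
print("empirical observed transition counts:\n",
      np.histogram2d(observed_path[:-1], observed_path[1:], bins=2)[0])
```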

7.
Consider longitudinal networks whose edges turn on and off according to a discrete-time Markov chain with exponential-family transition probabilities. We characterize when their joint distributions are also exponential families with the same parameter, improving data reduction. Further, we show that the permutation-uniform subclass of these chains permits interpretation as an independent, identically distributed sequence on the same state space. We then apply these ideas to temporal exponential random graph models, for which permutation uniformity is well suited, and discuss mean-parameter convergence, dyadic independence, and exchangeability. Our framework facilitates the introduction of a new network model; simplifies the analysis of some network and autoregressive models from the literature, including by permitting closed-form expressions for maximum likelihood estimates for some models; and facilitates applying standard tools to longitudinal-network Markov chains from either asymptotics or single-observation exponential random graph models.
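For context, an exponential-family transition probability for such a network chain typically has the following shape. This is a hedged generic form with assumed notation (θ the parameter, s a vector of sufficient statistics), not the paper's exact specification.

```latex
% Generic exponential-family transition probability for a network-valued chain
% (assumed notation: y' = next network, y = current network, s = sufficient statistics).
\[
  \mathbb{P}_{\theta}\bigl(Y_t = y' \,\big|\, Y_{t-1} = y\bigr)
  \;=\; \frac{\exp\bigl(\theta^{\top} s(y', y)\bigr)}
             {\sum_{z} \exp\bigl(\theta^{\top} s(z, y)\bigr)}.
\]
```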

8.
We first introduce fuzzy finite Markov chains and present some of their fundamental properties based on possibility theory. We also describe a way to convert fuzzy Markov chains into classical Markov chains. In addition, we simulate fuzzy Markov chains of different sizes. We observe that most fuzzy Markov chains not only exhibit ergodic behavior but are also periodic. Finally, using the Halton quasi-random sequence, we generate some fuzzy Markov chains, which are compared with ones generated by the RAND function of MATLAB. In this way, we improve the periodicity behavior of fuzzy Markov chains.
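As an illustration of the possibility-theoretic setting, powers of a fuzzy transition matrix are usually taken under max–min composition rather than ordinary matrix multiplication; the sketch below iterates such powers to see whether they settle down or cycle. The matrix values are illustrative assumptions.

```python
import numpy as np

def maxmin_compose(P, Q):
    """Max-min composition of two fuzzy relations: (P o Q)[i, j] = max_k min(P[i, k], Q[k, j])."""
    n = P.shape[0]
    R = np.zeros_like(P)
    for i in range(n):
        for j in range(n):
            R[i, j] = np.max(np.minimum(P[i, :], Q[:, j]))
    return R

# Illustrative fuzzy transition matrix (possibility degrees; each row's maximum is 1).
P = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.6],
              [0.5, 0.7, 1.0]])

power = P.copy()
for t in range(2, 11):
    nxt = maxmin_compose(power, P)
    if np.allclose(nxt, power):
        print(f"powers stabilized at t = {t - 1} (ergodic-like behavior)")
        break
    power = nxt
else:
    print("no stabilization within 10 steps (possible periodic behavior)")
```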

9.
This paper is concerned with the asymptotic properties of delayed sums for rowwise conditionally independent stochastic arrays. The main technique of the proof is to construct nonnegative random variables with one parameter and to apply the Borel–Cantelli lemma to obtain almost-everywhere convergence. The relevant results for nonhomogeneous Markov chains indexed by a tree are extended.
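For reference, the delayed sums in question are usually partial sums over a shifted window; a hedged sketch of the standard definition, with notation assumed:

```latex
% Delayed (lag) sum of a sequence {X_i}: the partial sum over a window of length k
% starting after index n (standard definition; notation assumed).
\[
  T_{n,k} \;=\; \sum_{i=n+1}^{n+k} X_i .
\]
```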

10.
We consider a bootstrap method for Markov chains where the original chain is broken into a (random) number of cycles based on an atom (regeneration point) and the bootstrap scheme resamples from these cycles. We investigate the asymptotic accuracy of this method for the case of a sum (or a sample mean) related to the Markov chain. Under some standard moment conditions, the method is shown to be at least as good as the normal approximation, and better (second-order accurate) in the case of nonlattice summands. We give three examples to illustrate the applicability of our results.
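To illustrate the cycle-based resampling idea, here is a minimal sketch that splits a simulated chain into regeneration cycles at returns to an atom (state 0 here, an illustrative choice) and bootstraps the sample mean from resampled cycles. All details are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a small finite-state Markov chain (illustrative transition matrix).
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
n = 5000
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = rng.choice(3, p=P[x[t - 1]])

# Split the path into cycles between successive visits to the atom (state 0).
atom_times = np.flatnonzero(x == 0)
cycles = [x[a:b] for a, b in zip(atom_times[:-1], atom_times[1:])]

def bootstrap_mean(cycles, n_boot=1000):
    """Resample whole cycles with replacement and return bootstrap sample means."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(cycles), size=len(cycles))
        resampled = np.concatenate([cycles[i] for i in idx])
        means[b] = resampled.mean()
    return means

boot_means = bootstrap_mean(cycles)
print("bootstrap estimate of the mean:", boot_means.mean(),
      "  approx. std. error:", boot_means.std(ddof=1))
```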

11.
In this article, we study the strong laws of large numbers and the asymptotic equipartition property (AEP) for mth-order asymptotic odd–even Markov chains indexed by an m-rooted Cayley tree. First, the definition of mth-order asymptotic odd–even Markov chains indexed by an m-rooted Cayley tree is introduced; then the strong limit theorem for these Markov chains is established. Next, the strong laws of large numbers for the frequencies of ordered couples of states for mth-order asymptotic odd–even Markov chains indexed by an m-rooted Cayley tree are obtained. Finally, we prove the AEP for these Markov chains.

12.
We define a notion of de-initializing Markov chains. We prove that to analyse convergence of Markov chains to stationarity, it suffices to analyse convergence of a de-initializing chain. Applications are given to Markov chain Monte Carlo algorithms and to convergence diagnostics.

13.
Many tasks in image analysis can be formulated as problems of discrimination or, more generally, of pattern recognition. A pattern-recognition system is normally considered to comprise two processing stages: the feature selection and extraction stage, which attempts to reduce the dimensionality of the pattern to be classified, and the classification stage, the purpose of which is to assign the pattern to its perceptually meaningful category. This paper gives an overview of the various approaches to designing statistical pattern recognition schemes. The problem of feature selection and extraction is introduced. The discussion then focuses on statistical decision-theoretic rules and their implementation. Both parametric and non-parametric classification methods are covered. The emphasis then switches to decision making in context. Two basic formulations of contextual pattern classification are put forward, and the various methods developed from these two formulations are reviewed. These include the method of hidden Markov chains, the Markov random field approach, Markov meshes, and probabilistic and discrete relaxation.

14.

15.
Park, Joonha; Atchadé, Yves. Statistics and Computing, 2020, 30(5): 1325–1345

We explore a general framework in Markov chain Monte Carlo (MCMC) sampling in which sequential proposals are tried as candidates for the next state of the Markov chain. This sequential-proposal framework can be applied to various existing MCMC methods, including Metropolis–Hastings algorithms using random proposals and methods that use deterministic proposals, such as Hamiltonian Monte Carlo (HMC) or the bouncy particle sampler. Sequential-proposal MCMC methods construct the same Markov chains as those constructed by the delayed rejection method under certain circumstances. In the context of HMC, the sequential-proposal approach has previously been proposed as extra chance generalized hybrid Monte Carlo (XCGHMC). We develop two novel methods in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back, as in the No-U-Turn Sampler (NUTS). The numerical efficiency of these new methods compares favorably to that of NUTS. We additionally show that the sequential-proposal bouncy particle sampler enables the constructed Markov chain to pass through regions of low target density and thus facilitates better mixing of the chain when the target density is multimodal.
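As a rough illustration of the recycled-threshold idea (a single uniform draw reused across successive proposals), here is a minimal sequential-proposal random-walk Metropolis sketch with a symmetric Gaussian kernel. The target, step size, and maximum number of tries are illustrative assumptions, and this is a sketch of my reading of the framework, not the authors' algorithm or code.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Illustrative bimodal 1-D target (mixture of two Gaussians), up to a constant."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def sequential_proposal_step(x, step=1.0, max_tries=5):
    """One step: reuse a single uniform threshold across successive symmetric proposals."""
    log_u = np.log(rng.uniform())               # one acceptance threshold for all tries
    y = x
    for _ in range(max_tries):
        y = y + step * rng.standard_normal()    # next proposal from the previous proposal
        if log_u < log_target(y) - log_target(x):
            return y                            # accept the first proposal that clears it
    return x                                    # all tries failed: stay put

chain = np.empty(20000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = sequential_proposal_step(chain[t - 1])
print("fraction of samples near each mode:",
      np.mean(chain > 0), np.mean(chain < 0))
```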


16.
Markov chains are used to model binary urine test results. Taking advantage of the transition mechanism of Markov chains, missing observations can be incorporated in the analysis. Maximum likelihood estimates of transition probabilities are computed. Formulas for empirical Bayes procedures are given.
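For the complete-data case, the maximum likelihood estimate of each transition probability is simply the corresponding transition count divided by its row total; a minimal sketch for binary sequences follows (the paper's handling of missing observations is more involved and is not reproduced here).

```python
import numpy as np

def mle_transition_matrix(sequences, n_states=2):
    """Count-based MLE of transition probabilities from fully observed 0/1 sequences."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)   # avoid division by zero

# Illustrative binary test-result sequences (one per subject).
sequences = [[0, 0, 1, 1, 0], [1, 1, 1, 0], [0, 1, 0, 0, 0, 1]]
print(mle_transition_matrix(sequences))
```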

17.
Stochastic Models, 2013, 29(2): 109–120
This paper is concerned with ergodic Markov chains satisfying a sequence of drift conditions that imply (f, r)-regularity of the chain, by which subgeometric ergodicity is ensured. An interesting exact trade-off result between the exponents of f and r, obtained for a special class of state-space models by Tuominen and Tweedie (1994), is extended here from integers to real numbers for general Markov chains satisfying these drift conditions simultaneously, as well as the standard requirements for ergodic Markov chains. In Section 3, we illustrate through state-space models that these drift conditions provide a very convenient way to establish subgeometric ergodicity of Markov chains, including the exact trade-off between the exponents of f and r.
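For orientation, drift conditions used to establish subgeometric rates are commonly stated in a form like the following. This is a hedged generic version with assumed notation (V a drift function, φ a concave rate function, C a small set), not necessarily the exact sequence of conditions used in the paper.

```latex
% A common subgeometric drift condition (generic form; notation assumed):
% V maps the state space into [1, infinity), phi is concave and increasing,
% C is a small set, and b is a finite constant.
\[
  PV(x) \;\le\; V(x) \;-\; \varphi\bigl(V(x)\bigr) \;+\; b\,\mathbf{1}_{C}(x),
  \qquad x \in \mathsf{X}.
\]
```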

18.
This paper considers the computation of the conditional stationary distribution in Markov chains of level-dependent M/G/1-type, given that the level is not greater than a predefined threshold. This problem has been studied recently, and a computational algorithm has been proposed under the assumption that the matrices representing downward jumps are nonsingular. We first show that this assumption can be eliminated in the general setting of Markov chains of level-dependent G/G/1-type. Next, we develop a computational algorithm for the conditional stationary distribution in Markov chains of level-dependent M/G/1-type by modifying the above-mentioned algorithm slightly. In principle, our algorithm is applicable to any Markov chain of level-dependent M/G/1-type, provided the Markov chain is irreducible and positive recurrent. Furthermore, as an input to the algorithm, we can set an error bound for the computed conditional distribution, which is a notable feature of our algorithm. Some numerical examples are also provided.
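As a reminder of the block structure involved, a level-dependent M/G/1-type chain is skip-free to lower levels, so its transition probability matrix, partitioned by level, typically looks as follows. This is a generic form with assumed block notation, not necessarily the notation of the paper.

```latex
% Level-partitioned transition matrix of a level-dependent M/G/1-type chain
% (generic skip-free-to-the-left form; block notation assumed).
\[
  P \;=\;
  \begin{pmatrix}
    B_{0}^{(0)} & B_{1}^{(0)} & B_{2}^{(0)} & B_{3}^{(0)} & \cdots \\
    A_{0}^{(1)} & A_{1}^{(1)} & A_{2}^{(1)} & A_{3}^{(1)} & \cdots \\
    O           & A_{0}^{(2)} & A_{1}^{(2)} & A_{2}^{(2)} & \cdots \\
    O           & O           & A_{0}^{(3)} & A_{1}^{(3)} & \cdots \\
    \vdots      & \vdots      & \vdots      & \vdots      & \ddots
  \end{pmatrix},
\]
% where the blocks A_{0}^{(n)} govern one-level-down transitions
% (the "downward jump" matrices assumed nonsingular in the earlier work mentioned above).
```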

19.
In this paper, we study the strong law of large numbers for the generalized sample relative entropy of nonhomogeneous Markov chains taking values in a finite state space. First, we introduce the definitions of the generalized sample relative entropy and the generalized sample relative entropy rate. Then, using a strong limit theorem for delayed sums of functions of two variables and a strong law of large numbers for nonhomogeneous Markov chains, we obtain the strong law of large numbers for the generalized sample relative entropy of nonhomogeneous Markov chains. As corollaries, we obtain some important results.
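For orientation, the (non-generalized) sample relative entropy of a Markov chain with respect to a reference chain is usually a normalized log-likelihood ratio along the observed path; a hedged sketch of that standard object follows, with notation assumed (the paper's generalized version is not reproduced here).

```latex
% Sample relative entropy of the path (X_0, ..., X_n) between a chain with transition
% matrices P_k and a reference chain with transition matrices Q_k
% (standard form; notation assumed).
\[
  h_n(\omega) \;=\; \frac{1}{n}
  \log \frac{p(X_0)\prod_{k=1}^{n} P_k(X_{k-1}, X_k)}
            {q(X_0)\prod_{k=1}^{n} Q_k(X_{k-1}, X_k)} .
\]
```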

20.
Hai-Bo Yu. Stochastic Models, 2017, 33(4): 551–571
Motivated by various applications in queueing theory, this article is devoted to the stochastic monotonicity and comparability of Markov chains with block-monotone transition matrices. First, we introduce the notion of the block-increasing convex order for probability vectors and characterize block-monotone matrices in the sense of the block-increasing order and the block-increasing convex order. Second, we characterize the Markov chain with a general transition matrix by a martingale and provide a stochastic comparison of two block-monotone Markov chains under the two block-monotone orders. Third, stochastic comparison results are given for the Markov chains corresponding to the discrete-time GI/G/1 queue with different service distributions under the two block-monotone orders, and lower and upper bounds on the Markov chain corresponding to the discrete-time GI/G/1 queue are found in the sense of the block-increasing convex order.
