Similar Literature
Found 20 similar articles (search time: 755 ms)
1.
We first introduce fuzzy finite Markov chains and present some of their fundamental properties based on possibility theory. We also describe a way to convert fuzzy Markov chains into classical Markov chains. In addition, we simulate fuzzy Markov chains of different sizes. It is observed that most fuzzy Markov chains not only exhibit ergodic behavior but are also periodic. Finally, using the Halton quasi-random sequence we generate fuzzy Markov chains and compare them with those generated by MATLAB's RAND function, thereby improving the periodicity behavior of fuzzy Markov chains.
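The classical side of such a simulation is easy to reproduce. The sketch below is our own illustration, not the authors' code: it uses plain pseudo-random numbers rather than fuzzy quantities or Halton points, and the 4-state size is an arbitrary choice. It builds a random row-stochastic matrix and estimates its stationary distribution by power iteration:

```python
import random

def random_stochastic_matrix(n, rng):
    """Row-stochastic matrix with uniform entries, each row normalized to sum to 1."""
    rows = []
    for _ in range(n):
        w = [rng.random() for _ in range(n)]
        s = sum(w)
        rows.append([x / s for x in w])
    return rows

def stationary_distribution(P, iters=10_000):
    """Estimate the stationary distribution by power iteration (pi <- pi P)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = random_stochastic_matrix(4, random.Random(0))
pi = stationary_distribution(P)
```

A strictly positive random matrix like this one is aperiodic and ergodic, so the iteration converges; the periodicity the abstract reports is a feature of the possibilistic (max–min) calculus, not of this classical sketch.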

2.
This paper gives the definition of tree-indexed Markov chains in a random environment with discrete state space, and then studies some equivalence theorems for such chains. Finally, we establish the equivalence between tree-indexed Markov chains in a Markov environment and double Markov chains indexed by a tree.

3.
In this article, we study the strong laws of large numbers for countable non-homogeneous hidden Markov models. First, we introduce the notion of countable non-homogeneous hidden Markov models. Then we obtain some properties of these models. Finally, we establish two strong laws of large numbers for countable non-homogeneous hidden Markov models. As corollaries, we obtain some known strong laws of large numbers for finite non-homogeneous Markov chains.

4.
Alternative Markov Properties for Chain Graphs
Graphical Markov models use graphs to represent possible dependences among statistical variables. Lauritzen, Wermuth, and Frydenberg (LWF) introduced a Markov property for chain graphs (CG): graphs that can be used to represent both structural and associative dependences simultaneously and that include both undirected graphs (UG) and acyclic directed graphs (ADG) as special cases. Here an alternative Markov property (AMP) for CGs is introduced and shown to be the Markov property satisfied by a block-recursive linear system with multivariate normal errors. This model can be decomposed into a collection of conditional normal models, each of which combines the features of multivariate linear regression models and covariance selection models, facilitating the estimation of its parameters. In the general case, necessary and sufficient conditions are given for the equivalence of the LWF and AMP Markov properties of a CG, for the AMP Markov equivalence of two CGs, for the AMP Markov equivalence of a CG to some ADG or decomposable UG, and for other equivalences. For CGs, in some ways the AMP property is a more direct extension of the ADG Markov property than is the LWF property.

5.
We consider the problem of estimating the rate matrix governing a finite-state Markov jump process given a number of fragmented time series. We propose to concatenate the observed series and to employ the emerging non-Markov process for estimation. We describe the bias arising if standard methods for Markov processes are used for the concatenated process, and provide a post-processing method to correct for this bias. This method applies to discrete-time Markov chains and to more general models based on Markov jump processes where the underlying state process is not observed directly. This is demonstrated in detail for a Markov switching model. We provide applications to simulated time series and to financial market data, where estimators resulting from maximum likelihood methods and Markov chain Monte Carlo sampling are improved using the presented correction.
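As a toy illustration of where the concatenation bias comes from (our own discrete-time sketch, not the paper's jump-process estimator or its post-processing correction): the naive estimator counts the spurious transitions at the joins between fragments, whereas counting transitions within each fragment only avoids them:

```python
import random

def simulate_chain(P, n, start, rng):
    """Sample a path of length n from transition matrix P."""
    x = [start]
    for _ in range(n - 1):
        x.append(rng.choices(range(len(P)), weights=P[x[-1]])[0])
    return x

def estimate_transitions(fragments, n_states):
    """MLE of the transition matrix, counting transitions within each fragment only."""
    counts = [[0] * n_states for _ in range(n_states)]
    for f in fragments:
        for a, b in zip(f, f[1:]):
            counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

rng = random.Random(0)
P_true = [[0.9, 0.1], [0.2, 0.8]]
fragments = [simulate_chain(P_true, 200, rng.randrange(2), rng) for _ in range(50)]
P_naive = estimate_transitions([sum(fragments, [])], 2)  # joins add spurious transitions
P_corrected = estimate_transitions(fragments, 2)         # junction transitions excluded
```

With many short fragments the junction transitions form a non-negligible share of all counted transitions, which is the bias the paper's correction removes in the more general latent-state setting.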

6.
We define a notion of de-initializing Markov chains. We prove that to analyse convergence of Markov chains to stationarity, it suffices to analyse convergence of a de-initializing chain. Applications are given to Markov chain Monte Carlo algorithms and to convergence diagnostics.

7.
8.
9.
In this article, a stock-forecasting model is developed to analyze the stock price variation of the Taiwanese company HTC. The main difference from previous articles is that this study uses ten years of recent HTC data to build a Markov transition matrix. Instead of predicting the stock price variation through the traditional approach, we integrate two types of Markov chain that are used in different ways: a regular Markov chain and an absorbing Markov chain. Through the regular Markov chain, we can efficiently obtain important information, such as what happens in the long run or whether the distribution of the states tends to stabilize over time. Next, we use an artificial-variable technique to create an absorbing Markov chain, which provides information about the period of increases before the HTC stock reaches a decreasing state. We thus provide investors with information on how long the HTC stock will keep increasing before its price begins to fall, which is extremely important to them.
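The absorbing-chain computation this kind of analysis relies on is the standard fundamental-matrix identity t = (I − Q)^(-1)·1, where Q holds the transitions among transient states and t the expected number of steps to absorption. A minimal sketch with hypothetical two-state numbers (not HTC's actual transition probabilities):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical transient states "rising" and "flat"; "falling" is absorbing.
Q = [[0.6, 0.2],
     [0.3, 0.4]]
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)] for i in range(2)]
t = solve(I_minus_Q, [1.0, 1.0])  # expected steps before the price starts falling
```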

10.
In this article, we introduce a two-state homogeneous Markov chain and define a geometric distribution related to it. We also define a negative binomial (NB) distribution analogous to the classical case, related to the interrupted Markov chain, and a new binomial distribution likewise related to the interrupted Markov chain. Some characterization properties of the geometric distributions are given. Recursion formulas and probability mass functions for the NB distribution and the new binomial distribution are derived.

11.
In the class of discrete-time Markovian processes, two models are widely used: the Markov chain and the hidden Markov model. A major difference between these two models lies in the relation between successive outputs of the observed variable. In a visible Markov chain these are directly correlated, while in hidden models they are not. However, in some situations it is possible to observe both a hidden Markov chain and a direct relation between successive observed outputs. Unfortunately, the use of either a visible or a hidden model implies the suppression of one of these hypotheses. This paper presents a Markovian model under a random environment, called the Double Chain Markov Model, which takes into account the main features of both visible and hidden models. Its main purpose is the modeling of non-homogeneous time series. It is very flexible and can be estimated with traditional methods. The model is applied to a sequence of wind speeds and appears to model the data more successfully than both the usual Markov chains and hidden Markov models.
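The structure of such a model (a Markov chain in a Markov environment) can be sketched directly: a hidden chain selects, at each step, which transition matrix drives the observed chain, so successive outputs remain directly correlated. All matrices below are made-up illustrations, not estimates from the wind-speed data:

```python
import random

def simulate_dcmm(A, Ps, n, seed=0):
    """Sketch of a Double Chain Markov Model: a hidden chain with transition
    matrix A selects, at every step, which observed-chain transition matrix
    in Ps applies to the current observed state."""
    rng = random.Random(seed)
    k, m = len(A), len(Ps[0])
    h, x = rng.randrange(k), rng.randrange(m)
    hidden, observed = [h], [x]
    for _ in range(n - 1):
        h = rng.choices(range(k), weights=A[h])[0]      # hidden environment moves
        x = rng.choices(range(m), weights=Ps[h][x])[0]  # observed chain moves under it
        hidden.append(h)
        observed.append(x)
    return hidden, observed

A = [[0.95, 0.05], [0.05, 0.95]]       # slowly switching environment (made up)
Ps = [[[0.8, 0.2], [0.3, 0.7]],        # regime-0 dynamics (made up)
      [[0.2, 0.8], [0.6, 0.4]]]        # regime-1 dynamics (made up)
hidden, observed = simulate_dcmm(A, Ps, 500)
```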

12.
Risk-adjusted CUSUM schemes are designed to monitor the number of adverse outcomes following a medical procedure. An approximation of the average run length (ARL), the usual performance measure for a risk-adjusted CUSUM, may be found using its Markov property. We compare two methods of computing transition probability matrices where the risk model classifies patient populations into discrete, finite levels of risk. For the first method, a process of scaling and rounding off concentrates probability in the center of the Markov states, which are non-overlapping sub-intervals of the CUSUM decision interval; for the second, a smoothing process spreads probability uniformly across the Markov states. Examples of risk-adjusted CUSUM schemes are used to show that, if rounding is used to calculate transition probabilities, the ARL values estimated using the Markov property vary erratically as the number of Markov states varies and, on occasion, fail to converge for mesh sizes up to 3,000. On the other hand, if smoothing is used, the approximate ARL values remain stable as the number of Markov states varies. The smoothing technique gave good estimates of the ARL where there were fewer than 1,000 Markov states.
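The Markov-property approximation that both discretization methods feed into is standard: with R the transition matrix restricted to the in-control (transient) states, the expected run lengths solve t = 1 + R·t. A minimal sketch for a simple Bernoulli CUSUM with integer scores (our own example, not a risk-adjusted scheme, so the discretization is exact and no rounding-versus-smoothing issue arises):

```python
def cusum_arl(p, h, iters=5000):
    """ARL of a Bernoulli CUSUM S_t = max(0, S_{t-1} + X_t), X_t = +1 w.p. p else -1,
    signalling when S_t >= h.  Transient states 0..h-1; run lengths solve t = 1 + R t,
    computed here by fixed-point iteration (R is substochastic, so this converges)."""
    R = [[0.0] * h for _ in range(h)]
    for i in range(h):
        R[i][max(i - 1, 0)] += 1 - p   # down move (or stay at the reflecting barrier 0)
        if i + 1 < h:
            R[i][i + 1] += p           # up move, unless it triggers the signal
    t = [0.0] * h
    for _ in range(iters):
        t = [1 + sum(R[i][j] * t[j] for j in range(h)) for i in range(h)]
    return t[0]   # ARL from the usual starting state S_0 = 0
```

For p = 0.5 and h = 3 the linear system can be solved by hand, giving an ARL of 12, which the iteration reproduces.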

13.
We consider conditional exact tests of factor effects in design of experiments for discrete response variables. Similarly to the analysis of contingency tables, Markov chain Monte Carlo methods can be used to perform exact tests, especially when large-sample approximations of the null distributions are poor and the enumeration of the conditional sample space is infeasible. In order to construct a connected Markov chain over the appropriate sample space, one approach is to compute a Markov basis. Theoretically, a Markov basis can be characterized as a generator of a well-specified toric ideal in a polynomial ring and is computed by computational algebraic software. However, the computation of a Markov basis sometimes becomes infeasible, even for problems of moderate sizes. In the present article, we obtain the closed-form expression of minimal Markov bases for the main effect models of 2^(p-1) fractional factorial designs of resolution p.

14.
Markov Sampling     
A discrete parameter stochastic process is observed at epochs of visits to a specified state in an independent two-state Markov chain. It is established that the family of finite dimensional distributions of the process derived in this way, referred to as Markov sampling, uniquely determines the stochastic structure of the original process. Using this identifiability, it is shown that if the derived process is Markov, then the original process is also Markov and if the derived process is strictly stationary then so is the original.

15.
Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg’s (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.

16.
When using an auxiliary Markov chain to compute the distribution of a pattern statistic, the computational complexity is directly related to the number of Markov chain states. Theory related to minimal deterministic finite automata has been applied to large state spaces to reduce the number of Markov chain states so that only a minimal set remains. In this paper, a characterization of equivalent states is given so that extraneous states are deleted during the process of forming the state space, improving computational efficiency. The theory extends the applicability of Markov chain-based methods for computing the distribution of pattern statistics.
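A small example of the auxiliary-chain idea (our own sketch, using the pattern's prefix automaton as the state space rather than the minimized automata the paper discusses): each state records how much of the pattern the current suffix matches, and propagating a probability distribution over the states yields the probability that the pattern occurs:

```python
def prob_pattern_occurs(pattern, n, p=0.5):
    """P(a given H/T pattern appears within n independent tosses), computed by
    propagating a distribution over the states of the pattern's prefix automaton.
    State k = length of the longest pattern prefix matching the current suffix;
    state m = len(pattern) is absorbing ("pattern has occurred")."""
    m = len(pattern)

    def border(k):
        # longest proper prefix of pattern[:k] that is also a suffix of it
        for b in range(k - 1, 0, -1):
            if pattern[:b] == pattern[k - b:k]:
                return b
        return 0

    def step(k, s):
        # KMP-style transition: extend the match or fall back along borders
        while True:
            if pattern[k] == s:
                return k + 1
            if k == 0:
                return 0
            k = border(k)

    dist = [1.0] + [0.0] * m
    for _ in range(n):
        new = [0.0] * (m + 1)
        new[m] = dist[m]                       # absorbed mass stays absorbed
        for k in range(m):
            for s, ps in (("H", p), (("T"), 1 - p)):
                new[step(k, s)] += dist[k] * ps
        dist = new
    return dist[m]
```

For example, "HH" occurs within 3 fair tosses with probability 3/8, which the recursion reproduces.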

17.
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high‐dimensional linear regression. In recent work, a Bayesian shrinkage approach using generalized double Pareto priors has been proposed. Several useful properties of this approach, including the derivation of a tractable three‐block Gibbs sampler to sample from the resulting posterior density, have been established. We show that the Markov operator corresponding to this three‐block Gibbs sampler is not Hilbert–Schmidt. We propose a simpler two‐block Gibbs sampler and show that the corresponding Markov operator is trace class (and hence Hilbert–Schmidt). Establishing the trace class property for the proposed two‐block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain central limit theorem, which in turn enables computation of asymptotic standard errors for Markov chain‐based estimates of posterior quantities. Secondly, because the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two‐block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency resulting from our proposed Markov chains using simulated and real examples.
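The mechanics of a two-block Gibbs sampler are easy to illustrate on a standard toy target (a bivariate normal with known correlation, not the generalized double Pareto posterior studied in the paper): each block is drawn from its full conditional given the current value of the other block:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn=500, seed=1):
    """Two-block Gibbs sampler for (X, Y) ~ N(0, [[1, rho], [rho, 1]]).
    Block 1: X | Y = y ~ N(rho*y, 1 - rho^2); block 2 is symmetric."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    out = []
    for i in range(n_samples + burn):
        x = rng.gauss(rho * y, sd)   # draw block 1 from its full conditional
        y = rng.gauss(rho * x, sd)   # draw block 2 given the fresh block 1
        if i >= burn:
            out.append((x, y))
    return out

samples = gibbs_bivariate_normal(0.8, 20_000)
```

The chain's autocorrelation grows with rho, which is the kind of mixing behaviour the operator-theoretic properties (trace class, geometric ergodicity) quantify for the samplers in the paper.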

18.
Markov networks are popular models for discrete multivariate systems where the dependence structure of the variables is specified by an undirected graph. To allow for more expressive dependence structures, several generalizations of Markov networks have been proposed. Here, we consider the class of contextual Markov networks which takes into account possible context‐specific independences among pairs of variables. Structure learning of contextual Markov networks is very challenging due to the extremely large number of possible structures. One of the main challenges has been to design a score, by which a structure can be assessed in terms of model fit related to complexity, without assuming chordality. Here, we introduce the marginal pseudo‐likelihood as an analytically tractable criterion for general contextual Markov networks. Our criterion is shown to yield a consistent structure estimator. Experiments demonstrate the favourable properties of our method in terms of predictive accuracy of the inferred models.

19.
We address the issue of order identification for hidden Markov models with Poisson and Gaussian emissions. We prove information-theoretic BIC-like mixture inequalities in the spirit of Finesso [1991. Consistent estimation of the order for Markov and hidden Markov chains. Ph.D. Thesis, University of Maryland]; Liu and Narayan [1994. Order estimation and sequential universal data compression of a hidden Markov source by the method of mixtures. Canad. J. Statist. 30(4), 573–589]; Gassiat and Boucheron [2003. Optimal error exponents in hidden Markov models order estimation. IEEE Trans. Inform. Theory 49(4), 964–980]. These inequalities lead to consistent penalized estimators that need no prior bound on the order. A simulation study and an application to postural analysis in humans are provided.

20.
When the unobservable Markov chain in a hidden Markov model is stationary, the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of states of the Markov chain. This suggests the number of states of the unobservable Markov chain can be estimated by determining the number of mixture components in the marginal distribution. This paper presents new methods for estimating the number of states in a hidden Markov model, and coincidentally the unknown number of components in a finite mixture, based on penalized quasi‐likelihood and generalized quasi‐likelihood ratio methods constructed from the marginal distribution. The procedures advocated are simple to calculate, and results obtained in empirical applications indicate that they are as effective as currently available methods based on the full likelihood. Under fairly general regularity conditions, the methods proposed generate strongly consistent estimates of the unknown number of states or components.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号