Similar Articles
20 similar articles found
1.
When the unobservable Markov chain in a hidden Markov model is stationary, the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of states of the Markov chain. This suggests that the number of states of the unobservable Markov chain can be estimated by determining the number of mixture components in the marginal distribution. This paper presents new methods for estimating the number of states in a hidden Markov model, and coincidentally the unknown number of components in a finite mixture, based on penalized quasi-likelihood and generalized quasi-likelihood ratio methods constructed from the marginal distribution. The procedures advocated are simple to calculate, and results obtained in empirical applications indicate that they are as effective as currently available methods based on the full likelihood. Under fairly general regularity conditions, the methods proposed generate strongly consistent estimates of the unknown number of states or components.
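
The core idea — treat the stationary marginal as a finite mixture and penalize its fitted likelihood over candidate orders — can be illustrated with an off-the-shelf mixture fit. A minimal sketch using BIC as the penalized criterion (the paper's penalized quasi-likelihood differs in detail); the Gaussian emission family and all parameter values are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate observations from a 3-state Gaussian HMM; marginally they
# form a 3-component mixture when the hidden chain is stationary.
P = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
means = np.array([-3.0, 0.0, 3.0])
states = [0]
for _ in range(1999):
    states.append(int(rng.choice(3, p=P[states[-1]])))
y = means[states] + rng.normal(size=2000)

# Fit mixtures with k = 1..6 components and penalize the likelihood (BIC here).
X = y.reshape(-1, 1)
bic = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 7)]
print("estimated number of states:", int(np.argmin(bic)) + 1)
```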

2.
A Markov Renewal Process (M.R.P.) is one which records, at each time t, the number of times a system visits each of m states in time t, when the transitions from state to state follow a Markov chain and the time required for each successive move is a random variable whose distribution function (d.f.) depends on the two states between which the move is made. In this paper, the distribution of the number of times each state is visited in an arbitrary interval (t0, t0 + t) is derived. Asymptotic expressions for the mean and variance of this distribution are also obtained.
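
The quantity studied — the number of visits to each state in an arbitrary interval — is easy to approximate by simulation. A minimal sketch with a hypothetical two-state M.R.P. whose sojourn d.f.s are exponential with rates depending on the (from, to) pair; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.3, 0.7], [0.6, 0.4]])      # embedded Markov chain
rate = np.array([[1.0, 2.0], [0.5, 1.5]])   # sojourn rate for each (from, to) pair

def visits(t0, t, state=0):
    """Count entries into each state during the interval (t0, t0 + t)."""
    clock, counts = 0.0, np.zeros(2, dtype=int)
    while clock < t0 + t:
        nxt = int(rng.choice(2, p=P[state]))
        clock += rng.exponential(1.0 / rate[state, nxt])
        if t0 <= clock < t0 + t:
            counts[nxt] += 1
        state = nxt
    return counts

samples = np.array([visits(5.0, 20.0) for _ in range(5000)])
print("mean visits:", samples.mean(axis=0), " variance:", samples.var(axis=0))
```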

3.
In this article, a stock-forecasting model is developed to analyze the stock price variation of the Taiwanese company HTC. The main difference from previous articles is that this study uses ten years of recent HTC data to build a Markov transition matrix. Instead of predicting the stock price variation through the traditional approach to the HTC stock problem, we integrate two types of Markov chain that are used in different ways: one is a regular Markov chain, and the other is an absorbing Markov chain. Through the regular Markov chain, we can efficiently obtain important information, such as what happens in the long run and whether the distribution of the states tends to stabilize over time. Next, we use an artificial-variable technique to create an absorbing Markov chain, which provides information about the period of increases before the HTC stock reaches a decreasing state. We thereby tell investors how long the HTC stock is expected to keep rising before its price begins to fall, which is extremely important information to them.
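
Both uses of the chain reduce to standard linear algebra: the stationary distribution of the regular chain answers the long-run question, while the fundamental matrix of the absorbing chain gives the expected number of periods before the first fall. A minimal sketch with an invented 3-state (up/flat/down) matrix — the probabilities are illustrative, not HTC estimates:

```python
import numpy as np

# Illustrative transition matrix over states (up, flat, down); not HTC data.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Regular chain: stationary distribution pi solves pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("long-run state distribution:", pi)

# Absorbing chain: make 'down' absorbing; N = (I - Q)^{-1} gives the expected
# number of visits to transient states before absorption (the first fall).
Q = P[:2, :2]                       # transitions among (up, flat) only
N = np.linalg.inv(np.eye(2) - Q)
print("expected periods before a fall, starting from 'up':", N[0].sum())
```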

4.
Stochastic Models, 2013, 29(2): 229–243
We study an inventory model for perishable products with a critical-number ordering policy under the assumption that demand for the product forms an i.i.d. sequence, so that the state of the system forms a Markov chain. Explicit calculation of the stationary distribution has proved impractical in cases where items have reasonably long lifetimes and for systems with large order-up-to levels. Using the recently developed coupling-from-the-past method, we introduce a technique to estimate the stationary distribution of the Markov chain via perfect simulation. The Markov chain that results from the use of a critical-number policy is particularly amenable to these simulation techniques, despite not being ordered in its initial state, since the recursive equations satisfied by the Markov chain enable us to identify specific demand patterns under which the backward coupling occurs.
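
The chain in question comes from the order-up-to recursion itself. A minimal forward-simulation sketch of one such recursion — a single product with lifetime m, order-up-to level S, FIFO issuing, lost sales, and i.i.d. Poisson demand; the parameter values are illustrative and the paper's perfect-simulation machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
m, S = 3, 10          # lifetime in periods, order-up-to level (illustrative)

def step(ages, demand):
    """One period: order up to S, satisfy demand oldest-first, then age stock.
    ages[i] = units with i+1 periods of remaining life; unmet demand is lost."""
    ages = ages.copy()
    ages[-1] += S - ages.sum()            # critical-number (order-up-to) ordering
    for i in range(m):                    # FIFO: deplete oldest stock first
        used = min(ages[i], demand)
        ages[i] -= used
        demand -= used
    return np.append(ages[1:], 0)         # oldest units expire; the rest age

ages, hist = np.zeros(m, dtype=int), []
for _ in range(20000):
    ages = step(ages, rng.poisson(3))
    hist.append(ages.sum())
print("long-run mean on-hand stock:", np.mean(hist))
```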

5.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This yields an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution of a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
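
The SMC building block inside the particle Gibbs kernel is an ordinary particle filter. A minimal bootstrap filter for a common stochastic volatility parameterization, x_t = phi x_{t-1} + sigma v_t, y_t = beta exp(x_t/2) e_t (the conditional-trajectory bookkeeping of the full particle Gibbs kernel is omitted; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, sigma, beta, T, N = 0.95, 0.3, 0.7, 200, 500

# Simulate data from the SV model.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
y = beta * np.exp(x / 2) * rng.normal(size=T)

# Bootstrap particle filter: propagate from the prior, weight by the likelihood.
particles = rng.normal(0, sigma / np.sqrt(1 - phi**2), size=N)
loglik = 0.0
for t in range(T):
    sd = beta * np.exp(particles / 2)
    logw = -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (y[t] / sd) ** 2
    w = np.exp(logw - logw.max())
    loglik += logw.max() + np.log(w.mean())
    idx = rng.choice(N, size=N, p=w / w.sum())      # multinomial resampling
    particles = phi * particles[idx] + sigma * rng.normal(size=N)
print("particle-filter log-likelihood estimate:", loglik)
```

In particle Gibbs one particle path is clamped to a reference trajectory and a whole trajectory is drawn at the end; it is that kernel whose uniform ergodicity the paper establishes.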

6.
In this article, we introduce a two-state homogeneous Markov chain and define a geometric distribution related to this Markov chain. We also define a negative binomial (NB) distribution, analogous to the classical case, related to the interrupted Markov chain, together with a new binomial distribution related to the same chain. Some characterization properties of the geometric distributions are given, and recursion formulas and probability mass functions are derived for the NB distribution and the new binomial distribution.
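
The Markov-geometric idea can be checked by simulation: the waiting time to the first success in a two-state (failure/success) homogeneous Markov chain reduces to the classical geometric only when the two rows coincide. A minimal sketch with illustrative transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two-state chain over {0: failure, 1: success}; the rows are illustrative.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def waiting_time(start=0):
    """Number of trials until the first visit to the success state."""
    state, n = start, 0
    while True:
        n += 1
        state = int(rng.choice(2, p=P[state]))
        if state == 1:
            return n

w = np.array([waiting_time() for _ in range(100000)])
# With equal rows this matches the classical geometric law; Markov dependence
# changes the mass function, as the article's recursions make precise.
print("mean waiting time:", w.mean(), " P(W = 1):", np.mean(w == 1))
```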

7.
Optimal statistical process control (SPC) requires models of both the in-control and out-of-control process states. Whereas a normal distribution is the generally accepted model for the in-control state, there is doubt as to the existence of reliable models for out-of-control cases. Various process models available in the literature for discrete manufacturing systems (the parts industry) can be treated as bounded discrete-space Markov chains, completely characterized by the original in-control state and a transition matrix for shifts to an out-of-control state. The present work extends these models by using a continuous-state Markov chain that incorporates non-random corrective actions. These actions are realized according to the SPC technique and substantially affect the model. The developed stochastic model yields a Laplace distribution for the process mean. An alternative approach, based on information theory, also results in a Laplace distribution. Real-data tests confirm the applicability of a Laplace distribution for the parts industry and show that the distribution parameter is mainly controlled by the SPC sample size.

8.
We prove the large deviation principle for empirical estimators of stationary distributions of semi-Markov processes with finite state space, irreducible embedded Markov chain, and finite mean sojourn time in each state. We consider on/off Gamma sojourn processes as an illustrative example and, in particular, continuous-time Markov chains with two states. In the latter case, we compare the rate function in this article with the known rate function for another family of empirical estimators of the stationary distribution.

9.
We consider Markov-dependent binary sequences and study various types of success runs (overlapping, non-overlapping, exact, etc.) by examining additive functionals based on state visits and transitions in an appropriate Markov chain. We establish a multivariate Central Limit Theorem for the numbers of these types of runs and obtain its covariance matrix by means of the recurrent potential matrix of the Markov chain. Explicit expressions for the covariance matrix are given in the Bernoulli and a simple Markov-dependent case by expressing the recurrent potential matrix in terms of the stationary distribution and the mean transition times in the chain. We also obtain a multivariate Central Limit Theorem for the joint numbers of non-overlapping runs of various sizes and give its covariance matrix in explicit form for Markov-dependent trials.
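
The run statistics in question are simple to compute from a simulated Markov-dependent binary sequence, and the limiting covariance can be checked empirically against the CLT. A minimal sketch counting overlapping and non-overlapping runs of length k (the chain and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
P = np.array([[0.6, 0.4], [0.3, 0.7]])   # Markov-dependent Bernoulli trials
n, k = 2000, 3

def run_counts():
    x, seq = 0, []
    for _ in range(n):
        x = int(rng.choice(2, p=P[x]))
        seq.append(x)
    s_o = s_n = overlap = nonoverlap = 0
    for b in seq:
        s_o = s_o + 1 if b else 0
        if s_o >= k:
            overlap += 1          # overlapping: every window of k successes counts
        s_n = s_n + 1 if b else 0
        if s_n == k:
            nonoverlap += 1       # non-overlapping: restart after each full run
            s_n = 0
    return overlap, nonoverlap

counts = np.array([run_counts() for _ in range(500)])
print("mean (overlapping, non-overlapping):", counts.mean(axis=0))
# Covariances grow like n; dividing by n estimates the CLT covariance matrix.
print("scaled covariance matrix:\n", np.cov(counts.T) / n)
```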

10.
Summary. We describe a model-based approach to analyse space–time surveillance data on meningococcal disease. Such data typically comprise a number of time series of disease counts, each representing a specific geographical area. We propose a hierarchical formulation, where latent parameters capture temporal, seasonal and spatial trends in disease incidence. We then add, for each area, a hidden Markov model to describe potential additional (autoregressive) effects of the number of cases at the previous time point. Different specifications for the functional form of this autoregressive term, involving the number of cases in the same or in neighbouring areas, are compared. The two states of the Markov chain can be interpreted as representing an 'endemic' and a 'hyperendemic' state. The methodology is applied to a data set of monthly counts of the incidence of meningococcal disease in the 94 départements of France from 1985 to 1997. Inference is carried out by using Markov chain Monte Carlo simulation techniques in a fully Bayesian framework. We emphasize that a central feature of our model is the possibility of calculating, for each region and each time point, the posterior probability of being in a hyperendemic state, adjusted for global spatial and temporal trends, which we believe is of particular public health interest.
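
The quantity of public-health interest — the posterior probability of the hyperendemic state at each time point — can be illustrated with plain forward–backward recursions in a stripped-down two-state Poisson HMM (the paper's hierarchical space–time model, fitted by MCMC, is far richer; every number below is an illustrative assumption):

```python
import numpy as np
from scipy.stats import poisson

# Two hidden states: 0 = endemic (low rate), 1 = hyperendemic (high rate).
P = np.array([[0.95, 0.05], [0.20, 0.80]])
rates = np.array([2.0, 10.0])
counts = np.array([1, 3, 2, 9, 12, 11, 2, 1, 0, 8])   # toy monthly counts

T = len(counts)
em = poisson.pmf(counts[:, None], rates[None, :])     # emission probabilities
alpha, beta = np.zeros((T, 2)), np.ones((T, 2))
alpha[0] = 0.5 * em[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):                                 # forward pass (normalized)
    alpha[t] = em[t] * (alpha[t - 1] @ P)
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):                        # backward pass
    beta[t] = P @ (em[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()
post = alpha * beta
post /= post.sum(axis=1, keepdims=True)
print("P(hyperendemic) per month:", np.round(post[:, 1], 3))
```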

11.
Stochastic Models, 2013, 29(4): 415–437
Abstract

In this paper, we study the total workload process and waiting times in a queueing system with multiple types of customers and a first-come-first-served service discipline. An M/G/1-type Markov chain, which is closely related to the total workload in the queueing system, is constructed. A method is developed for computing the steady-state distribution of that Markov chain. Using that steady-state distribution, the distributions of total workload, batch waiting times, and waiting times of individual types of customers are obtained. Compared to the GI/M/1 and QBD approaches for waiting times and sojourn times in discrete-time queues, the dimension of the matrix blocks involved in the M/G/1 approach can be significantly smaller.

12.
Simulated annealing—moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions—has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously studied tempered transitions, and can be seen as a generalization of a recently proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers.
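
The procedure is short to write down: run a Markov chain through a ladder of intermediate distributions and accumulate the ratio of unnormalized densities as an importance weight. A minimal sketch targeting a bimodal one-dimensional density from a broad Gaussian, with random-walk Metropolis transitions (the schedule and tuning constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def log_f0(x):   # tractable start: N(0, 5^2), unnormalized
    return -0.5 * (x / 5.0) ** 2

def log_f1(x):   # target: mixture of N(-4, 1) and N(4, 1), unnormalized
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

betas = np.linspace(0.0, 1.0, 60)                     # annealing schedule

def ais_run():
    x, logw = rng.normal(0, 5.0), 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw += (b1 - b0) * (log_f1(x) - log_f0(x))   # weight increment
        for _ in range(5):                            # Metropolis moves at level b1
            prop = x + rng.normal(0, 1.0)
            logr = ((1 - b1) * (log_f0(prop) - log_f0(x))
                    + b1 * (log_f1(prop) - log_f1(x)))
            if np.log(rng.random()) < logr:
                x = prop
    return logw

logw = np.array([ais_run() for _ in range(2000)])
# The weights estimate Z1/Z0; here Z0 = 5*sqrt(2*pi) and Z1 = 2*sqrt(2*pi).
logZ = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
print("estimated Z1/Z0:", np.exp(logZ), " true value:", 2.0 / 5.0)
```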

13.
A new Markov chain Monte Carlo method for the Bayesian analysis of finite mixture distributions with an unknown number of components is presented. The sampler is characterized by a state space consisting only of the number of components and the latent allocation variables. Its main advantage is that it can be used, with minimal changes, for mixtures of components from any parametric family, under the assumption that the component parameters can be integrated out of the model analytically. Artificial and real data sets are used to illustrate the method and mixtures of univariate and of multivariate normals are explicitly considered. The problem of label switching, when parameter inference is of interest, is addressed in a post-processing stage.

14.
Hidden Markov models (HMMs) have been shown to be a flexible tool for modelling complex biological processes. However, choosing the number of hidden states remains an open question, and the inclusion of random effects also deserves more research, as it is a recent addition to the fixed-effect HMM in many application fields. We present a Bayesian mixed HMM with an unknown number of hidden states and fixed covariates. The model is fitted using reversible-jump Markov chain Monte Carlo, avoiding the need to select the number of hidden states. We show through simulations that the estimates produced are more precise than those from a fixed-effect HMM and illustrate the model's practical application to the analysis of DNA copy number data, a field where HMMs are widely used.

15.
A new Bayesian state and parameter learning algorithm for multiple target tracking models with image observations is proposed. Specifically, a Markov chain Monte Carlo algorithm is designed to sample from the posterior distribution of the unknown time-varying number of targets, their birth and death times, and their states, as well as the model parameters, which constitutes the complete solution to the specific tracking problem we consider. The conventional approach is to pre-process the images to extract point observations and then perform tracking, i.e. infer the target trajectories. We model the image generation process directly to avoid any potential loss of information when extracting point observations using a pre-processing step that is decoupled from the inference algorithm. Numerical examples show that our algorithm has improved tracking performance over commonly used techniques, for both synthetic examples and real fluorescent microscopy data, especially in the case of dim targets with overlapping illuminated regions.

16.
Abstract

In the Markov chain model of an autoregressive moving average chart, the post-transition states of nonzero transition probabilities are distributed along one-dimensional lines of a constant gradient over the state space. By considering this characteristic, we propose discretizing the state space parallel to the gradient of these one-dimensional lines. We demonstrate that our method substantially reduces the computational cost of the Markov chain approximation for the average run length in two- and three-dimensional state spaces. Also, we investigate the effect of these one-dimensional lines on the computational cost. Lastly, we generalize our method to state spaces larger than three dimensions.
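
The underlying Markov chain approximation for the average run length is standard: discretize the chart statistic into transient states, build the sub-stochastic matrix Q, and solve (I - Q) ARL = 1. A minimal one-dimensional sketch for an EWMA chart on N(0,1) data (the article's contribution — discretizing along the gradient lines to tame multi-dimensional ARMA-chart state spaces — is not reproduced here, and the chart constants are illustrative):

```python
import numpy as np
from scipy.stats import norm

lam, h, ngrid = 0.2, 0.8, 201            # EWMA weight, control limit, grid size
centers = np.linspace(-h, h, ngrid)      # discretized in-control states
width = centers[1] - centers[0]

# Q[i, j] = P(next EWMA lands in cell j | current EWMA at center i), X ~ N(0,1).
# The next value is (1 - lam) * c_i + lam * X, so X must fall in an interval.
lo = (centers[None, :] - width / 2 - (1 - lam) * centers[:, None]) / lam
hi = (centers[None, :] + width / 2 - (1 - lam) * centers[:, None]) / lam
Q = norm.cdf(hi) - norm.cdf(lo)

# Brook-Evans: average run length solves (I - Q) * ARL = 1 on transient states.
arl = np.linalg.solve(np.eye(ngrid) - Q, np.ones(ngrid))
print("in-control ARL starting from 0:", arl[ngrid // 2])
```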

17.
ABSTRACT

This article addresses the problem of repeat detection used in the comparison of significant repeats in sequences. The case of self-overlapping leftmost repeats in large sequences generated by a homogeneous stationary Markov chain has not been treated in the literature. In this work, we are interested in approximating the distribution of the number of self-overlapping leftmost sufficiently long repeats in a homogeneous stationary Markov chain. Using the Chen–Stein method, we show that this distribution can be approximated by the Poisson distribution. Moreover, we show that the approximation extends to the case where the sequences are generated by an m-order Markov chain.
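
The flavor of the result can be checked numerically on a simpler statistic: counts of a fixed rare pattern in a sequence generated by a homogeneous stationary Markov chain are approximately Poisson. A minimal sketch (the word, chain, and lengths are illustrative assumptions; the paper's self-overlapping leftmost repeat statistic is more delicate):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
P = np.array([[0.6, 0.4], [0.5, 0.5]])   # illustrative two-letter Markov chain
word, n = (1, 0, 1, 1, 0, 0, 1), 3000    # a fixed rare pattern (illustrative)

def count_word():
    x, seq = 0, []
    for _ in range(n):
        x = int(rng.choice(2, p=P[x]))
        seq.append(x)
    s = tuple(seq)
    return sum(s[i:i + len(word)] == word for i in range(n - len(word) + 1))

counts = np.array([count_word() for _ in range(1000)])
lam = counts.mean()
k = np.arange(counts.max() + 1)
empirical = np.bincount(counts, minlength=len(k)) / len(counts)
tv = 0.5 * np.abs(empirical - poisson.pmf(k, lam)).sum()   # approximate TV
print("mean count:", lam, " total-variation distance to Poisson:", tv)
```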

18.
The hidden Markov model (HMM) provides an attractive framework for modeling long-term persistence in a variety of applications including pattern recognition. Unlike typical mixture models, hidden Markov states can represent the heterogeneity in data, and the model can be extended to a multivariate case using a hierarchical Bayesian approach. This article provides a nonparametric Bayesian modeling approach to the multi-site HMM by considering stick-breaking priors for each row of an infinite state transition matrix. This extension has many advantages over a parametric HMM; for example, it can provide more flexible information for identifying the structure of the HMM, such as the number of states, than a parametric HMM analysis. We use a simulation example and a real dataset to evaluate the proposed approach.
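
The stick-breaking construction itself is only a few lines: each row of the (truncated) infinite transition matrix gets weights beta_k * prod_{j<k} (1 - beta_j) with beta_k ~ Beta(1, alpha). A minimal sketch (the truncation level and concentration alpha are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)

def stick_breaking_row(alpha, trunc):
    """One draw of stick-breaking weights, truncated to `trunc` atoms."""
    betas = rng.beta(1.0, alpha, size=trunc)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    w = betas * remaining
    return w / w.sum()                    # renormalize the truncated stick

# A truncated "infinite" transition matrix: one stick-breaking draw per row.
alpha, trunc = 2.0, 25
P = np.vstack([stick_breaking_row(alpha, trunc) for _ in range(trunc)])
print("row sums:", P.sum(axis=1)[:3])
print("effective states used (max column weight > 0.01):",
      int((P.max(axis=0) > 0.01).sum()))
```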

19.
Propp and Wilson (Random Structures and Algorithms (1996) 9: 223–252; Journal of Algorithms (1998) 27: 170–217) described a protocol called coupling from the past (CFTP) for exact sampling from the steady-state distribution of a Markov chain Monte Carlo (MCMC) process. In it, a past time is identified from which the paths of coupled Markov chains starting at every possible state would have coalesced into a single value by the present time; this value is then a sample from the steady-state distribution. Unfortunately, producing an exact sample typically requires a large computational effort. We consider the question of how to make efficient use of the sample values that are generated. In particular, we make use of regeneration events (cf. Mykland et al., Journal of the American Statistical Association (1995) 90: 233–241) to aid in the analysis of MCMC runs. In a regeneration event, the chain is in a fixed reference distribution; this allows the chain to be broken up into a series of tours which are independent, or nearly so (though they do not represent draws from the true stationary distribution). In this paper we consider using CFTP and related algorithms to create tours. In some cases their elements are exactly in the stationary distribution; their length may be fixed or random. This allows us to combine the precision of exact sampling with the efficiency of using entire tours. Several algorithms and estimators are proposed and analysed.
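
For a monotone chain, Propp and Wilson's protocol needs only the top and bottom trajectories: drive both from increasingly distant past times with the same randomness, and once they have coalesced by time 0 the common value is an exact stationary draw. A minimal sketch for a lazy reflecting random walk on {0, ..., 10} (the chain is an illustrative choice, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
nstates = 11

def update(x, u):
    """Monotone update: step up if u < 0.5, down otherwise, reflecting at ends."""
    return min(x + 1, nstates - 1) if u < 0.5 else max(x - 1, 0)

def cftp():
    T, us = 1, []
    while True:
        us = list(rng.random(T - len(us))) + us   # extend the SAME randomness back
        lo, hi = 0, nstates - 1                   # bottom and top chains from -T
        for u in us:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:                              # coalesced: exact stationary draw
            return lo
        T *= 2                                    # otherwise look further back

draws = np.array([cftp() for _ in range(5000)])
# This walk's transition matrix is doubly stochastic, so its stationary law
# is uniform on the 11 states; the empirical frequencies should be flat.
print("empirical distribution:", np.bincount(draws, minlength=nstates) / len(draws))
```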

20.
Markov Sampling     
A discrete parameter stochastic process is observed at the epochs of visits to a specified state in an independent two-state Markov chain. It is established that the family of finite-dimensional distributions of the process derived in this way, referred to as Markov sampling, uniquely determines the stochastic structure of the original process. Using this identifiability, it is shown that if the derived process is Markov, then the original process is also Markov, and if the derived process is strictly stationary then so is the original.
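
The sampling scheme can be written directly: observe the original process only at the epochs when an independent two-state Markov chain visits a designated state. A minimal sketch with an AR(1) original process (the process and the sampling chain are illustrative assumptions used to demonstrate the construction, not the paper's identifiability argument):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 50000
# Original process: a stationary AR(1), X_t = 0.8 X_{t-1} + noise.
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

# Independent two-state sampling chain; observe X_t at each visit to state 1.
P = np.array([[0.9, 0.1], [0.3, 0.7]])
s, visit_times = 0, []
for t in range(n):
    s = int(rng.choice(2, p=P[s]))
    if s == 1:
        visit_times.append(t)
derived = x[visit_times]                 # the Markov-sampled process

print("number of observation epochs:", len(derived))
print("lag-1 autocorrelation: original %.3f, derived %.3f"
      % (np.corrcoef(x[:-1], x[1:])[0, 1],
         np.corrcoef(derived[:-1], derived[1:])[0, 1]))
```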
