Similar Articles
20 similar articles found.
1.

This article addresses the detection of significant repeats in sequences. The case of self-overlapping leftmost repeats in long sequences generated by a homogeneous stationary Markov chain has not been treated in the literature. In this work, we are interested in approximating the distribution of the number of self-overlapping leftmost sufficiently long repeats in a homogeneous stationary Markov chain. Using the Chen–Stein method, we show that this distribution is well approximated by a Poisson distribution. Moreover, we show that the approximation extends to the case where the sequences are generated by an mth-order Markov chain.
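The Poisson flavour of this result — counts of a rare pattern in a Markov-generated sequence are approximately Poisson — can be checked by simulation. The sketch below uses entirely hypothetical parameters (a 4-letter chain, a fixed non-self-overlapping 3-letter word, sequence length 500) and compares the empirical count distribution with a Poisson law of matched mean; it illustrates the approximation in a simpler setting than the paper's self-overlapping leftmost repeats.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

# Hypothetical stationary 4-letter Markov chain (uniform stationary law)
P = (np.ones((4, 4)) + np.eye(4)) / 5.0   # diagonal 0.4, off-diagonal 0.2
cum = P.cumsum(axis=1)
word = (0, 1, 2)                          # a non-self-overlapping pattern
n, trials = 500, 4000

# Simulate `trials` chains of length n, vectorised across replications
S = np.empty((trials, n), dtype=int)
S[:, 0] = rng.integers(4, size=trials)
U = rng.random((trials, n))
for t in range(1, n):
    S[:, t] = (U[:, t, None] > cum[S[:, t - 1]]).sum(axis=1)

# Count occurrences of the word in each replication
hits = (S[:, :-2] == word[0]) & (S[:, 1:-1] == word[1]) & (S[:, 2:] == word[2])
counts = hits.sum(axis=1)

# Total-variation distance to a Poisson law with the same mean
lam = counts.mean()
kmax = counts.max()
emp = np.bincount(counts, minlength=kmax + 1) / trials
poi = np.array([exp(-lam) * lam**k / factorial(k) for k in range(kmax + 1)])
tv = 0.5 * np.abs(emp - poi).sum()        # ignoring the negligible Poisson tail
```

The total-variation distance comes out small, which is what a Chen–Stein bound would predict for a pattern with low occurrence probability per position.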

2.
Markov Sampling     
A discrete parameter stochastic process is observed at epochs of visits to a specified state in an independent two-state Markov chain. It is established that the family of finite dimensional distributions of the process derived in this way, referred to as Markov sampling, uniquely determines the stochastic structure of the original process. Using this identifiability, it is shown that if the derived process is Markov, then the original process is also Markov and if the derived process is strictly stationary then so is the original.
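A small simulation makes the construction concrete. In this sketch (all transition probabilities hypothetical), a 3-state chain X is observed only at the visit epochs of state 1 of an independent two-state chain; the one-step transition matrix of the derived process is then the return-time mixture Σ_k P(gap = k) P^k, which the empirical transition frequencies reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Original 3-state Markov chain (hypothetical transition matrix)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
cumP = P.cumsum(axis=1)
a, b = 0.4, 0.5        # sampling chain: P(0 -> 1) = a, P(1 -> 0) = b

T = 200_000
X = np.empty(T, dtype=int); X[0] = 0
Y = np.empty(T, dtype=int); Y[0] = 1
u = rng.random((2, T))
for t in range(1, T):
    X[t] = (u[0, t] > cumP[X[t - 1]]).sum()
    Y[t] = int(u[1, t] < (a if Y[t - 1] == 0 else 1 - b))

Z = X[Y == 1]                      # the Markov-sampled process

# Empirical one-step transition matrix of Z
C = np.zeros((3, 3))
np.add.at(C, (Z[:-1], Z[1:]), 1)
Q_emp = C / C.sum(axis=1, keepdims=True)

# Theory: mixture over the return-time law of state 1 in the sampling chain
Q_th = (1 - b) * P                 # gap of length 1: stay in state 1
Pk = P.copy()
for k in range(2, 200):            # gap k: 1 -> 0, stay at 0 for k-2 steps, 0 -> 1
    Pk = Pk @ P
    Q_th += b * (1 - a) ** (k - 2) * a * Pk
```

The identifiability result says more than this: the whole family of finite dimensional distributions of Z pins down the law of X, not just its one-step kernel.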

3.
4.
5.
We prove the large deviation principle for empirical estimators of stationary distributions of semi-Markov processes with finite state space, irreducible embedded Markov chain, and finite mean sojourn time in each state. We consider on/off Gamma sojourn processes as an illustrative example, and, in particular, continuous time Markov chains with two states. In the second case, we compare the rate function in this article with the known rate function concerning another family of empirical estimators of the stationary distribution.

6.
We consider Markov-dependent binary sequences and study various types of success runs (overlapping, non-overlapping, exact, etc.) by examining additive functionals based on state visits and transitions in an appropriate Markov chain. We establish a multivariate Central Limit Theorem for the number of these types of runs and obtain its covariance matrix by means of the recurrent potential matrix of the Markov chain. Explicit expressions for the covariance matrix are given in the Bernoulli and a simple Markov-dependent case by expressing the recurrent potential matrix in terms of the stationary distribution and the mean transition times in the chain. We also obtain a multivariate Central Limit Theorem for the joint number of non-overlapping runs of various sizes and give its covariance matrix in explicit form for Markov dependent trials.
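The run statistics themselves are easy to compute. The sketch below (hypothetical transition probabilities) simulates Markov-dependent Bernoulli trials and counts overlapping and non-overlapping success runs of length k; the overlapping count concentrates around (n − k + 1)·π₁·p₁₁^(k−1), with π₁ the stationary success probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Markov-dependent trials: P(1 | previous 1) = p11, P(1 | previous 0) = p01
p11, p01 = 0.7, 0.4
n, k = 100_000, 3
x = np.empty(n, dtype=int); x[0] = 0
u = rng.random(n)
for t in range(1, n):
    x[t] = int(u[t] < (p11 if x[t - 1] else p01))

def overlapping_runs(x, k):
    """Number of windows of k consecutive successes (windows may overlap)."""
    w = np.lib.stride_tricks.sliding_window_view(x, k)
    return int(w.all(axis=1).sum())

def nonoverlapping_runs(x, k):
    """Greedy left-to-right count: restart the streak after each completed run."""
    count = streak = 0
    for b in x:
        streak = streak + 1 if b else 0
        if streak == k:
            count += 1
            streak = 0
    return count

pi1 = p01 / (1 - p11 + p01)                    # stationary success probability
expected_ov = (n - k + 1) * pi1 * p11 ** (k - 1)
N_ov, N_no = overlapping_runs(x, k), nonoverlapping_runs(x, k)
```

The CLT of the paper concerns the joint fluctuations of such counts around these means; the covariance matrix is where the recurrent potential matrix enters.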

7.
We investigate the extremal clustering behaviour of stationary time series that possess two regimes, where the switch is governed by a hidden two-state Markov chain. We also suppose that the process is conditionally Markovian in each latent regime. We prove under general assumptions that above high thresholds these models behave approximately as a random walk in one (called dominant) regime and as a stationary autoregression in the other (dominated) regime. Based on this observation, we propose an estimation and simulation scheme to analyse the extremal dependence structure of such models, taking into account only observations above high thresholds. The properties of the estimation method are also investigated. Finally, as an application, we fit a model to high-level exceedances of water discharge data, simulate extremal events from the fitted model, and show that the (model-based) flood peak, flood duration and flood volume distributions match their observed counterparts.

8.
We consider here ergodic homogeneous Markov chains with countable state spaces. The entropy rate of the chain is an explicit function of its transition and stationary distributions. We construct estimators for this entropy rate and for the entropy of the stationary distribution of the chain, in the parametric and nonparametric cases. We study estimation from a single long sample and from many independent samples of fixed length. In the parametric case, the estimators are deduced by plug-in from the maximum likelihood estimator of the parameter. In the nonparametric case, the estimators are deduced by plug-in from the empirical estimators of the transition and stationary distributions. They are proven to have good asymptotic properties.
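The nonparametric plug-in estimator is short enough to sketch in full. For a finite-state chain the entropy rate is H = −Σ_i π_i Σ_j P_ij log P_ij; below (hypothetical 3-state chain) it is estimated by plugging in the empirical transition matrix and the empirical visit frequencies from one long sample path.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ergodic chain on 3 states
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

def stationary(P):
    """Solve pi P = pi, sum(pi) = 1, as an overdetermined linear system."""
    k = len(P)
    A = np.vstack([P.T - np.eye(k), np.ones(k)])
    b = np.zeros(k + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def entropy_rate(P, pi):
    """H = -sum_i pi_i sum_j P_ij log P_ij (nats), with 0 log 0 = 0."""
    t = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi @ t.sum(axis=1)))

pi = stationary(P)
H = entropy_rate(P, pi)

# Nonparametric plug-in estimation from a single long sample path
n = 200_000
cum = P.cumsum(axis=1)
x = np.empty(n, dtype=int); x[0] = 0
u = rng.random(n)
for t in range(1, n):
    x[t] = (u[t] > cum[x[t - 1]]).sum()

C = np.zeros((3, 3))
np.add.at(C, (x[:-1], x[1:]), 1)
P_hat = C / C.sum(axis=1, keepdims=True)       # empirical transitions
pi_hat = np.bincount(x, minlength=3) / n       # empirical stationary law
H_hat = entropy_rate(P_hat, pi_hat)
```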

9.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we will carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.

10.
This article concerns variance estimation in the central limit theorem for finite recurrent Markov chains. The associated variance is calculated in terms of the transition matrix of the Markov chain. We prove the equivalence of different matrix forms representing this variance. The maximum likelihood estimator for this variance is constructed and it is proved that it is strongly consistent and asymptotically normal. The main part of our analysis consists in presenting closed matrix forms for this variance. Additionally, we prove the asymptotic equivalence between the empirical and the maximum likelihood estimation (MLE) for the stationary distribution.
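One standard closed matrix form for this asymptotic variance uses the fundamental matrix Z = (I − P + 1πᵀ)⁻¹: for a function f centred under π, σ² = ⟨f, (2Z − I)f⟩_π. The sketch below (hypothetical 3-state chain, not the paper's estimator) evaluates this formula and checks it against the Monte Carlo variance of the path average.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical reversible 3-state chain with stationary law (1/4, 1/2, 1/4)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
f = np.array([1.0, 0.0, -1.0])            # already centred: pi @ f == 0

# Asymptotic variance via the fundamental matrix Z = (I - P + 1 pi^T)^{-1}
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))
sigma2 = float(pi @ (f * (2 * Z @ f - f)))

# Monte Carlo check: n * Var(path average of f) -> sigma^2
R, n = 2000, 2000
cum = P.cumsum(axis=1)
S = rng.choice(3, size=R, p=pi)            # start in stationarity
acc = f[S].astype(float)
for t in range(1, n):
    S = (rng.random((R, 1)) > cum[S]).sum(axis=1)
    acc += f[S]
sigma2_mc = float(n * (acc / n).var())
```

For this particular chain f happens to be a right eigenvector of P with eigenvalue 1/2, so the formula reduces to the familiar AR(1)-style factor (1 + λ)/(1 − λ) times the stationary variance of f.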

11.
Summary.  Likelihood inference for discretely observed Markov jump processes with finite state space is investigated. The existence and uniqueness of the maximum likelihood estimator of the intensity matrix are examined. This topic is closely related to the imbedding problem for Markov chains. It is demonstrated that the maximum likelihood estimator can be found either by the EM algorithm or by a Markov chain Monte Carlo procedure. When the maximum likelihood estimator does not exist, an estimator can be obtained by using a penalized likelihood function or by the Markov chain Monte Carlo procedure with a suitable prior. The methodology and its implementation are illustrated by examples and simulation studies.
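The paper's EM and MCMC machinery is beyond a few lines, but the connection to the imbedding problem can be illustrated with the simpler matrix-logarithm estimator Q̂ = log(P̂)/Δ (not the paper's method; all numbers hypothetical): it is a valid intensity matrix exactly when the estimated Δ-step transition matrix is imbeddable in a continuous-time chain, which holds here because Δ is small relative to the jump rates.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)

# Hypothetical intensity matrix of a 3-state Markov jump process
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.2, -0.4,  0.2],
              [ 0.1,  0.4, -0.5]])
dt = 0.5
P = expm(Q * dt)                # exact transition matrix of the Delta-skeleton

# A discretely observed path is just a discrete chain with transition matrix P
n = 200_000
cum = P.cumsum(axis=1)
x = np.empty(n, dtype=int); x[0] = 0
u = rng.random(n)
for t in range(1, n):
    x[t] = (u[t] > cum[x[t - 1]]).sum()

C = np.zeros((3, 3))
np.add.at(C, (x[:-1], x[1:]), 1)
P_hat = C / C.sum(axis=1, keepdims=True)

# Matrix-logarithm estimator; a generator (zero row sums, nonnegative
# off-diagonals) only when the imbedding problem for P_hat is solvable
Q_hat = logm(P_hat).real / dt
```

When the data are sparse or Δ is large, log(P̂) can fail to be a generator, which is precisely the situation where the penalized-likelihood or Bayesian fixes described in the abstract are needed.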

12.
We devise simulation/regression numerical schemes for pricing the CVA on CDO tranches, where CVA stands for Credit Valuation Adjustment, or price correction accounting for the defaultability of a counterparty in an OTC derivatives transaction. This is done in the setup of a continuous-time Markov chain model of default times, in which dependence between credit names is represented by the possibility of simultaneous defaults. The main idea of this article is to perform the nonlinear regressions which are used for computing conditional expectations, in the time variable for a given state of the model, rather than in the space variables at a given time in diffusive setups. This idea is formalized as a lemma which is valid in any continuous-time Markov chain model. It is then implemented on the targeted application of CVA computations on CDO tranches.

13.
In this article, a stock-forecasting model is developed to analyze the price variation of the stock of the Taiwanese company HTC. The main difference from previous articles is that this study uses ten years of recent HTC data to build a Markov transition matrix. Instead of predicting the stock price variation through the traditional approach, we integrate two types of Markov chain that are used in different ways: a regular Markov chain and an absorbing Markov chain. Through the regular Markov chain, we efficiently obtain information such as the long-run behaviour of the chain and whether the distribution of the states stabilizes over time. Next, we use an artificial-variable technique to create an absorbing Markov chain, which provides information about the length of the increasing period before the stock reaches a decreasing state. We thus provide investors with an estimate of how long the HTC stock will keep increasing before its price begins to fall, which is extremely important information to them.
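The two chain types can be sketched with a toy three-state daily-movement chain (the numbers below are hypothetical, not estimates from HTC data): the regular chain yields the long-run state distribution, and making the "down" state absorbing yields, via the fundamental matrix, the expected number of steps before the first decline.

```python
import numpy as np

# Hypothetical daily price-movement chain: 0 = up, 1 = flat, 2 = down
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.3, 0.3, 0.4]])

# Regular-chain view: long-run distribution of daily movements
pi = np.linalg.matrix_power(P, 100)[0]

# Absorbing-chain view: make "down" absorbing and ask how long the stock
# keeps rising or holding before its first decline
Q = P[:2, :2]                          # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)       # fundamental matrix N = (I - Q)^{-1}
steps = N @ np.ones(2)                 # expected time to absorption per start state
```

Here the row sums of N give roughly 4.3 and 3.8 expected trading days before the first "down" day, depending on whether today was an up or a flat day.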

14.
This paper considers the computation of the conditional stationary distribution in Markov chains of level-dependent M/G/1-type, given that the level is not greater than a predefined threshold. This problem has been studied recently and a computational algorithm is proposed under the assumption that matrices representing downward jumps are nonsingular. We first show that this assumption can be eliminated in a general setting of Markov chains of level-dependent G/G/1-type. Next we develop a computational algorithm for the conditional stationary distribution in Markov chains of level-dependent M/G/1-type, by modifying the above-mentioned algorithm slightly. In principle, our algorithm is applicable to any Markov chain of level-dependent M/G/1-type, if the Markov chain is irreducible and positive-recurrent. Furthermore, as an input to the algorithm, we can set an error bound for the computed conditional distribution, which is a notable feature of our algorithm. Some numerical examples are also provided.

15.
When some states of a Markov chain are aggregated (or lumped) and the new process, with lumped states, inherits the Markov property, the original chain is said to be lumpable. We discuss the notion of lumpability for discrete hidden Markov models (DHMMs) and we explain why, in general, testing this hypothesis leads to non-standard problems. Nevertheless, we present a case where lumpability in DHMMs is a regular problem of comparing nested models. Finally, some simulation results assessing the performance of the proposed test and an application to two real data sets are given.
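For an observed (non-hidden) chain, strong lumpability is a simple algebraic condition due to Kemeny and Snell: within each block of the partition, every state must place the same total probability mass on each block. A minimal checker (hypothetical 4-state example):

```python
import numpy as np

def is_lumpable(P, blocks, tol=1e-9):
    """Kemeny-Snell strong lumpability: inside each block, every state must
    put the same total transition probability on each block of the partition."""
    for A in blocks:
        for B in blocks:
            mass = P[np.ix_(A, B)].sum(axis=1)
            if not np.allclose(mass, mass[0], atol=tol):
                return False
    return True

# A 4-state chain constructed to be lumpable w.r.t. the partition {0,1} | {2,3}
P = np.array([[0.10, 0.30, 0.20, 0.40],
              [0.20, 0.20, 0.50, 0.10],
              [0.35, 0.35, 0.20, 0.10],
              [0.40, 0.30, 0.25, 0.05]])
blocks = [[0, 1], [2, 3]]

# Shifting mass across the block boundary in one row breaks the condition
P_bad = P.copy()
P_bad[0] = [0.10, 0.40, 0.10, 0.40]    # row still sums to 1
```

The difficulty the abstract refers to is that in a DHMM the chain is latent, so this clean condition must be tested through the observed process, which is what makes the problem non-standard in general.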

16.
Stochastic Models, 2013, 29(4): 429-448
This paper considers subexponential asymptotics of the tail distributions of waiting times in stationary work-conserving single-server queues with multiple Markovian arrival streams, where all arrival streams are modulated by the underlying Markov chain with finite states and service time distributions may differ for different arrival streams. Under the assumption that the equilibrium distribution of the overall (i.e., customer-average) service time distribution is subexponential, a subexponential asymptotic formula is first shown for the virtual waiting time distribution, using a closed formula recently found by the author. Further when customers are served on a FIFO basis, the actual waiting time and sojourn time distributions of customers from respective arrival streams are shown to have the same asymptotics as the virtual waiting time distribution.

17.
Summary. We describe a model-based approach to analyse space–time surveillance data on meningococcal disease. Such data typically comprise a number of time series of disease counts, each representing a specific geographical area. We propose a hierarchical formulation, where latent parameters capture temporal, seasonal and spatial trends in disease incidence. We then add—for each area—a hidden Markov model to describe potential additional (autoregressive) effects of the number of cases at the previous time point. Different specifications for the functional form of this autoregressive term are compared which involve the number of cases in the same or in neighbouring areas. The two states of the Markov chain can be interpreted as representing an 'endemic' and a 'hyperendemic' state. The methodology is applied to a data set of monthly counts of the incidence of meningococcal disease in the 94 départements of France from 1985 to 1997. Inference is carried out by using Markov chain Monte Carlo simulation techniques in a fully Bayesian framework. We emphasize that a central feature of our model is the possibility of calculating—for each region and each time point—the posterior probability of being in a hyperendemic state, adjusted for global spatial and temporal trends, which we believe is of particular public health interest.

18.
We propose a new class of state space models for longitudinal discrete response data where the observation equation is specified in an additive form involving both deterministic and random linear predictors. These models allow us to explicitly address the effects of trend, seasonal or other time-varying covariates while preserving the power of state space models in modeling serial dependence in the data. We develop a Markov chain Monte Carlo algorithm to carry out statistical inference for models with binary and binomial responses, in which we invoke de Jong and Shephard’s (Biometrika 82(2):339–350, 1995) simulation smoother to establish an efficient sampling procedure for the state variables. To quantify and control the sensitivity of posteriors on the priors of variance parameters, we add a signal-to-noise ratio type parameter in the specification of these priors. Finally, we illustrate the applicability of the proposed state space mixed models for longitudinal binomial response data in both simulation studies and data examples.

19.
The following paper is dedicated to a special class of stationary Markov chains whose transition probabilities are constructed from bivariate distribution functions of the Morgenstern type. These Markov chains are defined by their stationary distribution and a parameter a controlling the correlation between succeeding values of the chain. Relevant properties of the Markov chains are discussed, and several estimators of the parameter a are studied; the maximum likelihood estimator is compared with a simple estimator.
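Such a chain is easy to simulate when the stationary distribution is Uniform(0,1): the Morgenstern (FGM) copula C(u,v) = uv[1 + a(1−u)(1−v)] has conditional CDF F(v|u) = v + a(1−2u)v(1−v), a quadratic that can be inverted in closed form. The sketch below (hypothetical a = 0.8) draws each value from the conditional law given the previous one; for uniform marginals the lag-one correlation of the chain is a/3.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 0.8            # |alpha| <= 1 controls the serial dependence

def fgm_next(u, w, alpha):
    """Draw V | U = u from the FGM copula by inverting the conditional CDF
    F(v|u) = v + c*v*(1 - v), with c = alpha*(1 - 2u), at the uniform draw w."""
    c = alpha * (1 - 2 * u)
    if abs(c) < 1e-12:
        return w
    # Root of c*v^2 - (1 + c)*v + w = 0 lying in [0, 1]
    return ((1 + c) - np.sqrt((1 + c) ** 2 - 4 * c * w)) / (2 * c)

n = 200_000
u = np.empty(n)
u[0] = rng.random()
w = rng.random(n)
for t in range(1, n):
    u[t] = fgm_next(u[t - 1], w[t], alpha)

rho = np.corrcoef(u[:-1], u[1:])[0, 1]   # theory: alpha/3 for uniform marginals
```

The bounded correlation |a|/3 ≤ 1/3 is a well-known limitation of the FGM family, which is one reason estimating a carefully matters in this model class.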

20.
Likelihood computation in spatial statistics requires accurate and efficient calculation of the normalizing constant (i.e. partition function) of the Gibbs distribution of the model. Two available methods to calculate the normalizing constant by Markov chain Monte Carlo methods are compared by simulation experiments for an Ising model, a Gaussian Markov field model and a pairwise interaction point field model.
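Why the normalizing constant is the bottleneck is easiest to see on a tiny Ising model, where Z can still be computed by brute-force enumeration. The sketch below (3×3 lattice, free boundary, hypothetical inverse temperature; a naive simple-sampling estimator rather than the MCMC methods compared in the paper) enumerates all 2⁹ configurations and checks a Monte Carlo estimate Z = 2ᴺ·E_uniform[exp(−βH)] against the exact value.

```python
import numpy as np
from itertools import product

beta = 0.3
L = 3                               # 3x3 lattice, free boundary conditions

def energy(s):
    """Nearest-neighbour Ising energy H(s) = -sum_<ij> s_i s_j."""
    s = np.asarray(s).reshape(L, L)
    return -(s[:, :-1] * s[:, 1:]).sum() - (s[:-1, :] * s[1:, :]).sum()

# Exact partition function by enumerating all 2^(L*L) spin configurations
Z_exact = sum(np.exp(-beta * energy(c)) for c in product([-1, 1], repeat=L * L))

# Simple-sampling estimate: Z = 2^N * E_uniform[ exp(-beta * H) ]
rng = np.random.default_rng(7)
M = 50_000
samples = rng.choice([-1, 1], size=(M, L * L))
est = 2 ** (L * L) * np.mean([np.exp(-beta * energy(s)) for s in samples])
```

Enumeration scales as 2^N and simple sampling degrades quickly at low temperature, which is exactly why the MCMC-based estimators compared in the paper are needed for realistic lattice sizes.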
