Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Abstract

In this article, a finite source discrete-time queueing system is modeled as a discrete-time homogeneous Markov system with finite state size capacities (HMS/c) and transition priorities. This Markov system comprises three states. The first state of the HMS/c corresponds to the source and the second to the state with the servers. The second state has a finite capacity which corresponds to the number of servers. Members of the system which cannot enter the second state, due to its finite capacity, enter the third state, which represents the system's queue. In order to examine the variability of the state sizes, recursive formulae for their factorial and mixed factorial moments are derived in matrix form. As a consequence, the probability mass function of each state size can be evaluated. The expected time in queue is also computed by means of the interval transition probabilities. The theoretical results are illustrated by a numerical example.

2.
In this article, the M/M/k/N/N queue is modeled as a continuous-time homogeneous Markov system with finite state size capacity (HMS/cs). In order to examine the behavior of the queue, a continuous-time homogeneous Markov system (HMS) consisting of two states is used. The first state of this HMS corresponds to the source and the second to the state with the servers. The second state has a finite capacity which corresponds to the number of servers. Members of the system which cannot enter the second state, due to its finite capacity, enter the buffer state, which represents the system's queue. In order to examine the variability of the state sizes, formulae for their factorial and mixed factorial moments are derived in matrix form. As a consequence, the probability mass function of each state size can be evaluated for any t ∈ ℝ+. The theoretical results are illustrated by a numerical example.
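The stationary distribution of the M/M/k/N/N queue above also follows from the classical birth-death product formula; the sketch below (a minimal illustration, not the authors' HMS/cs construction; the function name and parameter values are ours) computes it directly.

```python
import numpy as np

def mmkNN_stationary(lam, mu, k, N):
    """Stationary distribution of the M/M/k/N/N (machine-repair) queue,
    viewed as a birth-death chain on n = 0..N customers at the station:
    birth rate from n is (N - n)*lam, death rate is min(n, k)*mu."""
    p = np.ones(N + 1)
    for n in range(1, N + 1):
        # Birth-death balance: p[n] = p[n-1] * birth(n-1) / death(n)
        p[n] = p[n - 1] * (N - n + 1) * lam / (min(n, k) * mu)
    return p / p.sum()

pi_queue = mmkNN_stationary(lam=0.5, mu=1.0, k=2, N=4)
```

With N = 1, k = 1 and lam = mu, the two states are equally likely, which gives a quick sanity check on the recursion.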

3.
We prove the large deviation principle for empirical estimators of stationary distributions of semi-Markov processes with finite state space, irreducible embedded Markov chain, and finite mean sojourn time in each state. We consider on/off Gamma sojourn processes as an illustrative example, and, in particular, continuous time Markov chains with two states. In the second case, we compare the rate function in this article with the known rate function concerning another family of empirical estimators of the stationary distribution.

4.
We discuss how the ideas of producing perfect simulations based on coupling from the past for finite state space models naturally extend to multivariate distributions with infinite or uncountable state spaces, such as auto-gamma, auto-Poisson and auto-negative-binomial models, using Gibbs sampling in combination with sandwiching methods originally introduced for perfect simulation of point processes.
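For the finite, monotone setting that the abstract extends, the Propp-Wilson construction is short enough to sketch; the code below (illustrative names; a toy reflecting random walk stands in for the models discussed) runs top and bottom trajectories from the past with shared randomness until they coalesce.

```python
import random

def cftp_monotone(n_states, update, seed=0):
    """Propp-Wilson coupling from the past for a monotone update rule on
    {0, ..., n_states-1}: run the top and bottom chains from time -T with
    the SAME random inputs; if they coalesce by time 0, the common value
    is an exact draw from the stationary distribution."""
    rng = random.Random(seed)
    used = []  # random inputs for times -1, -2, ... (reused as T grows)
    T = 1
    while True:
        while len(used) < T:
            used.append(rng.random())
        lo, hi = 0, n_states - 1
        for t in range(T - 1, -1, -1):  # from time -T forward to time 0
            lo, hi = update(lo, used[t]), update(hi, used[t])
        if lo == hi:
            return lo
        T *= 2  # not coalesced: restart from further in the past

def walk_update(x, u):
    """Monotone update for a lazy reflecting walk on {0, ..., 4}."""
    if u < 0.4:
        return min(x + 1, 4)
    if u < 0.8:
        return max(x - 1, 0)
    return x

sample = cftp_monotone(5, walk_update)
```

Monotonicity of the update is what lets the two extreme trajectories sandwich all the others, which is exactly the property the sandwiching methods for uncountable spaces have to recover.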

5.
A Markovian Decision Process (MDP) is considered in which neither the state nor the associated cost may be observed at any observation point. It is shown that for a particular class of MDPs with uncountable state space and finite action space, the Howard Policy Improvement Routine (HPIR) cannot be used for finding an optimal policy. Some immediate consequences of this model are presented.

6.
Matrix-analytic Models and their Analysis
We survey phase-type distributions and Markovian point processes, aspects of how to use such models in applied probability calculations, and how to fit them to observed data. A phase-type distribution is defined as the time to absorption in a finite continuous-time Markov process with one absorbing state. This class of distributions is dense and contains many standard examples, such as all combinations of exponentials in series/parallel. A Markovian point process is governed by a finite continuous-time Markov process (typically ergodic), such that points are generated at a Poisson intensity depending on the underlying state and at transitions; a main special case is the Markov-modulated Poisson process. In both cases, the analytic formulas typically contain matrix-exponentials, and the matrix formalism carries over when the models are used in applied probability calculations, as in problems in renewal theory, random walks and queueing. The statistical analysis is typically based upon the EM algorithm, viewing the whole sample path of the background Markov process as the latent variable.
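The phase-type definition above yields the distribution function in one matrix-exponential line: if alpha is the initial row vector over the transient states and S the corresponding sub-generator, then F(t) = 1 − alpha·exp(St)·1. A minimal sketch (illustrative names; checked against the Erlang-2 closed form):

```python
import numpy as np
from scipy.linalg import expm

def ph_cdf(alpha, S, t):
    """CDF of a phase-type distribution: time to absorption in a
    continuous-time Markov chain with sub-generator S over the transient
    states and initial distribution alpha."""
    return 1.0 - alpha @ expm(S * t) @ np.ones(len(alpha))

# Erlang-2 with rate 1: two exponential phases in series,
# closed form F(t) = 1 - exp(-t) * (1 + t).
alpha = np.array([1.0, 0.0])
S = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
```

The series/parallel combinations mentioned in the abstract correspond to different block structures of S.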

7.
Anderson and Goodman (1957) obtained likelihood ratio tests and chi-square tests for testing hypotheses about the order of discrete-time finite Markov chains. Along similar lines, we obtain likelihood ratio tests and (asymptotic) chi-square tests for testing hypotheses about the order of continuous-time Markov chains with finite state space.

8.
We make available simple and accurate closed-form approximations to the marginal distribution of Markov-switching vector auto-regressive (MS VAR) processes. The approximation is built upon the property of MS VAR processes of being Gaussian conditionally on any semi-infinite sequence of the latent state. Truncating the semi-infinite sequence and averaging over all possible sequences of that finite length yields a mixture of normals that converges to the unknown marginal distribution as the sequence length increases. Numerical experiments confirm the viability of the approach which extends to the closely related class of MS state space models. Several applications are discussed.

9.
An asymptotic distribution theory for the state estimate from a Kalman filter in the absence of the usual Gaussian assumption is presented. It is found that the stability properties of the state transition matrix play a key role in the distribution theory. Specifically, when the state equation is neutrally stable (i.e., borderline stable-unstable) the state estimate is asymptotically normal when the random terms in the model have arbitrary distributions. This case includes the popular random walk state equation. However, when the state equation is either stable or unstable, at least some of the random terms in the model must be normally distributed beyond some finite time before the state estimate is asymptotically normal.
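For the random-walk state equation singled out above, the Kalman recursions are one-dimensional and easy to state; the sketch below (illustrative names; the recursions themselves need no Gaussian assumption, which is the point of the asymptotic theory) filters a scalar series.

```python
import numpy as np

def kalman_random_walk(y, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the random-walk state equation
    x_t = x_{t-1} + w_t (Var w = q), observed as y_t = x_t + v_t
    (Var v = r).  Transition coefficient 1: the neutrally stable case."""
    x, p = x0, p0
    estimates = []
    for obs in y:
        p = p + q              # predict: variance grows by q
        k = p / (p + r)        # Kalman gain
        x = x + k * (obs - x)  # update toward the observation
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates), p

y = np.array([1.0, 1.2, 0.9, 1.1])
est, p_final = kalman_random_walk(y, q=0.01, r=0.1)
```

As r tends to 0 the gain tends to 1 and the filter simply tracks the observations, a convenient sanity check.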

10.
We consider the problem of estimating the maximum a posteriori (MAP) state sequence for a finite state and finite emission alphabet hidden Markov model (HMM) in the Bayesian setup, where both emission and transition matrices have Dirichlet priors. We study a training set consisting of thousands of protein alignment pairs. The training data is used to set the prior hyperparameters for Bayesian MAP segmentation. Since the Viterbi algorithm is no longer applicable, there is no simple procedure to find the MAP path, and several iterative algorithms are considered and compared. The main goal of the paper is to test the Bayesian setup against the frequentist one, where the parameters of the HMM are estimated using the training data.
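For the frequentist baseline the paper compares against, fixed parameters make the MAP path computable by the standard Viterbi recursion; a minimal log-space sketch on a toy two-state HMM (all names and numbers illustrative):

```python
import numpy as np

def viterbi(obs, start, A, B):
    """MAP state path for an HMM with fixed parameters: initial
    distribution start, transition matrix A, emission matrix B.
    Log space avoids underflow on long sequences."""
    T = len(obs)
    logd = np.log(start) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(start)), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)  # scores[i, j]: best path via i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):  # backtrack through the argmax table
        path.append(int(back[t][path[-1]]))
    return path[::-1]

start = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
path = viterbi([0, 0, 1, 1], start, A, B)  # sticky states track the symbols
```

Under Dirichlet priors on A and B this recursion no longer applies as-is, which is why the abstract turns to iterative algorithms.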

11.
A method for efficiently calculating exact marginal, conditional and joint distributions for change points defined by general finite state Hidden Markov Models is proposed. The distributions are not subject to any approximation or sampling error once parameters of the model have been estimated. It is shown that, in contrast to sampling methods, very little computation is needed. The method provides probabilities associated with change points within an interval, as well as at specific points.

12.
The Sequential Probability Ratio Test is applied to test two simple hypotheses about the transition probability matrix of an irreducible homogeneous Markov chain with finite state space. An analogue (14) of Wald's Fundamental Identity, the Operating Characteristic Function (20-21) and the Average Sample Number (22-23) are given. These statements generalize Wald's classical results to the Markov chain setting, under additional conditions on the eigenvalues of the transition probability matrix.

13.
14.
We develop new results about a sieve methodology for the estimation of minimal state spaces and probability laws in the class of stationary processes defined on finite categorical spaces. Using a sieve approximation with variable length Markov chains of increasing order, we show that an adapted version of the Context algorithm yields asymptotically correct estimates for the minimal state space and for the underlying probability distribution. As a side product, the method of sieves yields a nice graphical tree representation for the potentially infinite dimensional minimal state space of the data generating process, which is very useful for exploration of the memory.

15.
This article is devoted to the study of stochastic Liénard equations with random switching. The motivation for our study stems from the modeling of complex systems in which both continuous dynamics and discrete events are present. The continuous component is a solution of a stochastic Liénard equation and the discrete component is a Markov chain with a finite state space that is large. A distinct feature is that the processes under consideration are time inhomogeneous. Based on the idea of near decomposability and aggregation, the state space of the switching process can be viewed as "nearly decomposable" into l subspaces connected by weak interactions. Using the idea of aggregation, we lump the states in each subspace into a single state. Considering the pair process (continuous state, discrete state), under suitable conditions, we derive a weak convergence result by means of a martingale problem formulation. The significance of the limit process is that it is substantially simpler than the original system. Thus, it can be used in approximation and computation to reduce the computational complexity.

16.
Markov Sampling     
A discrete parameter stochastic process is observed at epochs of visits to a specified state in an independent two-state Markov chain. It is established that the family of finite dimensional distributions of the process derived in this way, referred to as Markov sampling, uniquely determines the stochastic structure of the original process. Using this identifiability, it is shown that if the derived process is Markov, then the original process is also Markov and if the derived process is strictly stationary then so is the original.

17.
ABSTRACT

To accurately describe the performance of repairable systems operating under alternative environments, for example, mild/harsh, working/idling, maximum/minimum level demand, etc., a semi-Markov process with a finite state space and two different semi-Markov kernels is introduced. The set of system states regarded as acceptable might depend on the environment. Two important reliability indices, the availability and the time to the first system failure, are obtained via Markov renewal theory, transform and matrix methods. Results and numerical examples are also provided for two special cases: (1) when sojourn times under the alternative environments are constants and (2) when sojourn times under the environments have exponential distributions.

18.
Consider a Markov chain with finite state space {0, 1, …, d}. We give the generating functions (or Laplace transforms) of the absorption (passage) times in the following two situations: (1) the absorption time of state d when the chain starts from any state i and is absorbed at state d; (2) the passage time to any state i when the chain starts from the stationary distribution, assuming the chain is time-reversible and ergodic. An example shows that this approach is more convenient than existing methods; in particular, the expectation of the absorption time can be calculated directly.
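Alongside the transform method above, the expectation of the absorbing time also has a direct first-step-analysis computation (a standard sketch, not the paper's generating-function approach; names are illustrative): delete the target row and column and solve a linear system.

```python
import numpy as np

def expected_absorption_time(P, target):
    """Expected number of steps for a discrete-time chain with transition
    matrix P to first reach `target` from each other state: solve
    (I - Q) h = 1, where Q is P restricted to the non-target states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return dict(zip(keep, h))

# Birth-death chain on {0, 1, 2}, absorbing at state 2.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
h = expected_absorption_time(P, target=2)
# First-step analysis gives h[0] = 6 and h[1] = 4 for this chain.
```

The same linear system is what the generating-function expectation reduces to after differentiating at 1, which is why the transform route recovers these numbers.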

19.
This article considers likelihood methods for estimating the causal effect of treatment assignment for a two-armed randomized trial assuming all-or-none treatment noncompliance and allowing for subsequent nonresponse. We first derive the observed data likelihood function as a closed form expression of the parameter given the observed data, where both response and compliance state are treated as variables with missing values. Then we describe an iterative procedure which maximizes the observed data likelihood function directly to compute a maximum likelihood estimator (MLE) of the causal effect of treatment assignment. Closed form expressions at each iterative step are provided. Finally, we compare the MLE with an alternative estimator where the probability distribution of the compliance state is estimated independently of the response and its missingness mechanism. Our work indicates that direct maximum likelihood inference is straightforward for this problem. Extensive simulation studies are provided to examine the finite sample performance of the proposed methods.

20.
Stochastic Models, 2013, 29(4): 529-552
We consider a preemptive priority fluid queue with two buffers for continuous fluid and batch fluid inputs. Those two types of fluids are governed by a Markov chain with a finite state space, called a background process, and the continuous fluid is preemptively processed over the batch fluid. The stationary joint distribution of the two buffer contents and the background state is obtained in terms of matrix transforms. Numerical computation algorithms are presented for various moments of the two buffer contents.
