Similar Articles
 20 similar articles found (search time: 984 ms)
1.
This paper considers the computation of the conditional stationary distribution in Markov chains of level-dependent M/G/1-type, given that the level is not greater than a predefined threshold. This problem has been studied recently, and a computational algorithm has been proposed under the assumption that the matrices representing downward jumps are nonsingular. We first show that this assumption can be eliminated in the general setting of Markov chains of level-dependent G/G/1-type. Next, we develop a computational algorithm for the conditional stationary distribution in Markov chains of level-dependent M/G/1-type by slightly modifying the above-mentioned algorithm. In principle, our algorithm is applicable to any Markov chain of level-dependent M/G/1-type, provided the Markov chain is irreducible and positive recurrent. Furthermore, as an input to the algorithm, we can set an error bound for the computed conditional distribution, which is a notable feature of our algorithm. Some numerical examples are also provided.
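To make the notion of a conditional stationary distribution concrete, here is a minimal sketch (not the paper's algorithm, which handles infinite level-dependent chains with error bounds): for a small finite chain whose states are levels 0..4, compute the stationary vector and renormalize it over the levels at or below a threshold. The matrix `P` and threshold `K` are made-up illustrations.

```python
import numpy as np

# Hypothetical small chain: one state per level, levels 0..4.
P = np.array([
    [0.5, 0.3, 0.2, 0.0, 0.0],
    [0.4, 0.3, 0.2, 0.1, 0.0],
    [0.0, 0.4, 0.3, 0.2, 0.1],
    [0.0, 0.0, 0.5, 0.3, 0.2],
    [0.0, 0.0, 0.0, 0.6, 0.4],
])

# Solve pi = pi P with sum(pi) = 1 via an augmented linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Conditional stationary distribution given level <= K.
K = 2
pi_cond = pi[:K + 1] / pi[:K + 1].sum()
```

In the level-dependent M/G/1 setting of the paper the chain is infinite, so the whole difficulty lies in computing the restricted probabilities without ever forming the full stationary vector.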

2.
Stochastic Models, 2013, 29(2):269-285
A Markov process in discrete time is perturbed by a small parameter. A perturbation theory is constructed, both for a time-dependent process, and for a stationary state. Some queueing applications are discussed.

3.
Stochastic Models, 2013, 29(2-3):725-744
Abstract

We propose a method to approximate the transient performance measures of a discrete-time queueing system via a steady-state analysis. The main idea is to approximate the system state at time slot t, or on the n-th arrival (depending on whether we are studying the transient queue length or the transient waiting time distribution), by the system state after a negative binomially distributed number of slots or arrivals. By increasing the number of phases k of the negative binomial distribution, an accurate approximation of the transient distribution of interest can be obtained.

In order to efficiently obtain the system state after a negative binomially distributed number of slots or arrivals, we introduce so-called reset Markov chains, by inserting reset events into the evolution of the queueing system under consideration. When computing the steady state vector of such a reset Markov chain, we exploit the block triangular block Toeplitz structure of the transition matrices involved and we directly obtain the approximation from its steady state vector. The concept of the reset Markov chains can be applied to a broad class of queueing systems and is demonstrated in full detail on a discrete-time queue with Markovian arrivals and phase-type services (i.e., the D-MAP/PH/1 queue). We focus on the queue length distribution at time t and the waiting time distribution of the n-th customer. Other distributions, e.g., the amount of work left behind by the n-th customer, that can be acquired in a similar way, are briefly touched upon.

Using various numerical examples, it is shown that the method provides good to excellent approximations at low computational cost (as opposed to a recursive algorithm or a numerical inversion of the Laplace transform or generating function involved), offering new perspectives on the transient analysis of practical queueing systems.
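The randomized-horizon idea can be sketched in a toy setting (this is only the averaging step, not the paper's reset-chain construction): replace the deterministic horizon t by a negative binomial number of slots with k phases and the same mean, and average the transient distributions with the corresponding weights. The two-state chain below is a made-up example.

```python
import numpy as np
from math import comb

# Toy two-state chain and initial distribution (illustrative values).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
x0 = np.array([1.0, 0.0])

t, k = 20, 50
p = k / (k + t)          # NegBin(k, p) counting failures has mean k(1-p)/p = t

N = 400                  # truncation of the negative binomial support
approx = np.zeros(2)
xn = x0.copy()           # distribution after n slots, starting at n = 0
for n in range(N):
    w = comb(n + k - 1, n) * p**k * (1 - p)**n   # NegBin pmf at n
    approx += w * xn
    xn = xn @ P

exact = x0 @ np.linalg.matrix_power(P, t)        # true transient distribution
```

The point of the reset-chain machinery in the paper is that this average can be obtained from a single steady-state computation instead of the explicit sum over n used here.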

4.
Hai-Bo Yu, Stochastic Models, 2017, 33(4):551-571
ABSTRACT

Motivated by various applications in queueing theory, this article is devoted to the stochastic monotonicity and comparability of Markov chains with block-monotone transition matrices. First, we introduce the notion of block-increasing convex order for probability vectors, and characterize the block-monotone matrices in the sense of the block-increasing order and block-increasing convex order. Second, we characterize the Markov chain with general transition matrix by martingale and provide a stochastic comparison of two block-monotone Markov chains under the two block-monotone orders. Third, the stochastic comparison results for the Markov chains corresponding to the discrete-time GI/G/1 queue with different service distributions under the two block-monotone orders are given, and the lower bound and upper bound of the Markov chain corresponding to the discrete-time GI/G/1 queue in the sense of the block-increasing convex order are found.

5.
Stochastic Models, 2013, 29(4):415-437
Abstract

In this paper, we study the total workload process and waiting times in a queueing system with multiple types of customers and a first-come-first-served service discipline. An M/G/1 type Markov chain, which is closely related to the total workload in the queueing system, is constructed. A method is developed for computing the steady state distribution of that Markov chain. Using that steady state distribution, the distributions of total workload, batch waiting times, and waiting times of individual types of customers are obtained. Compared to the GI/M/1 and QBD approaches for waiting times and sojourn times in discrete time queues, the dimension of the matrix blocks involved in the M/G/1 approach can be significantly smaller.

6.
It is well known that, in the continuous case, the probability that two consecutive order statistics are equal is zero, whereas this is not true when the distribution is discrete. It is, perhaps, for this reason that order statistics from discrete distributions have not been investigated in the literature as much as those from continuous distributions. The main purpose of this paper, therefore, is to obtain the probability of ties when the distribution is discrete. It is also shown that, in the discrete case, the Markov property does not hold in general. However, the order statistics from a geometric distribution do form a Markov chain.
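The tie phenomenon is easy to verify numerically (an illustration, not the paper's derivation): for two i.i.d. draws from a discrete distribution, the probability that the two order statistics coincide is the sum of the squared point masses, which is strictly positive, whereas it is 0 for any continuous distribution. For a geometric(p) parent this sum has the closed form p/(2-p).

```python
# Probability of a tie between two i.i.d. discrete draws:
# P(X = Y) = sum_x P(X = x)^2.
def tie_probability(pmf_values):
    return sum(q * q for q in pmf_values)

# Geometric(p) pmf on {0, 1, 2, ...}: P(X = x) = p (1 - p)^x,
# truncated far out in the tail for numerical purposes.
p = 0.5
pmf = [p * (1 - p)**x for x in range(200)]
p_tie = tie_probability(pmf)   # closed form: p / (2 - p)
```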

7.
Internet traffic data is characterized by some unusual statistical properties, in particular, the presence of heavy-tailed variables. A typical model for heavy-tailed distributions is the Pareto distribution although this is not adequate in many cases. In this article, we consider a mixture of two-parameter Pareto distributions as a model for heavy-tailed data and use a Bayesian approach based on the birth-death Markov chain Monte Carlo algorithm to fit this model. We estimate some measures of interest related to the queueing system k-Par/M/1 where k-Par denotes a mixture of k Pareto distributions. Heavy-tailed variables are difficult to model in such queueing systems because of the lack of a simple expression for the Laplace Transform (LT). We use a procedure based on recent LT approximating results for the Pareto/M/1 system. We illustrate our approach with both simulated and real data.

8.
The length of the longest common subsequence (LCS) of two biological sequences has been used as a measure of similarity, and the application of this statistic is of importance in genomic studies. Even for the simple case of two sequences of equal length composed of binary elements with equal state probabilities, the exact distribution of the length of the LCS remains an open question; it is also known as an NP-hard problem in computer science. Going beyond combinatorial analysis, we use the finite Markov chain imbedding technique to derive the exact distribution of the length of the LCS between two multi-state sequences of different lengths. Numerical results are provided to illustrate the theoretical results.
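For very small cases the exact distribution can be obtained by brute-force enumeration (feasible only for tiny lengths, unlike the paper's Markov chain imbedding approach): enumerate all pairs of uniform random binary sequences and tabulate the LCS length computed by the standard dynamic program.

```python
from itertools import product

def lcs_length(a, b):
    # Standard O(mn) dynamic program for the LCS length.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            d[i + 1][j + 1] = (d[i][j] + 1 if a[i] == b[j]
                               else max(d[i][j + 1], d[i + 1][j]))
    return d[m][n]

# Exact distribution of the LCS length for two independent uniform
# binary sequences of lengths m = 5 and n = 4.
m, n = 5, 4
counts = {}
for a in product((0, 1), repeat=m):
    for b in product((0, 1), repeat=n):
        ell = lcs_length(a, b)
        counts[ell] = counts.get(ell, 0) + 1
total = 2 ** (m + n)
dist = {ell: c / total for ell, c in sorted(counts.items())}
```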

9.
This paper studies an M/G/1 clearing queueing system with setup time and multiple vacations, in which all customers present in the system are served simultaneously and breakdowns may occur during the busy or setup periods. We investigate the stationary distribution of the system size and the Laplace-Stieltjes transform of the sojourn time. In addition, various performance measures are discussed, such as the mean system size at an arbitrary time and the mean length of a vacation cycle. Moreover, a cost analysis is carried out for this queueing system. Numerical results are presented to study the sensitivity of the expected cost function and the system performance measures to the system parameters.

10.
We propose a new model for multivariate Markov chains of order one or higher on the basis of the mixture transition distribution (MTD) model. We call it the MTD‐Probit. The proposed model presents two attractive features: it is completely free of constraints, thereby facilitating the estimation procedure, and it is more precise at estimating the transition probabilities of a multivariate or higher‐order Markov chain than the standard MTD model.
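For context, here is a minimal sketch of the standard MTD model of order 2 that the MTD-Probit extends (the matrix `Q` and lag weights `lam` are made-up values): the high-order transition probability is a convex combination of one-step probabilities from each lag.

```python
import numpy as np

# Standard order-2 MTD model:
#   P(X_t = j | X_{t-1} = i1, X_{t-2} = i2) = lam[0] Q[i1, j] + lam[1] Q[i2, j]
Q = np.array([[0.7, 0.3],
              [0.4, 0.6]])
lam = np.array([0.6, 0.4])   # lag weights: nonnegative, summing to 1

def mtd_transition(i1, i2):
    return lam[0] * Q[i1] + lam[1] * Q[i2]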

11.
In this note we show that the Markov property holds for order statistics while sampling from a discrete parent population if and only if the population has at most two distinct units. This disproves the claim of Gupta and Gupta (1981) that, for a geometric parent, the order statistics form a Markov chain.

12.
《随机性模型》2013,29(1):75-111
In this paper, we study the classification problem of discrete-time and continuous-time Markov processes with a tree structure. We first show some useful properties associated with the fixed points of a nondecreasing mapping. In particular, we find conditions for a fixed point to be the minimal fixed point by using fixed point theory and degree theory. We then use these results to identify conditions for Markov chains of M/G/1 type or GI/M/1 type with a tree structure to be positive recurrent, null recurrent, or transient. The results are generalized to Markov chains of matrix M/G/1 type with a tree structure. For all these cases, a relationship is established between a certain fixed point, the matrix of partial derivatives (Jacobian) associated with that fixed point, and the classification of the Markov chain with a tree structure. More specifically, we show that the Perron-Frobenius eigenvalue of the Jacobian associated with a certain fixed point provides the information needed for a complete classification of the Markov chains of interest.

13.
《随机性模型》2013,29(2-3):279-302
ABSTRACT

By using properties of canonical factorizations, we prove that under very mild assumptions, the shifted cyclic reduction method (SCR) can be applied for solving QBD problems with no breakdown and that it always converges. For general M/G/1 type Markov chains we prove that SCR always converges if no breakdown is encountered. Numerical experiments showing the acceleration provided by SCR versus cyclic reduction are presented.
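The abstract concerns shifted cyclic reduction; as a runnable sketch of the same family of quadratically convergent QBD solvers, here is the closely related logarithmic reduction iteration of Latouche and Ramaswami for the matrix G of a discrete-time QBD, which solves G = A2 + A1 G + A0 G^2 (A0 = up, A1 = local, A2 = down; the example blocks are made up, with downward drift so that G is stochastic).

```python
import numpy as np

# Made-up QBD blocks: A0 + A1 + A2 is stochastic, and the heavier weight
# on A2 (downward jumps) makes the chain positive recurrent.
S = np.array([[0.5, 0.5],
              [0.3, 0.7]])
A0, A1, A2 = 0.1 * S, 0.3 * S, 0.6 * S

# Logarithmic reduction for G = A2 + A1 G + A0 G^2.
I = np.eye(2)
B = np.linalg.inv(I - A1)
H, L = B @ A0, B @ A2
G, T = L.copy(), H.copy()
for _ in range(30):
    U = H @ L + L @ H
    H, L = np.linalg.solve(I - U, H @ H), np.linalg.solve(I - U, L @ L)
    G = G + T @ L
    T = T @ H

residual = np.abs(G - (A2 + A1 @ G + A0 @ G @ G)).max()
```

Cyclic reduction and its shifted variant operate on the same block recurrences; the shift removes the unit eigenvalue that slows plain cyclic reduction down, which is the acceleration measured in the paper's experiments.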

14.
We consider a bootstrap method for Markov chains where the original chain is broken into a (random) number of cycles based on an atom (regeneration point) and the bootstrap scheme resamples from these cycles. We investigate the asymptotic accuracy of this method for the case of a sum (or a sample mean) related to the Markov chain. Under some standard moment conditions, the method is shown to be at least as good as the normal approximation, and better (second-order accurate) in the case of nonlattice summands. We give three examples to illustrate the applicability of our results.
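The cycle-resampling scheme can be sketched on a toy two-state chain with atom 0 (conventions for delimiting cycles vary slightly; this is one common choice): split the simulated path at visits to the atom, then form each bootstrap replicate by resampling whole cycles with replacement.

```python
import random

random.seed(0)

# Toy two-state chain; P[s] gives the transition probabilities out of s.
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}

def simulate(n, start=0):
    path, s = [start], start
    for _ in range(n - 1):
        s = 0 if random.random() < P[s][0] else 1
        path.append(s)
    return path

path = simulate(5000)

# Cycles: segments beginning at each visit to the atom (state 0).
starts = [i for i, s in enumerate(path) if s == 0]
cycles = [path[starts[i]:starts[i + 1]] for i in range(len(starts) - 1)]

# One bootstrap replicate of the sample mean of f(X) = X: resample whole
# cycles until the resampled length reaches that of the original path.
def bootstrap_mean():
    total, count = 0, 0
    while count < len(path):
        c = random.choice(cycles)
        total += sum(c)
        count += len(c)
    return total / count

boot = [bootstrap_mean() for _ in range(200)]
```

Because the cycles are i.i.d. by the regeneration property, resampling them preserves the dependence structure within each cycle, which is what gives the method its second-order accuracy for nonlattice summands.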

15.
We first introduce fuzzy finite Markov chains and present some of their fundamental properties based on possibility theory. We also present a way to convert fuzzy Markov chains into classical Markov chains. In addition, we simulate fuzzy Markov chains of different sizes. It is observed that most fuzzy Markov chains not only fail to exhibit ergodic behavior, but are also periodic. Finally, using the Halton quasi-random sequence, we generate fuzzy Markov chains and compare them with those generated by the RAND function of MATLAB, thereby improving the periodicity behavior of fuzzy Markov chains.

16.
The order statistics from a sample of size n≥3 from a discrete distribution form a Markov chain if and only if the parent distribution is supported by one or two points. More generally, a necessary and sufficient condition for the order statistics to form a Markov chain for n≥3 is that there does not exist any atom x0 of the parent distribution F satisfying F(x0-)>0 and F(x0)<1. To derive this result, a formula for the joint distribution of order statistics is proved, which is of interest in its own right. Many exponential characterizations implicitly assume the Markov property. The corresponding putative geometric characterizations cannot then be reasonably expected to hold. Some illustrative geometric characterizations are discussed.
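The two-point criterion can be checked numerically for n = 3 by exact enumeration (an illustration, not the paper's proof): compute the joint law of the ordered triple, then compare P(X(3) = c | X(2) = b, X(1) = a) against P(X(3) = c | X(2) = b). The two pmfs below are made-up examples of the two regimes.

```python
from itertools import product

def joint_order_stats(pmf, n=3):
    # pmf: dict value -> probability.  Returns the exact joint pmf of the
    # sorted sample (x1 <= x2 <= x3) by enumerating all n-tuples.
    joint = {}
    for outcome in product(pmf, repeat=n):
        key = tuple(sorted(outcome))
        prob = 1.0
        for v in outcome:
            prob *= pmf[v]
        joint[key] = joint.get(key, 0.0) + prob
    return joint

def max_markov_gap(pmf):
    # Largest |P(c | b, a) - P(c | b)| over the support of the joint law.
    joint = joint_order_stats(pmf)
    p12, p23, p2 = {}, {}, {}
    for (a, b, c), pr in joint.items():
        p12[(a, b)] = p12.get((a, b), 0.0) + pr
        p23[(b, c)] = p23.get((b, c), 0.0) + pr
        p2[b] = p2.get(b, 0.0) + pr
    gap = 0.0
    for (a, b, c), pr in joint.items():
        lhs = pr / p12[(a, b)]        # P(X(3)=c | X(2)=b, X(1)=a)
        rhs = p23[(b, c)] / p2[b]     # P(X(3)=c | X(2)=b)
        gap = max(gap, abs(lhs - rhs))
    return gap

gap2 = max_markov_gap({0: 0.4, 1: 0.6})           # two-point support
gap3 = max_markov_gap({0: 0.3, 1: 0.3, 2: 0.4})   # interior atom at 1
```

The three-point pmf has an atom x0 = 1 with F(x0-) > 0 and F(x0) < 1, so by the theorem the Markov property must fail there, while the two-point case satisfies it exactly.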

17.
Let Nn be the number of occurrences in n trials of an event governed by a two-state Markov chain (of first order or second order). We obtain the distribution of Nn and apply it to a problem involving literary text.
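For the first-order case, the distribution of Nn can be computed by a direct dynamic program over (current state, count so far) (a computational sketch; the paper derives the distribution analytically). The transition matrix and initial distribution below are made-up values, with the event identified with visits to state 1 and the initial trial counted.

```python
def occupancy_distribution(P, init, n):
    # f[(state, count)] = probability after the trials seen so far;
    # the initial state counts as the first trial.
    f = {(s, 1 if s == 1 else 0): p for s, p in enumerate(init)}
    for _ in range(n - 1):
        g = {}
        for (s, c), pr in f.items():
            for t in (0, 1):
                key = (t, c + (1 if t == 1 else 0))
                g[key] = g.get(key, 0.0) + pr * P[s][t]
        f = g
    dist = {}
    for (s, c), pr in f.items():
        dist[c] = dist.get(c, 0.0) + pr
    return dist

P = [[0.7, 0.3], [0.4, 0.6]]
init = [0.5, 0.5]
dist = occupancy_distribution(P, init, 10)   # law of N_10
```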

18.
We consider Markov-dependent binary sequences and study various types of success runs (overlapping, non-overlapping, exact, etc.) by examining additive functionals based on state visits and transitions in an appropriate Markov chain. We establish a multivariate Central Limit Theorem for the number of these types of runs and obtain its covariance matrix by means of the recurrent potential matrix of the Markov chain. Explicit expressions for the covariance matrix are given in the Bernoulli and a simple Markov-dependent case by expressing the recurrent potential matrix in terms of the stationary distribution and the mean transition times in the chain. We also obtain a multivariate Central Limit Theorem for the joint number of non-overlapping runs of various sizes and give its covariance matrix in explicit form for Markov dependent trials.
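Two of the run statistics studied can be made concrete with simple counters (definitions as commonly used in the runs literature; conventions vary slightly between authors):

```python
def nonoverlapping_runs(seq, k):
    # Count success runs of length k, restarting the streak after each
    # completed run (non-overlapping counting).
    count = streak = 0
    for s in seq:
        streak = streak + 1 if s == 1 else 0
        if streak == k:
            count += 1
            streak = 0
    return count

def overlapping_runs(seq, k):
    # Count every window of k consecutive successes (overlapping counting).
    return sum(1 for i in range(len(seq) - k + 1)
               if all(seq[i + j] == 1 for j in range(k)))

seq = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
non = nonoverlapping_runs(seq, 2)
over = overlapping_runs(seq, 2)
```

The paper's contribution is the joint limiting law of such counters over Markov-dependent trials, obtained through the recurrent potential matrix rather than direct counting.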

19.
This paper gives the definition of tree-indexed Markov chains in a random environment with a discrete state space, and then studies some equivalence theorems for tree-indexed Markov chains in a random environment. Finally, we establish the equivalence between tree-indexed Markov chains in a Markovian environment and double Markov chains indexed by a tree.

20.
ABSTRACT

The aim of this note is to investigate the concentration properties of unbounded functions of geometrically ergodic Markov chains. We derive concentration properties of centred functions with respect to the square of the Lyapunov function appearing in the drift condition satisfied by the Markov chain. We apply the new exponential inequalities to derive confidence intervals for Markov chain Monte Carlo algorithms. Quantitative error bounds are provided for the regenerative Metropolis algorithm of Brockwell and Kadane [Identification of regeneration times in MCMC simulation, with application to adaptive schemes. J Comput Graphical Stat. 2005;14(2)].
