Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this article, the M/M/k/N/N queue is modeled as a continuous-time homogeneous Markov system with finite state size capacity (HMS/cs). In order to examine the behavior of the queue, a continuous-time homogeneous Markov system (HMS) consisting of two states is used. The first state of this HMS corresponds to the source and the second to the state containing the servers. The second state has a finite capacity equal to the number of servers. Members of the system that cannot enter the second state, due to its finite capacity, enter the buffer state, which represents the system's queue. In order to examine the variability of the state sizes, formulae for their factorial and mixed factorial moments are derived in matrix form. As a consequence, the pmf of each state size can be evaluated for any t ∈ ℝ+. The theoretical results are illustrated by a numerical example.
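As a minimal, self-contained illustration of the model above (not the paper's matrix-moment method), the stationary distribution of the number of members outside the source in an M/M/k/N/N queue can be computed from standard birth-death balance equations; the rates `lam`, `mu` and the function name below are illustrative assumptions.

```python
import numpy as np

def mm_k_N_N_stationary(lam, mu, k, N):
    """Stationary distribution of the number of members outside the
    source (in service or in the buffer state) for the M/M/k/N/N queue,
    viewed as a birth-death chain: birth rate (N - n) * lam from state
    n, death rate min(n, k) * mu."""
    w = np.zeros(N + 1)
    w[0] = 1.0
    for n in range(1, N + 1):
        w[n] = w[n - 1] * (N - n + 1) * lam / (min(n, k) * mu)
    return w / w.sum()

pi = mm_k_N_N_stationary(lam=0.5, mu=1.0, k=3, N=10)
```

The factorial moments studied in the article can then be read off this pmf directly for any fixed rates.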

2.
Hai-Bo Yu, Stochastic Models, 2017, 33(4): 551–571
ABSTRACT

Motivated by various applications in queueing theory, this article is devoted to the stochastic monotonicity and comparability of Markov chains with block-monotone transition matrices. First, we introduce the notion of block-increasing convex order for probability vectors, and characterize block-monotone matrices in the sense of the block-increasing order and the block-increasing convex order. Second, we characterize the Markov chain with a general transition matrix by a martingale and provide a stochastic comparison of two block-monotone Markov chains under the two block-monotone orders. Third, we give stochastic comparison results for the Markov chains corresponding to the discrete-time GI/G/1 queue with different service distributions under the two block-monotone orders, and we find lower and upper bounds, in the sense of the block-increasing convex order, for the Markov chain corresponding to the discrete-time GI/G/1 queue.
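For block size 1, the block-monotone order reduces to ordinary stochastic monotonicity: each row of the transition matrix must dominate the previous row in the usual stochastic order. A small sketch of that special case (the matrices below are made-up examples, not from the paper):

```python
import numpy as np

def is_stochastically_monotone(P, tol=1e-12):
    """Check that row i+1 of P dominates row i in the usual stochastic
    order, i.e., all tail sums sum_{l >= j} P[i, l] are nondecreasing
    in i. This is block monotonicity with block size 1."""
    tails = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]  # tail sums per row
    return bool(np.all(np.diff(tails, axis=0) >= -tol))

P_mono = np.array([[0.7, 0.3, 0.0],
                   [0.3, 0.4, 0.3],
                   [0.0, 0.3, 0.7]])
```

The general block-monotone case applies the same tail-sum test at the level of blocks of rows and columns.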

3.
Stochastic Models, 2013, 29(1): 55–69
Abstract

This paper presents an improved method to calculate the delay distribution of a type k customer in a first-come-first-served (FCFS) discrete-time queueing system with multiple types of customers, where each type has different service requirements, and c servers, with c = 1, 2 (the MMAP[K]/PH[K]/c queue). The first algorithms to compute this delay distribution, using the GI/M/1 paradigm, were presented by Van Houdt and Blondia [Van Houdt, B.; Blondia, C. The delay distribution of a type k customer in a first come first served MMAP[K]/PH[K]/1 queue. J. Appl. Probab. 2002, 39(1), 213–222; The waiting time distribution of a type k customer in a FCFS MMAP[K]/PH[K]/2 queue. Technical Report, 2002]. The two most limiting properties of these algorithms are: (i) the computation of the rate matrix R related to the GI/M/1 type Markov chain, and (ii) the amount of memory needed to store the transition matrices A_l and B_l. In this paper we demonstrate that each of the three GI/M/1 type Markov chains used to develop the algorithms in the above articles can be reduced to a QBD with a block size only marginally larger than that of its corresponding GI/M/1 type Markov chain. As a result, the two major limiting factors of each of these algorithms are drastically reduced to computing the G matrix of the QBD and storing the six matrices that characterize the QBD. Moreover, these algorithms are easier to implement, especially for the system with c = 2 servers. We also include some numerical examples that further demonstrate the reduction in computational resources.
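The G matrix of a discrete-time QBD, which the reduction above makes the central computational object, is the minimal nonnegative solution of a matrix-quadratic equation and can be found by plain functional iteration. The sketch below assumes the common convention of up-blocks A0, local blocks A1 and down-blocks A2 (the numerical blocks are invented for illustration; faster algorithms such as logarithmic reduction exist):

```python
import numpy as np

def qbd_G(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution G of G = A2 + A1 G + A0 G^2 for a
    discrete-time QBD, by functional iteration from G = 0."""
    G = np.zeros_like(np.asarray(A2, dtype=float))
    for _ in range(max_iter):
        G_new = A2 + A1 @ G + A0 @ G @ G
        if np.max(np.abs(G_new - G)) < tol:
            break
        G = G_new
    return G_new

A0 = np.array([[0.10, 0.10], [0.05, 0.15]])  # one level up
A1 = np.array([[0.30, 0.20], [0.25, 0.25]])  # same level
A2 = np.array([[0.20, 0.10], [0.15, 0.15]])  # one level down
G = qbd_G(A0, A1, A2)
```

For a positive recurrent QBD (as here, where the mean downward drift exceeds the upward drift), G is stochastic.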

4.
Stochastic Models, 2013, 29(2–3): 725–744
Abstract

We propose a method to approximate the transient performance measures of a discrete-time queueing system via a steady-state analysis. The main idea is to approximate the system state at time slot t or at the n-th arrival (depending on whether we are studying the transient queue length or the waiting time distribution) by the system state after a negative binomially distributed number of slots or arrivals. By increasing the number of phases k of the negative binomial distribution, an accurate approximation of the transient distribution of interest can be obtained.

In order to efficiently obtain the system state after a negative binomially distributed number of slots or arrivals, we introduce so-called reset Markov chains, by inserting reset events into the evolution of the queueing system under consideration. When computing the steady state vector of such a reset Markov chain, we exploit the block triangular block Toeplitz structure of the transition matrices involved, and we obtain the approximation directly from its steady state vector. The concept of the reset Markov chain can be applied to a broad class of queueing systems and is demonstrated in full detail on a discrete-time queue with Markovian arrivals and phase-type services (i.e., the D-MAP/PH/1 queue). We focus on the queue length distribution at time t and the waiting time distribution of the n-th customer. Other distributions that can be obtained in a similar way, e.g., the amount of work left behind by the n-th customer, are briefly touched upon.

Using various numerical examples, it is shown that the method provides good to excellent approximations at low computational cost (as opposed to a recursive algorithm or a numerical inversion of the Laplace transform or generating function involved), offering new perspectives on the transient analysis of practical queueing systems.
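The randomization idea itself is easy to demonstrate without the reset-chain machinery: replace the deterministic horizon t by a negative binomial number of slots with k phases and mean t, and average the transient vectors. A small sketch under that assumption (the chain P below is invented; the paper's efficient computation via reset Markov chains is not reproduced here):

```python
import numpy as np

def nbinom_pmf(k, p, n_max):
    """pmf of the number of failures before the k-th success."""
    f = np.zeros(n_max + 1)
    f[0] = p ** k
    for n in range(1, n_max + 1):
        f[n] = f[n - 1] * (n + k - 1) / n * (1 - p)
    return f

def transient_nb_approx(P, x0, t, k, n_max=2000):
    """Approximate x0 @ P^t by averaging x0 @ P^n over a negative
    binomial horizon N with k phases and mean t (p = k / (k + t))."""
    p = k / (k + t)
    w = nbinom_pmf(k, p, n_max)
    out = np.zeros(len(x0))
    xn = np.asarray(x0, dtype=float)
    for wn in w:
        out += wn * xn
        xn = xn @ P
    return out
```

As k grows, the negative binomial horizon concentrates and the approximation of the transient distribution improves.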

5.
Stochastic Models, 2013, 29(2–3): 327–341
ABSTRACT

A Markov-modulated fluid queue is a two-dimensional Markov process: the first dimension is continuous and is usually called the level, while the second is the state of a Markov process that determines the evolution of the level and is usually called the phase. We show that it is always possible to modify the transition rules at the boundary level of the fluid queue so as to obtain independence between the level and the phase under the stationary distribution. We obtain this result by exploiting the similarity between fluid queues and Quasi-Birth-and-Death (QBD) processes.

6.
ABSTRACT

Phased-mission systems (PMS) are widely found in many practical application areas, making their reliability evaluation and analysis an important issue. The reliability of a PMS is typically defined as the probability that the system successfully accomplishes the missions of all phases. However, the k-out-of-n system success criterion for PMS has not been investigated. In this paper, according to this criterion, we develop two new models, one static and one dynamic, and describe the assumptions for both in detail. The system reliabilities for both models are presented for the first time by employing the finite Markov chain imbedding approach (FMCIA): we define different state spaces for the two models and obtain the corresponding transition probability matrices. Some numerical examples are then given to illustrate the application of FMCIA. Finally, discussions are made and conclusions are summarized.
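The core of FMCIA in the single-phase, independent-component case can be sketched in a few lines: scan the components one at a time and let the imbedded chain state be the number of working components seen so far. This is only an illustration of the imbedding idea, not the paper's static or dynamic PMS models (the component reliabilities `p` are invented):

```python
import numpy as np

def k_out_of_n_reliability(p, k):
    """P(at least k of n independent components work), by finite
    Markov chain imbedding: the chain state is the count of working
    components among those scanned so far."""
    n = len(p)
    dist = np.zeros(n + 1)
    dist[0] = 1.0
    for pi in p:
        nxt = dist * (1.0 - pi)      # component fails: state unchanged
        nxt[1:] += dist[:-1] * pi    # component works: state + 1
        dist = nxt
    return dist[k:].sum()

r = k_out_of_n_reliability([0.9] * 5, 3)
```

With non-identical components the same recursion applies unchanged, which is the usual advantage of the imbedding approach over closed-form binomial formulas.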

7.
Using the supplementary variable and the embedded Markov chain method, we consider a discrete-time batch arrival finite capacity queue with negative customers and working vacations, where the RCH killing policy and a partial batch rejection policy are adopted. We obtain steady-state system length distributions at pre-arrival, arbitrary, and outside observer's observation epochs. Furthermore, we consider the influence of system parameters on several performance measures to demonstrate the correctness of the theoretical analysis.

8.
Every attainable structure of the so-called continuous-time Homogeneous Markov System (HMS) with fixed size and state space S = {1, 2,…, n} is considered as a particle of ℝ^n and, consequently, the motion of the structure corresponds to the motion of the particle. Under the assumption that "the motion of every particle-structure at every time point is due to its interaction with its surroundings," ℝ^n becomes a continuum (Tsaklidis, G. (1998). The continuous time homogeneous Markov system with fixed size as a Newtonian fluid? Appl. Stoch. Mod. Data Anal. 13: 177–182). Then the evolution of the set of attainable structures corresponds to the motion of the continuum. For the case of a three-state HMS it is shown that the concept of two-dimensional isotropic elasticity can further be used to interpret the three-state HMS's evolution.

9.
ABSTRACT

We consider a model consisting of two fluid queues driven by the same background continuous-time Markov chain, such that the rates of change of the fluid in the second queue depend on whether the first queue is empty or not: when the first queue is nonempty, the content of the second queue increases, and when the first queue is empty, the content of the second queue decreases.

We analyze the stationary distribution of this tandem model using operator-analytic methods. The various densities (or Laplace–Stieltjes transforms thereof) and probability masses involved in this stationary distribution are expressed in terms of the stationary distribution of some embedded process. To find the latter from the (known) transition kernel, we propose a numerical procedure based on discretization and truncation. For several examples we show that the method works well, although its performance, in terms of both accuracy and run time, is clearly affected by the quality of these approximations.

10.
Stochastic Models, 2013, 29(2–3): 745–765
ABSTRACT

This paper presents two methods to calculate the response time distribution of impatient customers in a discrete-time queue with Markovian arrivals and phase-type services, in which the customers' patience is generally distributed (i.e., the D-MAP/PH/1 queue). The first approach uses a GI/M/1 type Markov chain and may be regarded as a generalization of the procedure presented in Van Houdt, Lenin, and Blondia [Delay distribution of (im)patient customers in a discrete time D-MAP/PH/1 queue with age dependent service times. Queueing Systems and Applications 2003, 45(1), 59–73] for the D-MAP/PH/1 queue, where every customer has the same amount of patience. The key construction used to obtain the response time distribution is to set up a Markov chain based on the age of the customer being served, together with the state of the D-MAP process immediately after the arrival of this customer. As a by-product, we can also easily obtain the queue length distribution from the steady state of this Markov chain.

We consider three different situations: (i) customers leave the system due to impatience regardless of whether they are being served or not, possibly wasting some service capacity, (ii) a customer is only allowed to enter the server if he is able to complete his service before reaching his critical age and (iii) customers become patient as soon as they are allowed to enter the server. In the second part of the paper, we reduce the GI/M/1 type Markov chain to a Quasi-Birth-Death (QBD) process. As a result, the time needed, in general, to calculate the response time distribution is reduced significantly, while only a relatively small amount of additional memory is needed in comparison with the GI/M/1 approach. We also include some numerical examples in which we apply the procedures being discussed.

11.
12.
In this article, we introduce and study Markov systems on general spaces (MSGS) as a first step toward an entire theory on the subject. All the concepts and basic results needed for this scope are given and analyzed. This can be thought of as an extension of the theory of non-homogeneous Markov systems (NHMS) and that of non-homogeneous semi-Markov systems on countable spaces, which has seen considerable growth over the last thirty years. In addition, we study the asymptotic behaviour, or ergodicity, of Markov systems on general state spaces. The problem of the asymptotic behaviour of Markov chains has been central for finite or countable spaces since the foundation of the subject; it has also been basic in the theory of NHMS and NHSMS. Two basic theorems are provided in answer to the important problem of the asymptotic distribution of the population of the memberships of a Markov system living in the general space (X, ℬ(X)). Finally, given that an asymptotic behaviour exists, we study the total variation distance from the invariant measure of the Markov system and prove a theorem stating that the total variation is finite. This problem is also known as the coupling problem.

13.
ABSTRACT

In this paper, we study a homogeneous, ergodic, finite-state Markov chain with unknown transition probability matrix. Starting from the well-known maximum likelihood estimator of the transition probability matrix, we define estimators of reliability and its measurements. Our aim is to show that these estimators are uniformly strongly consistent and converge in distribution to normal random variables. The construction of confidence intervals for availability, reliability, and failure rates is also given. Finally, we give a numerical example to illustrate the results and compare them with those of the usual empirical estimator.
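The maximum likelihood estimator that this analysis starts from is simply transition counts normalized by row totals. A minimal sketch of that estimator (the example trajectory is invented; the paper's reliability functionals built on top of it are not shown):

```python
import numpy as np

def transition_mle(path, n_states):
    """MLE of the transition matrix from one observed trajectory:
    P_hat[i, j] = N_ij / N_i, where N_ij counts observed i -> j
    transitions and N_i = sum_j N_ij. Rows never left in the sample
    are returned as all zeros."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1.0
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals,
                     out=np.zeros_like(counts), where=totals > 0)

P_hat = transition_mle([0, 1, 0, 1, 1, 0], 2)
```

Plug-in estimators of availability or failure rates are then continuous functions of P_hat, which is what drives the consistency and asymptotic normality results.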

14.
Heterogeneous servers, in manufacturing and service systems, may have different speeds and different quality levels for the provided service or good. For a two-server queueing model, we formulate the job routing problem for minimizing the stationary weighted sum of the expected time spent in the system and the number of unsatisfied customers per time unit. Using a Markov decision process approach, we prove that the optimal routing policy of jobs to service is a threshold policy that depends on the queue length. When the number of waiting jobs in the queue is below a certain threshold, only one server should work while the other remains idle; at or above this threshold, both servers should serve jobs. This extends the known result in which only heterogeneity in speed is considered.

15.
16.
In this paper, the maximum likelihood estimates of the parameters of the M/E_r/1 queueing model are derived when the queue size at each departure point is observed. A numerical example is generated by simulating a finite Markov chain to illustrate the methodology for estimating the parameters with a variable Erlang service time distribution. The problems of hypothesis testing and simultaneous confidence regions for the parameters are also investigated.

17.
Stochastic Models, 2013, 29(4): 415–437
Abstract

In this paper, we study the total workload process and waiting times in a queueing system with multiple types of customers and a first-come-first-served service discipline. An M/G/1 type Markov chain, which is closely related to the total workload in the queueing system, is constructed. A method is developed for computing the steady state distribution of that Markov chain. Using that steady state distribution, the distributions of total workload, batch waiting times, and waiting times of individual types of customers are obtained. Compared to the GI/M/1 and QBD approaches for waiting times and sojourn times in discrete time queues, the dimension of the matrix blocks involved in the M/G/1 approach can be significantly smaller.

18.
Abstract

In the Markov chain model of an autoregressive moving average chart, the post-transition states of nonzero transition probabilities are distributed along one-dimensional lines of a constant gradient over the state space. By considering this characteristic, we propose discretizing the state space parallel to the gradient of these one-dimensional lines. We demonstrate that our method substantially reduces the computational cost of the Markov chain approximation for the average run length in two- and three-dimensional state spaces. Also, we investigate the effect of these one-dimensional lines on the computational cost. Lastly, we generalize our method to state spaces larger than three dimensions.
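The Markov chain approximation being accelerated here is, in its simplest one-dimensional form, the classic Brook-Evans discretization: partition the in-control region into cells, build the substochastic transition matrix Q among them, and solve (I - Q) a = 1 for the average run length. The sketch below applies it to an EWMA statistic with standard normal observations, which illustrates the approximation but not the paper's ARMA chart or its gradient-aligned discretization; `lam`, `h` and the cell count `m` are illustrative.

```python
import numpy as np
from math import erf, sqrt

def _Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ewma_arl(lam, h, m=101):
    """In-control ARL of a two-sided EWMA chart Z_t = (1-lam)*Z_{t-1}
    + lam*X_t with limits +/- h and X_t ~ N(0, 1), via Markov chain
    discretization: split (-h, h) into m equal cells, form Q from the
    normal cdf, and solve (I - Q) a = 1 starting from the cell at 0."""
    width = 2.0 * h / m
    centers = -h + (np.arange(m) + 0.5) * width
    Q = np.empty((m, m))
    for i, zi in enumerate(centers):
        drift = (1.0 - lam) * zi
        for j, zj in enumerate(centers):
            Q[i, j] = (_Phi((zj + width / 2 - drift) / lam)
                       - _Phi((zj - width / 2 - drift) / lam))
    a = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return a[m // 2]   # m is odd, so the middle cell is centered at 0
```

In higher dimensions the transition mass concentrates on the one-dimensional lines described in the abstract, which is exactly what the proposed gradient-parallel discretization exploits.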

19.
We consider conditional exact tests of factor effects in design of experiments for discrete response variables. Similarly to the analysis of contingency tables, Markov chain Monte Carlo methods can be used to perform exact tests, especially when large-sample approximations of the null distributions are poor and the enumeration of the conditional sample space is infeasible. In order to construct a connected Markov chain over the appropriate sample space, one approach is to compute a Markov basis. Theoretically, a Markov basis can be characterized as a generator of a well-specified toric ideal in a polynomial ring and can be computed by computational algebraic software. However, the computation of a Markov basis sometimes becomes infeasible, even for problems of moderate size. In the present article, we obtain closed-form expressions of minimal Markov bases for the main effect models of 2^(p−1) fractional factorial designs of resolution p.

20.
