Similar Literature (20 results)
1.
ABSTRACT

Hybrid censoring is a mixture of Type I and Type II censoring in which the experiment terminates at either the rth failure or a predetermined censoring time, whichever comes first (Type I hybrid) or last (Type II hybrid). In this article, we consider order statistics of the Type I censored data and provide a simple expression for their Kullback–Leibler (KL) information. We then provide expressions for the KL information of the Type I and Type II hybrid censored data.

2.
Stochastic Models, 2013, 29(4): 467–482
Abstract

In this paper, we show that an arbitrary tree-structured quasi-birth–death (QBD) Markov chain can be embedded in a tree-like QBD process with a special structure. Moreover, we present an algebraic proof that applying the natural fixed point iteration (FPI) to the nonlinear matrix equation V = B + ∑_{s=1}^{d} U_s (I − V)^{-1} D_s that solves the tree-like QBD process is equivalent to the more complicated iterative algorithm presented by Yeung and Alfa (1996).
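
As a rough illustration of that fixed point iteration, the sketch below implements V_{k+1} = B + ∑_{s=1}^{d} U_s (I − V_k)^{-1} D_s in Python. The block matrices B, U_s, and D_s here are hypothetical substochastic examples chosen only so the iteration converges; a real tree-like QBD process would supply its own blocks.

```python
import numpy as np

def fpi_tree_qbd(B, U, D, tol=1e-12, max_iter=10000):
    """Natural fixed point iteration for V = B + sum_s U_s (I - V)^{-1} D_s."""
    I = np.eye(B.shape[0])
    V = np.zeros_like(B)
    for _ in range(max_iter):
        V_next = B + sum(Us @ np.linalg.solve(I - V, Ds) for Us, Ds in zip(U, D))
        if np.max(np.abs(V_next - V)) < tol:
            return V_next
        V = V_next
    raise RuntimeError("fixed point iteration did not converge")

# Hypothetical substochastic blocks (d = 2 branches, phase dimension 2).
B = np.array([[0.10, 0.05], [0.05, 0.10]])
U = [np.array([[0.10, 0.05], [0.05, 0.10]])] * 2
D = [np.array([[0.10, 0.00], [0.00, 0.10]])] * 2

V = fpi_tree_qbd(B, U, D)
print(V)
```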

3.
For the complete sample and the right Type II censored sample, Chen [Joint confidence region for the parameters of Pareto distribution. Metrika 44 (1996), pp. 191–197] proposed interval estimation of the parameter θ and a joint confidence region for the two parameters of the Pareto distribution. This paper proposes two methods to construct the confidence region of the two parameters of the Pareto distribution for the progressive Type II censored sample. A simulation study comparing the performance of the two methods concludes that Method 1 is superior to Method 2, yielding a smaller confidence region. Interval estimation of the parameter ν is also given under progressive Type II censoring. In addition, predictive intervals for a future observation and for the ratio of two future consecutive failure times based on the progressive Type II censored sample are proposed. Finally, an example is given to illustrate all the interval estimations in this paper.

4.
In this study, the performances of linear regression techniques used in method comparison studies, notably in clinical chemistry, are compared via Monte Carlo simulation. Regression techniques that take the measurement errors of both the dependent and independent variables into account are called Type II regression techniques. We also compare the performances of Type II and Type I regression techniques (classical techniques that do not account for measurement error in the independent variable) for different sample sizes and different shape parameters of the Weibull distribution. The mean square error is used as the performance criterion for each technique. MATLAB 7.02 software is used in the simulation study. In all conditions, the ordinary least-squares (OLS) bisector regression technique, which bisects OLS(Y | X) and OLS(X | Y), shows the best performance.
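
To make the OLS-bisector idea concrete, the following sketch fits OLS(Y | X) and OLS(X | Y) and takes the line bisecting the angle between them, using the closed-form slope commonly attributed to Isobe et al. (1990). The simulated Weibull-shaped data and measurement-error levels are illustrative assumptions, not the settings of the study.

```python
import numpy as np

def ols_bisector(x, y):
    """Slope and intercept of the OLS-bisector line of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx                # slope of OLS(Y | X)
    b2 = syy / sxy                # slope of OLS(X | Y), expressed in the (x, y) plane
    slope = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    intercept = ym - slope * xm
    return slope, intercept

rng = np.random.default_rng(0)
x_true = rng.weibull(2.0, size=200)           # Weibull-shaped "true" values (assumed)
x = x_true + rng.normal(0, 0.1, size=200)     # measurement error in the independent variable
y = 1.5 * x_true + 0.3 + rng.normal(0, 0.1, size=200)
print(ols_bisector(x, y))
```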

5.
6.
Stochastic Models, 2013, 29(1): 77–99
Abstract

In this paper, we present sufficient conditions under which the stationary probability vector of a QBD process with both infinite levels and phases decays geometrically, characterized by the convergence norm η and the 1/η-left-invariant vector x of the rate matrix R. We also present a method to compute η and x based on spectral properties of the censored matrix of a matrix function constructed with the repeating blocks of the transition matrix of the QBD process. What makes this method attractive is its simplicity: finding η reduces to determining the zeros of a polynomial. We demonstrate the application of our method through a few interesting examples.

7.
Queues with Markovian arrival and service processes, i.e., MAP/MAP/1 queues, have been useful in the analysis of computer and communication systems, and different representations for their stationary sojourn time and queue length distributions have been derived. More specifically, the class of MAP/MAP/1 queues lies at the intersection of the class of QBD queues and the class of semi-Markovian queues. While QBD queues have matrix exponential representations for their queue length and sojourn time distributions of order N and N², respectively, where N is the size of the background continuous-time Markov chain, the reverse is true for a semi-Markovian queue. As the class of MAP/MAP/1 queues lies at the intersection, both the queue length and sojourn time distributions of a MAP/MAP/1 queue have an order-N matrix exponential representation. The aim of this article is to understand why the order-N² distributions of the sojourn time of a QBD queue and the queue length of a semi-Markovian queue can be reduced to an order-N distribution in the specific case of a MAP/MAP/1 queue. We show that the key lies in establishing the commutativity of some fundamental matrices involved in the analysis of the MAP/MAP/1 queue.

8.
The present Monte Carlo simulation study adds to the literature by analyzing parameter bias, rates of Type I and Type II error, and variance inflation factor (VIF) values produced under various multicollinearity conditions by multiple regressions with two, four, and six predictors. Findings indicate that multicollinearity is unrelated to Type I error but increases Type II error. Investigation of bias suggests that multicollinearity increases the variability in parameter bias while leading to overall underestimation of parameters. Collinearity also increases the VIF. For all diagnostics, however, increasing the number of predictors interacts with multicollinearity to compound the observed problems.
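
For reference, the VIF diagnostic discussed above is VIF_j = 1/(1 − R_j²), where R_j² comes from regressing the j-th predictor on the remaining predictors. The sketch below computes it on simulated collinear predictors; the data-generating choices are illustrative assumptions only.

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on the other columns."""
    X = np.asarray(X, float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
z = rng.normal(size=500)                                                 # shared latent factor
X = np.column_stack([z + 0.3 * rng.normal(size=500) for _ in range(4)])  # four collinear predictors
print(vif(X))
```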

9.
Stochastic Models, 2013, 29(1): 55–69
Abstract

This paper presents an improved method to calculate the delay distribution of a type-k customer in a first-come-first-served (FCFS) discrete-time queueing system with multiple types of customers, where each type has different service requirements, and c servers, with c = 1, 2 (the MMAP[K]/PH[K]/c queue). The first algorithms to compute this delay distribution, using the GI/M/1 paradigm, were presented by Van Houdt and Blondia [Van Houdt, B.; Blondia, C. The delay distribution of a type k customer in a first come first served MMAP[K]/PH[K]/1 queue. J. Appl. Probab. 2002, 39 (1), 213–222; The waiting time distribution of a type k customer in a FCFS MMAP[K]/PH[K]/2 queue. Technical Report; 2002]. The two most limiting properties of these algorithms are: (i) the computation of the rate matrix R related to the GI/M/1-type Markov chain, and (ii) the amount of memory needed to store the transition matrices A_l and B_l. In this paper we demonstrate that each of the three GI/M/1-type Markov chains used to develop the algorithms in the above articles can be reduced to a QBD with a block size that is only marginally larger than that of its corresponding GI/M/1-type Markov chain. As a result, the two major limiting factors of each of these algorithms are drastically reduced to computing the G matrix of the QBD and storing the six matrices that characterize the QBD. Moreover, these algorithms are easier to implement, especially for the system with c = 2 servers. We also include some numerical examples that further demonstrate the reduction in computational resources.

10.
Type I and Type II censored data arise frequently in controlled laboratory studies concerning time to a particular event (e.g., death of an animal or failure of a physical device). Log-location-scale distributions (e.g., Weibull, lognormal, and loglogistic) are commonly used to model the resulting data. Maximum likelihood (ML) is generally used to obtain parameter estimates when the data are censored. The Fisher information matrix can be used to obtain large-sample approximate variances and covariances of the ML estimates or to estimate these variances and covariances from data. The derivations of the Fisher information matrix proceed differently for Type I (time censoring) and Type II (failure censoring) because the number of failures is random in Type I censoring, whereas the length of the data collection period is random in Type II censoring. Under regularity conditions (met by the above-mentioned log-location-scale distributions), we outline the different derivations and show that the Fisher information matrices for Type I and Type II censoring are asymptotically equivalent.

11.
Making predictions of future realized values of random variables based on currently available data is a frequent task in statistical applications. In some applications, the interest is in a two-sided simultaneous prediction interval (SPI) that contains at least k out of m future observations with a certain confidence level, based on n previous observations from the same distribution. A closely related problem is to obtain a one-sided upper (or lower) simultaneous prediction bound (SPB) to exceed (or be exceeded by) at least k out of m future observations. In this paper, we provide a general approach for computing SPIs and SPBs based on complete or right-censored data from a particular member of the (log-)location-scale family of distributions. The proposed simulation-based procedure provides exact coverage probability for complete and Type II censored data. For Type I censored data, our simulation results show that the procedure performs satisfactorily in small samples. We use three applications to illustrate the proposed simultaneous prediction intervals and bounds.
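
The following sketch illustrates the flavor of such a simulation-based calibration for a one-sided upper SPB from complete normal data (the normal being one member of the location-scale family): the factor r in the bound x̄ + r·s is tuned by simulation so that at least k of m future observations fall below the bound with the target probability. This is a simplified illustration of the general idea, not the authors' algorithm, and it ignores censoring.

```python
import numpy as np

def calibrate_spb_factor(n, m, k, conf=0.95, n_sim=20000, seed=0):
    """Find r so that P(at least k of m future obs <= xbar + r*s) ~ conf for normal samples."""
    rng = np.random.default_rng(seed)
    xbar = rng.normal(size=n_sim) / np.sqrt(n)                   # sampling distribution of the mean
    s = np.sqrt(rng.chisquare(n - 1, size=n_sim) / (n - 1))      # sampling distribution of the sd
    future = rng.normal(size=(n_sim, m))                         # m future observations per replicate

    def coverage(r):
        bound = (xbar + r * s)[:, None]
        return np.mean(np.sum(future <= bound, axis=1) >= k)

    lo, hi = 0.0, 10.0
    for _ in range(60):                                          # bisection: coverage increases in r
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if coverage(mid) < conf else (lo, mid)
    return 0.5 * (lo + hi)

# e.g. an upper bound meant to exceed at least 8 of 10 future observations, given n = 30 past ones
print(calibrate_spb_factor(n=30, m=10, k=8))
```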

12.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional approach did not involve any model selection, while the second, conditional approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For a simple monotone dose-response, the corrected test had similar power to the unconditional approach, while for a non-monotone dose-response it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrates the advantage of the proposed approach. Conclusion: Our results confirm that a posteriori selection of the functional form of the dose-response induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.

13.
Based on progressively Type II censored samples, we consider the estimation of R = P(Y < X) when X and Y are independent Weibull random variables with different shape parameters but the same scale parameter. The maximum likelihood estimator, approximate maximum likelihood estimator, and Bayes estimator of R are obtained. Based on the asymptotic distribution of R, the confidence interval of R is obtained. Two bootstrap confidence intervals are also proposed. Analysis of a real data set is given for illustrative purposes. Monte Carlo simulations are also performed to compare the different proposed methods.
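
For intuition about the quantity being estimated, R = P(Y < X) can be written as ∫ F_Y(x) f_X(x) dx and evaluated numerically once the parameters are known. The sketch below does this for hypothetical Weibull shape parameters with a common scale and checks the result by Monte Carlo; it is not the MLE, approximate MLE, or Bayes machinery of the paper.

```python
import numpy as np
from scipy import integrate, stats

def reliability_R(shape_x, shape_y, scale):
    """R = P(Y < X) for independent Weibulls sharing a scale parameter."""
    fx = stats.weibull_min(shape_x, scale=scale)
    fy = stats.weibull_min(shape_y, scale=scale)
    val, _ = integrate.quad(lambda x: fy.cdf(x) * fx.pdf(x), 0, np.inf)
    return val

print(reliability_R(shape_x=2.0, shape_y=1.5, scale=1.0))   # hypothetical parameter values

# quick Monte Carlo check
rng = np.random.default_rng(0)
x = stats.weibull_min.rvs(2.0, scale=1.0, size=200000, random_state=rng)
y = stats.weibull_min.rvs(1.5, scale=1.0, size=200000, random_state=rng)
print(np.mean(y < x))
```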

14.
A Monte Carlo simulation evaluated five pairwise multiple comparison procedures for controlling Type I error rates, any-pair power, and all-pairs power. Realistic conditions of non-normality were based on a previous survey. Variance ratios were varied from 1:1 to 64:1. The procedures evaluated included Tukey's honestly significant difference (HSD) preceded by an F test, the Hayter–Fisher, the Games–Howell preceded by an F test, the Peritz with F tests, and the Peritz with Alexander–Govern tests. Tukey's procedure shows the greatest robustness in Type I error control. Any-pair power is generally best with one of the Peritz procedures. All-pairs power is best with the Peritz F test procedure. However, Tukey's HSD preceded by the Alexander–Govern F test may provide the best combination for controlling Type I error and power rates under a variety of conditions of non-normality and variance heterogeneity.
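
For readers who want to try the basic building block, recent versions of SciPy ship a Tukey HSD implementation; the sketch below runs it after a protecting F test on simulated groups with unequal variances. This reproduces only the simplest of the procedures compared above (HSD preceded by an F test), and the group settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# three groups with a 1:1:4 variance ratio and slightly different means (made-up settings)
g1 = rng.normal(0.0, 1.0, size=30)
g2 = rng.normal(0.3, 1.0, size=30)
g3 = rng.normal(0.8, 2.0, size=30)

F, p = f_oneway(g1, g2, g3)      # protected version: proceed only if the omnibus F test rejects
if p < 0.05:
    res = tukey_hsd(g1, g2, g3)
    print(res)                    # pairwise mean differences with adjusted p-values
```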

15.
Consider k (≥2) independent Type I extreme value populations with unknown location parameters and a common known scale parameter. With samples of the same size, we study procedures based on the sample means for (1) selecting the population having the largest location parameter, (2) selecting the population having the smallest location parameter, and (3) testing for equality of all the location parameters. We use Bechhofer's indifference-zone and Gupta's subset selection formulations. Tables of constants for implementation are provided, based on approximating the distribution of the standardized sample mean by a generalized Tukey lambda distribution. Examples are provided for all procedures.

16.
In terms of the risk of making a Type I error in evaluating a null hypothesis of equality, requiring two independent confirmatory trials with two-sided p-values less than 0.05 is equivalent to requiring one confirmatory trial with a two-sided p-value less than 0.00125. Furthermore, the use of a single confirmatory trial is gaining acceptability, with discussion in both ICH E9 and a CPMP Points to Consider document. Given the growing acceptance of this approach, this note provides a formula for the sample size savings obtained with the single clinical trial approach, depending on the levels of Type I and Type II errors chosen. For two replicate trials each powered at 90%, which corresponds to a single larger trial powered at 81%, an approximately 19% reduction in total sample size is achieved with the single-trial approach. Alternatively, a single trial with the same sample size as the total sample size of two smaller trials will have much greater power. For example, in the case where two trials are each powered at 90% for two-sided α = 0.05, yielding an overall power of 81%, a single trial using two-sided α = 0.00125 would have 91% power.
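
The figures quoted here can be checked with standard normal-approximation sample size formulas, in which the required sample size is proportional to (z_{α/2} + z_β)². The sketch below reproduces the roughly 19% sample size reduction and the 91% power figure; the normal approximation is an assumption of this illustration rather than a detail given in the note.

```python
from scipy.stats import norm

alpha_two = 0.05      # two-sided alpha for each of the two replicate trials
alpha_one = 0.00125   # equivalent two-sided alpha for one trial (0.025**2 = 0.000625 one-sided)
power_each = 0.90

z_a2 = norm.ppf(1 - alpha_two / 2)    # ~1.96
z_a1 = norm.ppf(1 - alpha_one / 2)    # ~3.23
z_b = norm.ppf(power_each)            # ~1.28

# sample size is proportional to (z_alpha + z_beta)^2 under the normal approximation
n_two_trials = 2 * (z_a2 + z_b) ** 2            # two trials, each powered at 90%
n_single = (z_a1 + norm.ppf(0.81)) ** 2         # one trial powered at 81% (= 0.9**2)
print("sample size reduction:", 1 - n_single / n_two_trials)   # prints ~0.198, the ~19% quoted above

# power of a single trial at two-sided alpha = 0.00125 with the total sample size of the two trials
print("power of single trial:", norm.cdf(n_two_trials ** 0.5 - z_a1))   # prints ~0.91
```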

17.
This paper deals with the problem of interval estimation of the scale parameter in the two-parameter exponential distribution subject to Type II double censoring. Based on a Type II doubly censored sample, we construct a class of interval estimators of the scale parameter that are better than the shortest-length affine equivariant interval both in coverage probability and in length. The procedure can be repeated to make further improvements. An extension of the method leads to a smoothly improved confidence interval that improves the interval length with probability one. All improved intervals belong to the class of scale equivariant intervals.

18.
Stochastic Models, 2013, 29(1): 159–171
Generalized inverses of I − P, where P is a stochastic matrix, play an important role in the theory of Markov chains. In particular, the group inverse (I − P)^# has a probabilistic interpretation and is well suited for algorithmic implementation. We determine (I − P)^# for finite homogeneous quasi-birth-and-death (QBD) processes by exploiting both the structure of the process and the probabilistic properties of the group inverse.
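
As a concrete illustration of the group inverse, the sketch below computes (I − P)^# for a small irreducible stochastic matrix using the classical identity (I − P)^# = (I − P + eπ)^{-1} − eπ, where π is the stationary distribution and e a column of ones, and then verifies the defining group inverse properties. This is the generic finite-chain formula, not the structured QBD algorithm developed in the paper; the matrix P is a made-up example.

```python
import numpy as np

def group_inverse_of_I_minus_P(P):
    """Group inverse (I - P)^# for an irreducible stochastic matrix P."""
    n = P.shape[0]
    I = np.eye(n)
    # stationary distribution: solve pi (I - P) = 0 together with sum(pi) = 1 (least squares)
    A = np.vstack([(I - P).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    E = np.outer(np.ones(n), pi)              # the limiting matrix e * pi
    return np.linalg.inv(I - P + E) - E

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
G = group_inverse_of_I_minus_P(P)
A = np.eye(3) - P
# check A G A = A, G A G = G, and A G = G A
print(np.allclose(A @ G @ A, A), np.allclose(G @ A @ G, G), np.allclose(A @ G, G @ A))
```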

19.
Summary  The exact distributions of the product XY are derived when X and Y are independent random variables from the Type I, Type II, or Type III extreme value distribution. Of the six possible combinations, only three yield closed-form expressions for the distribution of XY. A detailed application of the results is provided to drought data from Nebraska. The author would like to thank the referees and the Associate Editor for carefully reading the paper and for their great help in improving it.

20.
In this paper, we propose the application of group screening methods for analyzing data from E(fNOD)-optimal mixed-level supersaturated designs possessing the equal-occurrence property. Supersaturated designs are a large class of factorial designs that can be used for screening out the important factors from a large set of potentially active variables. The great advantage of these designs is that they drastically reduce the experimental cost, but their critical disadvantage is the high degree of confounding among factorial effects. Based on the idea of group screening methods, the f factors are subdivided into g "group-factors". The "group-factors" are then studied using penalized likelihood statistical analysis methods with a factorial design having orthogonal or near-orthogonal columns. All factors in groups found to have a large effect are then studied in a second stage of experiments. A comparison of the Type I and Type II error rates of various estimation methods via simulation experiments is performed. The results are presented in tables, and a discussion follows.
