Similar Literature
20 similar documents found (search time: 15 ms)
1.
2.
ABSTRACT

In this paper, we study a homogeneous, ergodic, finite-state Markov chain with unknown transition probability matrix. Starting from the well-known maximum likelihood estimator of the transition probability matrix, we define estimators of reliability and related measures. Our aim is to show that these estimators are uniformly strongly consistent and converge in distribution to normal random variables. The construction of confidence intervals for availability, reliability, and failure rates is also given. Finally, we give a numerical example to illustrate our results and compare them with those of the usual empirical estimator.
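As a rough illustration of the estimator this abstract starts from (not the paper's own code), the maximum likelihood estimate of a transition matrix is the row-normalized matrix of observed transition counts. A minimal sketch with a made-up two-state chain:

```python
from collections import Counter

def mle_transition_matrix(chain, states):
    """Maximum likelihood estimate of the transition matrix:
    p_ij = n_ij / n_i, where n_ij counts observed i -> j transitions
    and n_i counts visits to state i (excluding the final state)."""
    counts = Counter(zip(chain, chain[1:]))
    row_totals = Counter(chain[:-1])
    return {i: {j: counts[(i, j)] / row_totals[i] if row_totals[i] else 0.0
                for j in states}
            for i in states}

# hypothetical observed state sequence of a repairable system
chain = ['up', 'up', 'down', 'up', 'up', 'up', 'down', 'down', 'up']
P = mle_transition_matrix(chain, ['up', 'down'])
```

Reliability measures (availability, failure rates) are then plug-in functions of the estimated matrix, which is where the consistency and asymptotic-normality results of the paper apply.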

3.
Consider the problem of classifying an incoming message as one of two known p-dimensional signals or as pure noise. Let the noise covariance matrix (assumed to be the same in all three cases) be unknown. We consider the estimation of the “realized signal-to-noise ratio matrix”, an index of discriminatory power, under various loss functions. Optimum estimators are obtained under these loss functions. Finally, an attempt is made to provide a lower confidence bound for the realized signal-to-noise ratio matrix. In the process, the probability distribution of the smaller eigenvalue of a 2 × 2 confluent hypergeometric random matrix is obtained.

4.
Motivated by the bottlenecks in current performance-related network reliability (PRNR) research, this article selects network time delay as the foundation of PRNR evaluation, and defines the PRNR measure as the probability that the actual network delay is no greater than the required value during long-term network operation under specified conditions of resource allocation and operating environment. To obtain the PRNR, a novel threshold-optimization-based network traffic model is proposed to model real network traffic. In this model, the actual traffic is divided into two parts—a-traffic with burst characteristics and b-traffic with steady characteristics—according to an optimized threshold value obtained with Particle Swarm Optimization (PSO). The analysis of PRNR is carried out at two time levels—the macro-time level and the micro-time level—to avoid the difficulties resulting from the great difference between the reliability and performance dimensions. At the macro-time level, the number of operational network workstations, which varies with random failures, is obtained. At the micro-time level, the packet delay is analyzed with the number of operational workstations as a parameter. Combining the analyses at these two time levels, the integrated PRNR model is established, and the influences of different parameters are analyzed.

5.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take account of additional available sampling information for some or even all subsystems of a current system under test with serial reliability structure. In that connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems, and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, by contrast, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performances in the context of high reliability testing. Finally, we apply the proposed methods to reduce the efforts of semiconductor burn-in studies by considering synergies, such as comparable chip layers, among different chip technologies.
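For context, the classical Clopper–Pearson upper bound that the paper refines can be computed by inverting the binomial CDF. A minimal stdlib sketch (the bisection approach and example counts are illustrative, not from the paper):

```python
import math

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(x + 1))

def clopper_pearson_upper(x, n, alpha=0.05):
    """Classical one-sided Clopper-Pearson upper confidence bound for a
    binomial proportion: the p at which P(X <= x; n, p) = alpha,
    found by bisection (the CDF is decreasing in p)."""
    if x >= n:
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha:
            lo = mid   # bound lies above mid
        else:
            hi = mid
    return hi

# e.g. zero observed failures in 100 tested systems, 95% confidence
ub = clopper_pearson_upper(0, 100)
```

With zero failures this reduces to the well-known bound 1 − α^(1/n); the paper's beta-mixture estimator sharpens this classical bound when subsystem data are available.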

6.
A two-parameter discrete gamma distribution is derived from the continuous two-parameter gamma distribution using the general approach for discretizing continuous probability distributions. The one-parameter discrete gamma distribution is obtained as a particular case. A few important distributional and reliability properties of the proposed distribution are examined. Parameter estimation by different methods is discussed, and the performance of the different estimation methods is compared through simulation. Data fitting is carried out to investigate the suitability of the proposed distribution for modeling discrete failure-time data and other count data.
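The general discretization approach referred to above sets P(X = k) = F(k + 1) − F(k), where F is the continuous CDF. A minimal sketch, restricted to integer shape so that the gamma CDF has a closed (Erlang) form; this restriction is an assumption made here to keep the example stdlib-only, not a restriction of the paper:

```python
import math

def erlang_cdf(x, shape, rate):
    """Gamma CDF for integer shape (Erlang):
    F(x) = 1 - exp(-rate*x) * sum_{i<shape} (rate*x)^i / i!."""
    if x <= 0:
        return 0.0
    s = sum((rate * x) ** i / math.factorial(i) for i in range(shape))
    return 1.0 - math.exp(-rate * x) * s

def discrete_gamma_pmf(k, shape, rate):
    """General discretization: P(X = k) = F(k + 1) - F(k), k = 0, 1, 2, ..."""
    return erlang_cdf(k + 1, shape, rate) - erlang_cdf(k, shape, rate)

# illustrative parameters (continuous mean = shape / rate = 4)
pmf = [discrete_gamma_pmf(k, shape=2, rate=0.5) for k in range(200)]
```

The resulting variable is the integer part of the continuous gamma variable, so its mean lies within one unit below the continuous mean.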

7.
The MG-procedure in ranked set sampling is studied in this paper. It is shown that the MG-procedure with any selective probability matrix provides a more efficient estimator than the sample mean based on simple random sampling. The optimum selective probability matrix for the procedure is obtained, and the estimator based on it is shown to be more efficient than that studied by Yanagawa and Shirahata [5]. The median-mean estimator, which is more efficient and may be easier to apply than those proposed by McIntyre [2] and Takahashi and Wakimoto [3], is proposed when the underlying distribution function belongs to a certain subfamily of symmetric distribution functions, which includes the normal, logistic, and double exponential distributions among others.
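To make the efficiency claim concrete, a small simulation (assuming perfect ranking and a standard normal population, neither of which is specified by the abstract) compares the variance of the ranked-set-sample mean with that of the simple-random-sample mean:

```python
import random
import statistics

def rss_sample(m):
    """One ranked-set sample of size m: for each rank i, draw a fresh
    set of m values and keep the i-th smallest (perfect ranking)."""
    return [sorted(random.gauss(0, 1) for _ in range(m))[i]
            for i in range(m)]

def variance_of_mean(sampler, reps=20000):
    """Empirical variance of the sample mean over many replications."""
    return statistics.pvariance(
        [statistics.fmean(sampler()) for _ in range(reps)])

random.seed(1)
m = 3
v_rss = variance_of_mean(lambda: rss_sample(m))
v_srs = variance_of_mean(lambda: [random.gauss(0, 1) for _ in range(m)])
```

For a normal population with set size 3, the ranked-set mean's variance is roughly half that of the simple-random mean, which is the kind of gain the MG-procedure and the median-mean estimator build on.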

8.
ABSTRACT

A new discrete probability distribution with integer support on (−∞, ∞) is proposed as a discrete analog of the continuous logistic distribution. Some of its important distributional and reliability properties are established. Its relationship with some known distributions is discussed. Parameter estimation by the maximum likelihood method is presented. Simulation is done to investigate the properties of the maximum likelihood estimators. A real-life application of the proposed distribution as an empirical model is considered by conducting a comparative data fitting with the Skellam distribution, Kemp's discrete normal, Roy's discrete normal, and the discrete Laplace distribution.
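The same CDF-differencing construction used for other discrete analogs yields a discrete logistic on the integers; a sketch (parameter values are arbitrary, and this is one natural construction, not necessarily the paper's exact definition):

```python
import math

def logistic_cdf(x, mu=0.0, s=1.0):
    """CDF of the continuous logistic distribution."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def discrete_logistic_pmf(k, mu=0.0, s=1.0):
    """P(X = k) = F(k + 1) - F(k), giving integer support on the
    whole line (-inf, inf)."""
    return logistic_cdf(k + 1, mu, s) - logistic_cdf(k, mu, s)

# illustrative parameters; the support extends over all integers
pmf = {k: discrete_logistic_pmf(k, mu=0.0, s=1.5) for k in range(-50, 50)}
```

With mu = 0 the construction is symmetric in the sense that P(X = k) = P(X = −k − 1), mirroring the symmetry of the continuous logistic about its center.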

9.
Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. Unlike operational reliability, storage reliability for certain types of products may not be 100% at the beginning of storage, because of possible initial failures that are normally neglected in storage reliability models. In this paper, a new combinatorial approach is proposed for estimating and predicting storage reliability with possible initial failures: a nonparametric measure estimates the number of failed products and the current reliability at each testing time in storage, and a parametric measure estimates the initial reliability and the failure rate based on the exponential reliability function. The proposed method takes into consideration that initial failure data and reliability testing data, before and during the storage process, are available to provide more accurate estimates of both the initial failure probability and the probability of storage failures. When storage reliability prediction, the main concern in this field, is to be made, the nonparametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, an assessment and prediction method for storage reliability is provided. Finally, numerical examples are given to illustrate the method, and a detailed comparison between the proposed method and the traditional method is presented to examine the rationality of the assessment and prediction of storage reliability. The results should be useful for planning a storage environment, making decisions concerning the maximum length of storage, and identifying production quality.
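A toy version of the parametric measure described above fits R(t) = R0 · exp(−λt) by least squares on the log scale, so that an estimated R0 < 1 captures possible initial failures. The fitting method and the synthetic inspection data are illustrative, not the paper's:

```python
import math

def fit_storage_reliability(times, reliabilities):
    """Least-squares fit of ln R(t) = ln R0 - lam * t, returning
    (R0, lam); R0 < 1 indicates initial failures at time zero."""
    n = len(times)
    y = [math.log(r) for r in reliabilities]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (v - ybar) for t, v in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    lam = -slope                         # storage failure rate
    r0 = math.exp(ybar - slope * tbar)   # initial reliability
    return r0, lam

# synthetic inspection data: initial reliability 0.95, rate 0.01 per year
times = [1, 2, 4, 8]
rels = [0.95 * math.exp(-0.01 * t) for t in times]
r0, lam = fit_storage_reliability(times, rels)
```

In the paper's combined approach, the reliabilities fed into such a parametric fit would themselves come from the nonparametric estimates of failure numbers at each testing time.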

10.
For a stochastic-flow network in which each arc has several possible capacities, we assess the probability that a given amount of data is sent through p (p ≥ 2) minimal paths simultaneously subject to a time threshold. Such a probability is named the system reliability. Without knowing all minimal paths, a solution procedure is first proposed to calculate it. Furthermore, backup-routing is established in advance to declare the first- and second-priority p minimal paths in order to enhance the system reliability. Subsequently, the system reliability under the backup-routing can be computed easily.

11.
We introduce an extended Burr III distribution as an important model for problems in survival analysis and reliability. The new distribution can be expressed as a linear combination of Burr III distributions and thus has tractable properties for the ordinary and incomplete moments, generating and quantile functions, mean deviations, and reliability. The density of its order statistics can be given in terms of an infinite linear combination of Burr III densities. The estimation of the model parameters is approached by maximum likelihood, and the observed information matrix is derived. The proposed model is applied to a real data set to illustrate its potential.

12.
Abstract

Many engineering systems have multiple components with more than one degradation measure, and these measures are dependent on each other owing to complex failure mechanisms, which creates considerable difficulties for reliability work in engineering. To overcome these difficulties, system reliability prediction approaches based on performance degradation theory have developed rapidly in recent years and have shown their superiority over traditional approaches in many applications. This paper proposes reliability models for systems with two dependent degrading components. It is assumed that the degradation paths of the components are governed by gamma processes. For a parallel system, the failure probability function can be approximated by the bivariate Birnbaum–Saunders distribution. From the relationship between parallel and series systems, it follows that the failure probability function of a series system can be expressed by the bivariate Birnbaum–Saunders distribution and its marginal distributions. The model in such a situation is very complicated, analytically intractable, and cumbersome from a computational viewpoint. For this reason, a Bayesian Markov chain Monte Carlo method is developed that allows the maximum likelihood estimates of the parameters to be determined in an efficient manner. After that, confidence intervals for the failure probability of the systems are given. To illustrate the proposed model, a numerical example about railway track is presented.
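A simplified Monte Carlo sketch of the degradation setup: each component's path is a gamma process (so its level at time t is gamma distributed with shape proportional to t), and the parallel system fails only when both paths have crossed their thresholds. Unlike the paper, the two processes are simulated independently here, ignoring the dependence the proposed models are designed to capture; all parameter values are invented:

```python
import random

def gamma_level(t, shape_rate, scale):
    """Degradation accumulated by time t under a stationary gamma
    process: distributed Gamma(shape_rate * t, scale)."""
    return random.gammavariate(shape_rate * t, scale)

def parallel_failure_prob(t, thresholds, reps=20000):
    """Monte Carlo failure probability of a two-component parallel
    system at time t: it fails only when BOTH degradation paths have
    crossed their failure thresholds.  Components are simulated
    independently, a simplification relative to the dependent model."""
    fails = 0
    for _ in range(reps):
        d1 = gamma_level(t, 1.0, 1.0)   # invented parameters
        d2 = gamma_level(t, 1.2, 1.0)
        fails += d1 > thresholds[0] and d2 > thresholds[1]
    return fails / reps

random.seed(7)
p = parallel_failure_prob(t=10.0, thresholds=(12.0, 14.0))
```

The bivariate Birnbaum–Saunders approximation in the paper replaces this sampling with a closed-form approximation to the joint distribution of the two first-passage times.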

13.
A supra-Bayesian (SB) wants to combine the information from a group of k experts to produce her distribution for a probability θ. Each expert gives his counts of what he thinks are the numbers of successes and failures in a sequence of independent trials, each with probability θ of success. These counts, used as a surrogate for each expert's own probability assessment (together with his associated level of confidence in his estimate), allow the SB to build various plausible conjugate models. Such models reflect her beliefs about the reliability of the different experts and take account of different possible patterns of overlap of information between them. The corresponding combination rules are then obtained, compared with other more established rules, and their properties examined.
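Under the simplest plausible conjugate model, in which the experts' counts carry non-overlapping information and all experts are fully trusted, the SB's update is a Beta posterior with pooled counts. The paper's models for overlap and reliability weighting are richer than this sketch, and the expert counts below are invented:

```python
def pool_expert_counts(counts, prior=(1.0, 1.0)):
    """Conjugate pooling assuming independent, fully reliable experts:
    each expert i reports (successes_i, failures_i), and the SB's
    posterior for theta is Beta(a0 + sum s_i, b0 + sum f_i)."""
    a, b = prior
    for s, f in counts:
        a += s
        b += f
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return (a, b), mean, var

experts = [(8, 2), (15, 5), (4, 1)]   # hypothetical expert counts
(a, b), mean, var = pool_expert_counts(experts)
```

Larger reported counts act as higher self-declared confidence: an expert reporting (15, 5) pulls the posterior more strongly than one reporting (3, 1) even though both imply the same point estimate of θ.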

14.
Current design practice is usually to produce a safety system that meets a target level of performance deemed acceptable by the regulators. Safety systems are designed to prevent or alleviate the consequences of potentially hazardous events, and in many modern industries the failure of such systems can lead to whole-system breakdown. In reliability analysis of complex systems involving multiple components, it is assumed that the components have different failure rates with certain probabilities. This leads to extensive computational effort when using the commonly employed generating function (GF) and the recursive algorithm to obtain the reliability of systems consisting of a large number of components. Moreover, when system failure can result in fatalities, it is desirable for the system to achieve an optimal rather than merely adequate level of performance given the limitations placed on available resources. This paper develops a modified branching process combined with a generating function to handle reliability evaluation of a multi-robot complex system. The availability of the system is modeled to compute the failure probability of the whole system as a performance measure. The results help decision-makers in maintenance departments analyze critical components of the system in different time periods to prevent system breakdowns.

15.
When a scale matrix satisfies certain conditions, the orthant probability of the elliptically contoured distribution with that scale matrix is expressed as the same probability for the equicorrelated normal distribution.

16.
Suppose there are k (≥ 2) treatments and each treatment is a Bernoulli process with binomial sampling. The problem of selecting a random-sized subset that contains the treatment with the largest survival probability (reliability, or probability of success) is considered. Based on ideas from both classical approaches and the general Bayesian statistical decision approach, a new subset selection procedure is proposed to solve this kind of problem in both balanced and unbalanced designs. Compared with the classical procedures, the proposed procedure yields a significantly smaller selected subset. Its optimality properties and performance are examined. Methods of selecting and fitting the priors and the results of Monte Carlo simulations on selected important cases are also studied.

17.
This article presents a design approach for sequential constant-stress accelerated life tests (ALT) with an auxiliary acceleration factor (AAF). The use of an AAF, if one exists, is to further amplify the failure probability of highly reliable test items at low stress levels while maintaining an acceptable degree of extrapolation for reliability inference. Based on a Bayesian design criterion, the optimal plan optimizes the sample allocation, the stress combination, and the loading profile of the AAF. In particular, a step-stress loading profile based on an appropriate cumulative exposure (CE) model is chosen for the AAF so that the initial auxiliary stress will not be too harsh. A case study, providing the motivation and practical importance of our study, is presented to illustrate the proposed planning approach.

18.
Let X1, X2, …, Xn be n normal variates with zero means, unit variances, and correlation matrix {ρij}. The orthant probability is the probability that all of the Xi's are simultaneously positive. This paper presents a general reduction method extending the method of Childs (1967), and shows that the probability can be represented by a linear combination of multivariate integrals of order ([n/2] − 1). As illustrations, we apply the proposed method to the quadrivariate and six-variate cases. Some numerical results are also given.
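The orthant probability is easy to estimate by Monte Carlo in the equicorrelated case (the reduction target of entry 15 as well), using the one-factor representation of equicorrelated normals. A sketch, using ρ = 1/2 and n = 3, where the exact value 1/(n + 1) = 0.25 is known:

```python
import math
import random

def orthant_prob_equicorr(n, rho, reps=200000):
    """Monte Carlo estimate of P(X1 > 0, ..., Xn > 0) for standard
    normals with common correlation rho >= 0, via the one-factor
    construction X_i = sqrt(rho) * Z + sqrt(1 - rho) * E_i."""
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    hits = 0
    for _ in range(reps):
        z = random.gauss(0, 1)  # shared factor
        if all(a * z + b * random.gauss(0, 1) > 0 for _ in range(n)):
            hits += 1
    return hits / reps

random.seed(0)
p = orthant_prob_equicorr(3, 0.5)
```

Analytical reductions like the one in this paper matter because, for a general correlation matrix {ρij}, no such simple simulation shortcut gives the dimension reduction that the linear-combination representation provides.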

19.
Wilks’ ratio statistic can be defined in terms of the ratio of the sample generalized variances of two non-independent estimators of the same covariance matrix. Recently this statistic has been proposed as a control statistic for monitoring changes in the covariance matrix of a multivariate normal process in a Phase II situation, particularly when the dimension is larger than the sample size. In this article we derive a technique for decomposing Wilks’ ratio statistic into the product of independent factors that can be associated with the components of the covariance matrix. With these results, we demonstrate that, when a signal is detected in a control procedure for the Phase II monitoring of process variability using the ratio statistic, the signaling value can be decomposed and the process variables contributing to the signal can be specifically identified.

20.
A conditional Bayesian task of testing many hypotheses is stated and solved. The term “conditional” designates the fact that the Bayesian task is stated as a conditional optimization problem in which the probability of a Type I error is restricted and, under this condition, the probability of a Type II error is minimized. The offered statement gives a decision rule that allows us not to accept any hypothesis if, on the basis of the available information, it is impossible to make a decision at the set significance level. In such a case, it is necessary to obtain additional information in the form of additional observation results or to change the significance level of the hypothesis testing. These properties make our statement more general than the usual statement of the Bayesian problem, which is a special case of the one offered, and improve the reliability of the decision made. The calculation results completely confirm the results of the theoretical investigations.
