Similar Documents
20 similar documents retrieved.
1.
In this paper we consider a Markovian perfect debugging model in which software failures are caused by two types of faults: one that is easily detected and one that is difficult to detect. When a failure occurs, perfect debugging is performed immediately, so the fault content is reduced by one fault. We also treat the debugging time as a variable in developing the new model. Based on this perfect debugging model, we propose an optimal software release policy that satisfies requirements on both software reliability and the expected number of faults to be removed before the software is released. Several measures, including the distribution of the first passage time to a specified number of removed faults, are also obtained from the proposed model.
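As a quick illustration of the first-passage measure, the sketch below simulates a simplified version of such a chain: each remaining easy fault fails at rate phi_easy, each hard fault at rate phi_hard, and the fault that triggers a failure is removed instantly. All counts and rates here are illustrative assumptions, not values from the paper.

```python
import random

def first_passage_time(n_easy, n_hard, phi_easy, phi_hard, k_target, rng):
    """One path of a two-fault-type perfect-debugging chain: the total
    failure rate is the sum of per-fault rates, and the triggering fault
    is removed immediately (perfect debugging)."""
    t, removed = 0.0, 0
    while removed < k_target and (n_easy + n_hard) > 0:
        rate = n_easy * phi_easy + n_hard * phi_hard  # total failure rate
        t += rng.expovariate(rate)                    # exponential sojourn
        if rng.random() < n_easy * phi_easy / rate:   # was an easy fault hit?
            n_easy -= 1
        else:
            n_hard -= 1
        removed += 1
    return t

rng = random.Random(1)
sample = sorted(first_passage_time(20, 10, 0.05, 0.01, 15, rng)
                for _ in range(10_000))
print("median first-passage time to 15 removals:", sample[len(sample) // 2])
```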

2.
Current design practice is usually to produce a safety system that meets a target level of performance deemed acceptable by the regulators. Safety systems are designed to prevent or mitigate the consequences of potentially hazardous events, and in many modern industries the failure of such systems can lead to whole-system breakdown. In reliability analysis of complex systems with multiple components, the components are assumed to have different failure rates with certain probabilities. This makes the commonly employed generating function (GF) and recursive algorithms computationally expensive for systems consisting of a large number of components. Moreover, when system failure results in fatalities, it is desirable for the system to achieve an optimal rather than merely adequate level of performance given the limited resources available. This paper develops a modified branching process, combined with a generating function, for the reliability evaluation of a multi-robot complex system. The availability of the system is modelled to compute the failure probability of the whole system as a performance measure. The results help decision-makers in maintenance departments analyse critical components of the system over different time periods to prevent system breakdowns.
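The generating-function step itself amounts to polynomial multiplication. A minimal sketch, assuming independent components with heterogeneous up-probabilities and a k-out-of-n success criterion (the paper's branching-process modification is not reproduced here):

```python
def working_distribution(p):
    """Multiply the per-component generating functions (q_i + p_i * z);
    coeffs[j] ends up as P(exactly j components are working)."""
    coeffs = [1.0]
    for pi in p:
        nxt = [0.0] * (len(coeffs) + 1)
        for j, c in enumerate(coeffs):
            nxt[j] += c * (1.0 - pi)    # this component is failed
            nxt[j + 1] += c * pi        # this component is working
        coeffs = nxt
    return coeffs

# availability of a hypothetical 3-out-of-5 multi-robot system
p = [0.97, 0.95, 0.90, 0.99, 0.93]
dist = working_distribution(p)
print("P(at least 3 of 5 robots up):", sum(dist[3:]))
```

The polynomial product costs O(n^2) rather than enumerating all 2^n component states, which is the usual argument for the GF approach in large systems.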

3.
Software reliability assessment using accelerated testing methods
The use of operational profiles and usage-based testing has received considerable attention recently in the software engineering literature. Testing under the actual operational profile can, however, be expensive, time-consuming, or even infeasible in situations where the performance of a system is dominated by infrequent but highly critical events. We consider a real application that deals with telecommunications network restoration after network failure caused by cuts in fibre optic cables. We use this application to demonstrate the usefulness of traditional accelerated testing methods to test and estimate software reliability. These methods, which have been extensively used in hardware reliability, have an important role to play in software reliability assessment as well.

4.
Motivated by bottlenecks in current performance-related network reliability (PRNR) research, this article selects network time delay as the foundation for PRNR evaluation and defines the PRNR measure as the probability that the actual network delay is no greater than the required value during long-term network operation, under specified conditions of resource allocation and network operating environment. To obtain the PRNR, a novel threshold-optimization-based traffic model is proposed for real network traffic. In this model, the actual traffic is divided into two parts according to an optimized threshold value obtained with Particle Swarm Optimization (PSO): a-traffic, with burst characteristics, and b-traffic, with steady characteristics. The PRNR is analysed at two time levels, a macro level and a micro level, to avoid the difficulties caused by the great difference between the reliability and performance dimensions. At the macro-time level, the number of operational network workstations, which varies with random failures, is obtained. At the micro-time level, the packet delay is analysed with the number of operational workstations as a parameter. Combining the analyses at these two levels, the integrated PRNR model is established, and the influences of different parameters are analysed.
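The article's exact threshold criterion is not given in this abstract, so the sketch below uses total within-class variance as a stand-in objective and optimizes the split point with a basic PSO; the inertia and acceleration constants are conventional defaults, not the paper's settings.

```python
import random

def within_class_variance(thr, traffic):
    """Stand-in splitting criterion: summed squared deviations of the
    burst part (a-traffic, above thr) and steady part (b-traffic)."""
    a = [x for x in traffic if x > thr]
    b = [x for x in traffic if x <= thr]
    def ssq(xs):
        if len(xs) < 2:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    return ssq(a) + ssq(b)

def pso_threshold(traffic, n_particles=20, n_iter=200, seed=0):
    rng = random.Random(seed)
    lo, hi = min(traffic), max(traffic)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [within_class_variance(x, traffic) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)   # stay in range
            f = within_class_variance(pos[i], traffic)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest

traffic = [3, 4, 5, 4, 30, 3, 5, 45, 4, 4, 38, 5]   # packets per slot, say
print("optimized threshold:", pso_threshold(traffic))
```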

5.
In this paper, we consider the Birnbaum–Saunders distribution as a life model and develop various acceptance sampling schemes based on truncated life tests. We develop a double sampling plan and determine the design parameters satisfying both the producer's and the consumer's risk simultaneously for specified reliability levels, expressed in terms of the ratio of the true mean life to the specified life. We also propose a group sampling plan and determine its parameters by the same two-point method. Tables are constructed for the proposed sampling plans and the results are illustrated with examples.
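Given the probability p that a unit fails by the truncation time, the acceptance probability of a double plan is a finite binomial sum, and the two-point design search scans plan constants until both risks are met. A hedged sketch with illustrative constants (in the Birnbaum–Saunders CDF below, beta is the scale and also the median life; the rejection rule after the first sample is the common simplification r1 = c2 + 1):

```python
from math import comb, erf, sqrt

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF via the standard normal CDF."""
    z = (sqrt(t / beta) - sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def accept_prob_double(n1, c1, n2, c2, p):
    """Accept if d1 <= c1; reject if d1 > c2; otherwise test n2 more
    units and accept if d1 + d2 <= c2."""
    pa = binom_cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += (comb(n1, d1) * p**d1 * (1 - p)**(n1 - d1)
               * binom_cdf(c2 - d1, n2, p))
    return pa

# failure probability by the truncation time under an assumed BS life
p_fail = bs_cdf(t=1.0, alpha=0.5, beta=2.0)
print("P(accept):", round(accept_prob_double(20, 0, 20, 2, p_fail), 4))
```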

6.
Most software reliability models use the maximum likelihood method to estimate the parameters of the model. The maximum likelihood method assumes that all inter-failure time observations contribute equally to the likelihood function. Since software reliability is expected to exhibit growth, a weighted likelihood function that gives higher weights to later inter-failure times than to earlier ones is suggested. The accuracy of the predictions obtained using the weighted likelihood method is compared with that of predictions obtained when the parameters are estimated by the maximum likelihood method, on three real datasets. A simulation study is also conducted.
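For exponential inter-failure times the weighted score equation solves in closed form, which makes the idea easy to see. A minimal sketch with made-up failure data and linearly increasing weights (the paper's weighting scheme and model may differ):

```python
def weighted_mle_exponential(times, weights):
    """Maximize sum_i w_i * log f(t_i; theta) for an exponential density
    with mean theta; setting the weighted score to zero gives
    theta_hat = sum(w_i * t_i) / sum(w_i)."""
    return sum(w * t for w, t in zip(weights, times)) / sum(weights)

# inter-failure times showing reliability growth (later gaps are longer)
t = [1.2, 0.8, 2.1, 1.9, 3.4, 2.8, 5.0, 4.6, 7.1, 8.3]
n = len(t)
w_flat = [1.0] * n                          # ordinary MLE: equal weights
w_linear = [(i + 1) / n for i in range(n)]  # favour recent behaviour

print("MLE of mean TBF:         ", weighted_mle_exponential(t, w_flat))
print("weighted MLE of mean TBF:", weighted_mle_exponential(t, w_linear))
```

The weighted estimate sits closer to the recent, longer gaps, which is exactly the growth-tracking behaviour the abstract argues for.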

7.
In this article, we apply the simulated annealing algorithm to determine optimally spaced inspection times for the two-parameter Weibull distribution for any given progressive Type-I grouped censoring plan. We examine how the asymptotic relative efficiencies of the estimates are affected by the position of the monitoring points and the number of monitoring points used. A comparison of different inspection plans is made that will enable the user to select a plan for a specified quality goal. Using the same algorithm, we can also determine an optimal progressive Type-I grouped censoring plan when the inspection times and the expected proportions of total failures in the experiment are pre-fixed. Finally, we discuss the sample size and the acceptance constant of the progressively Type-I grouped censored reliability sampling plan when the optimal inspection times are used.
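A generic annealing skeleton for this kind of search is shown below. The objective is left abstract because the paper's efficiency criterion (built from the asymptotic covariance of the grouped-Weibull estimates) is not reproduced here; a toy quadratic stands in so the code runs.

```python
import math
import random

def anneal_inspection_times(criterion, t_init, t_max, n_iter=5000, seed=0):
    """Simulated-annealing search over an ordered vector of inspection
    times in (0, t_max]; `criterion` is the quantity to be minimised,
    e.g. a trace or determinant criterion from the asymptotic covariance."""
    rng = random.Random(seed)
    cur = sorted(t_init)
    cur_f = criterion(cur)
    best, best_f = cur[:], cur_f
    for k in range(n_iter):
        temp = 1.0 / (1.0 + 0.01 * k)                # simple cooling schedule
        cand = cur[:]
        i = rng.randrange(len(cand))
        cand[i] = min(max(cand[i] + rng.gauss(0.0, 0.05 * t_max), 1e-6), t_max)
        cand.sort()                                  # keep inspections ordered
        f = criterion(cand)
        if f < cur_f or rng.random() < math.exp(-(f - cur_f) / temp):
            cur, cur_f = cand, f                     # accept (maybe uphill)
            if f < best_f:
                best, best_f = cand[:], f
    return best, best_f

# toy criterion just to make the skeleton runnable: pull three inspection
# times toward an assumed target grid on a horizon of 10
toy = lambda ts: sum((t - q) ** 2 for t, q in zip(ts, (2.5, 5.0, 7.5)))
print(anneal_inspection_times(toy, [1.0, 2.0, 3.0], t_max=10.0))
```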

8.
Sampling plans in which items are put on test and their lifetimes are recorded in order to decide whether to accept or reject a submitted lot are called reliability test plans. The basic probability model for the life of the product is specified as the well-known log-logistic distribution with a known shape parameter. For a given producer's risk, the sample size, termination number, and waiting time needed to terminate the test plan are computed. The preferability of the test plan over similar plans in the literature is established with respect to the cost and time of the experiment.
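The minimum sample size in such plans typically comes from a binomial search: fix the acceptance number c and the ratio of the test time to the specified median, convert that ratio into a failure probability through the log-logistic CDF, and increase n until the confidence requirement holds. A sketch under these assumptions (the constants are illustrative):

```python
from math import comb

def loglogistic_cdf(t, median, shape):
    """Log-logistic CDF with the scale written in terms of the median."""
    return 1.0 / (1.0 + (t / median) ** (-shape))

def min_sample_size(confidence, c, t_over_median, shape):
    """Smallest n such that, if the true median were only the specified
    minimum, passing the test with <= c failures by the truncation time
    would happen with probability at most 1 - confidence."""
    p = loglogistic_cdf(t_over_median, median=1.0, shape=shape)
    n = c + 1
    while sum(comb(n, i) * p**i * (1 - p)**(n - i)
              for i in range(c + 1)) > 1 - confidence:
        n += 1
    return n

# shape 2, test truncated at 0.7x the specified median, acceptance number 2
print(min_sample_size(confidence=0.95, c=2, t_over_median=0.7, shape=2.0))
```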

9.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation: fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are supposed to have different lifetimes, the rate of degradation is treated as a random variable, and the time-to-failure distribution is obtained at a critical level of degradation. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.
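In the non-hierarchical conjugate case the estimators are closed-form, which is enough to illustrate the setup (Gibbs sampling is needed only for the hierarchical extension). A sketch with hypothetical per-wheel degradation rates and a Gamma(1, 1) prior:

```python
def exp_rate_estimates(theta, a=1.0, b=1.0):
    """Degradation rates theta_i are modelled as draws from an exponential
    density with rate lam; with a Gamma(a, b) prior on lam the posterior
    is Gamma(a + n, b + sum(theta)), so both estimators are closed-form."""
    n, s = len(theta), sum(theta)
    return n / s, (a + n) / (b + s)   # MLE, posterior mean

# rates fitted from individual wear paths D_i(t) = theta_i * t (toy data)
rates = [0.021, 0.035, 0.028, 0.044, 0.031]
mle, bayes = exp_rate_estimates(rates)
print("MLE:", round(mle, 2), " Bayes (Gamma(1,1) prior):", round(bayes, 2))
```

With only five observations the prior pulls the Bayes estimate well below the MLE, which is the kind of sensitivity the paper's robustness study of the posterior mean addresses.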

10.
In the usual repeated measurements designs (RMDs), the subjects are all observed for the same number of periods and the optimum RMDs require specified numbers of subjects, usually depending on the number of treatments to be used. In practice, it is sometimes not feasible to meet these requirements. To overcome this problem, alternative designs are suggested where any number of available subjects may be used and they may be observed for different periods. These designs are based on suitable serially balanced sequences which are shown to be optimal. Moreover, besides the usual direct and residual effects, the model considered has an extra term due to the interaction effect between them. The recommended designs are universally optimal in a very general class.

11.
In the design, manufacture, and maintenance of components, particular attention is paid to the component reliability R, the probability that the strength X of a component will exceed a stress Y to which it is subjected. The problem addressed here is the design (or redesign) of a component to meet a specified reliability R*. While certain characteristics of the random variables X and Y are assumed (symmetry of X about a unique median, for example), it is not assumed that the form of the distribution of (X, Y) is known, nor that X and Y are independent. A design procedure is recommended based on a variation of the stochastic approximation procedure of Dupac and Kral (1972), which in general recursively estimates the root of a regression curve when both the independent and dependent regression variables are subject to experimental error.
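A Robbins-Monro-type recursion captures the flavour of such a procedure: nudge the design parameter after each pass/fail experiment, with no distributional form assumed for (X, Y). The sketch below is a generic stochastic approximation, not the Dupac-Kral variant itself, and the trial model is purely illustrative.

```python
import random

def robbins_monro_design(target_r, trial, x0, n_steps=20_000, seed=0):
    """Search for the design value x (e.g. the median strength) at which
    Pr(X > Y) = target_r.  `trial(x, rng)` runs one stress-strength
    experiment and returns 1 if the component survives, 0 otherwise."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_steps + 1):
        z = trial(x, rng)
        x += (1.0 / n) * (target_r - z)   # step down if surviving too often
    return x

# illustrative experiment: strength ~ N(x, 1), stress ~ N(5, 1)
def trial(x, rng):
    return 1 if rng.gauss(x, 1.0) > rng.gauss(5.0, 1.0) else 0

# should approach x with Phi((x - 5)/sqrt(2)) = 0.9, i.e. about 6.81
print(robbins_monro_design(target_r=0.9, trial=trial, x0=5.0))
```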

12.
Reliability is a major concern in software development because unreliable software can cause failures in computer systems, which can be hazardous. One way to enhance the reliability of software is to detect and remove faults during the testing phase, which begins with module testing, wherein modules are tested independently to remove a substantial number of faults within a limited resource. The available resource must therefore be allocated among the modules in such a way that as many faults as possible are removed from each module, so as to achieve higher software reliability. In this article, we discuss the problem of optimally allocating the testing resource for a modular software system so as to maximize the number of faults removed, subject to the conditions that the amount of testing effort is fixed, a certain percentage of faults is to be removed, and a desired level of reliability is to be achieved. The problem is formulated as a nonlinear programming problem (NLPP), modelled by an inflection S-shaped software reliability growth model (SRGM) based on a non-homogeneous Poisson process (NHPP) that incorporates exponentiated Weibull (EW) testing-effort functions. A solution procedure using dynamic programming is then developed to solve the NLPP, and three special cases of the optimal resource allocation are discussed. Finally, numerical examples using three sets of software failure data are presented to illustrate the procedure and to validate the performance of the proposed strategies. Experimental results indicate that the proposed strategies may help software project managers make better decisions when allocating the testing resource. The results are also compared with those of Kapur et al. (2004), Huang and Lyu (2005), and Jha et al. (2010), which deal with similar problems; the comparison reveals that the proposed dynamic programming method for the testing-resource allocation problem yields a gain in efficiency over the other methods.
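The dynamic-programming idea can be shown compactly if the per-module fault-removal curve is simplified to a plain exponential SRGM, m_i(w) = a_i(1 - exp(-b_i w)), and the effort is discretised; the article's inflection S-shaped / exponentiated-Weibull formulation is more involved but slots into the same recursion.

```python
import math

def allocate_testing_effort(a, b, total_units, unit=1.0):
    """DP over discretised testing effort: best[i][w] is the maximum
    expected number of faults removed using the first i modules and w
    effort units, with m_i(w) = a_i * (1 - exp(-b_i * w)) as a stand-in
    removal curve for module i."""
    k = len(a)
    removed = lambda i, u: a[i] * (1.0 - math.exp(-b[i] * u * unit))
    best = [[0.0] * (total_units + 1) for _ in range(k + 1)]
    choice = [[0] * (total_units + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        for w in range(total_units + 1):
            for u in range(w + 1):                 # units given to module i
                v = best[i - 1][w - u] + removed(i - 1, u)
                if v > best[i][w]:
                    best[i][w], choice[i][w] = v, u
    alloc, w = [0] * k, total_units                # trace back the optimum
    for i in range(k, 0, -1):
        alloc[i - 1] = choice[i][w]
        w -= choice[i][w]
    return best[k][total_units], [u * unit for u in alloc]

faults, alloc = allocate_testing_effort(a=[80, 50, 120], b=[0.02, 0.05, 0.01],
                                        total_units=100)
print("expected faults removed:", round(faults, 1), "allocation:", alloc)
```

Adding the percentage-removed and reliability constraints of the article amounts to rejecting states that violate them inside the same recursion.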

13.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional form of the intensity function is known and only some of its parameters are unknown, so that parametric statistical methods can be applied to estimate or test the reliability models. In realistic situations, however, the functional form of the failure intensity is often poorly known or completely unknown, and functional (non-parametric) estimation methods must be used. Non-parametric techniques require no preliminary assumptions about the software model and can therefore reduce parametric modelling bias. Existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particular features of software failure data into consideration.
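One standard non-parametric route is a kernel estimate of the intensity built directly from the failure epochs, with reflection at the start and end of observation to reduce the edge bias that single short failure records suffer from. A minimal sketch with made-up failure times (this is a generic estimator, not necessarily the construction of the paper):

```python
import math

def kernel_intensity(event_times, horizon, h):
    """Kernel estimate of an NHPP intensity from one failure record:
    lambda_hat(t) = sum_i K_h(t - t_i), Gaussian kernel with bandwidth h,
    reflected at t = 0 and t = horizon."""
    def k(u):
        return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))
    def lam(t):
        total = 0.0
        for ti in event_times:
            total += k(t - ti)                   # the observation itself
            total += k(t + ti)                   # reflection at t = 0
            total += k(t - (2 * horizon - ti))   # reflection at t = horizon
        return total
    return lam

failures = [2.1, 5.3, 6.0, 9.8, 12.4, 13.1, 13.9, 17.5]  # cumulative times
lam = kernel_intensity(failures, horizon=20.0, h=2.0)
print([round(lam(t), 3) for t in (1.0, 10.0, 19.0)])
```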

14.
Many stochastic processes considered in applied probability models, and in particular in reliability theory, have the following form: shocks occur according to some point process, and each shock causes the process to make a random jump, while between shocks the process increases or decreases in some deterministic fashion. In this paper we study processes for which the rate of increase or decrease between shocks depends only on the height of the process, and we find conditions under which such processes can be stochastically compared. We also study hybrid processes in which periods of increase and periods of decrease alternate. A further result yields a stochastic comparison of processes that start with a random jump, rather than processes in which there is some random delay before the first jump.
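Such processes are straightforward to simulate, which is often the quickest way to explore the comparisons: Poisson shock epochs, a random jump at each shock, and deterministic height-dependent motion in between. A sketch with illustrative drift and jump choices:

```python
import random

def simulate_shock_process(x0, drift, jump, shock_rate, horizon,
                           dt=0.01, seed=0):
    """One path of a process that moves deterministically at rate drift(x)
    between shocks and jumps by jump(rng) at Poisson shock epochs."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    next_shock = rng.expovariate(shock_rate)
    path = [(t, x)]
    while t < horizon:
        if t + dt >= next_shock:          # advance to the shock, then jump
            x += drift(x) * (next_shock - t)
            t = next_shock
            x += jump(rng)
            next_shock = t + rng.expovariate(shock_rate)
        else:                             # deterministic motion (Euler step)
            x += drift(x) * dt
            t += dt
        path.append((t, x))
    return path

# e.g. wear that relaxes toward 0 between shocks, positive random jumps
path = simulate_shock_process(x0=0.0, drift=lambda x: -0.5 * x,
                              jump=lambda r: r.expovariate(1.0),
                              shock_rate=0.8, horizon=20.0)
print("final level:", round(path[-1][1], 3))
```

The drift argument being a function of x alone mirrors the paper's assumption that the between-shock rate depends only on the current height.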

15.
In this article, we develop acceptance sampling plans for life tests truncated at a pre-fixed time. The minimum sample size necessary to ensure a specified median life is obtained by assuming that the lifetimes of the test units follow a generalized Birnbaum–Saunders distribution. The operating characteristic values of the sampling plans, as well as the producer's risk, are presented. Two examples are given to illustrate the procedure, one of them based on real data from software reliability.

16.
Estimation of the lifetime distribution of industrial components and systems yields very important information for manufacturers and consumers, yet obtaining reliability data is time-consuming and costly. In this context, degradation tests are a useful alternative to lifetime and accelerated life tests in reliability studies. The approximate method is one of the most widely used techniques for degradation data analysis: it is simple to understand and easy to implement in any statistical software package. This paper uses time-series techniques to propose a modified approximate method (MAM). The MAM improves on the standard method in two respects: (1) it treats previous observations of the degradation path as a Markov process for future prediction, and (2) it does not require a parametric form for the degradation path. Characteristics of interest, such as the mean or median time to failure and percentiles, are obtained with the modified method. A simulation study shows the improved properties of the modified method over the standard one, and both methods are used to estimate the failure-time distribution of the fatigue-crack-growth data set.
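A loose, non-parametric reading of the two improvements is sketched below: the next increment of a path is predicted from the increments previously observed near the current level (a Markov-style rule, with no parametric path form), and the path is extended until it crosses the failure threshold to yield a pseudo failure time. This is only an interpretation of the idea, not the article's MAM, and the crack data and threshold are illustrative.

```python
def pseudo_failure_time(path, dt, threshold, max_steps=10_000):
    """Extrapolate one equally spaced degradation path to the threshold.
    The predicted increment at a given level is a proximity-weighted
    average of the increments observed at nearby levels."""
    pairs = [(path[i], path[i + 1] - path[i]) for i in range(len(path) - 1)]
    def predicted_increment(level):
        wsum = isum = 0.0
        for lv, inc in pairs:
            w = 1.0 / (1.0 + abs(lv - level))   # closer levels weigh more
            wsum += w
            isum += w * inc
        return isum / wsum
    t, x, steps = dt * (len(path) - 1), path[-1], 0
    while x < threshold and steps < max_steps:
        x += predicted_increment(x)             # Markov-style one-step rule
        t += dt
        steps += 1
    return t

crack = [0.90, 0.95, 1.00, 1.04, 1.11, 1.19, 1.26, 1.35]  # crack sizes
print("pseudo failure time:",
      round(pseudo_failure_time(crack, dt=0.01, threshold=1.6), 3))
```

Repeating this over all test units gives a sample of pseudo failure times from which the mean or median time to failure and percentiles follow empirically, as in the standard approximate method.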

17.
The problems of estimating the reliability function and P = Pr(X > Y) are considered for generalized life distributions. Uniformly minimum variance unbiased estimators (UMVUEs) of powers of the parameter involved in the probabilistic model, and of the probability density function (pdf) at a specified point, are derived. The UMVUE of the pdf is used to obtain the UMVUEs of the reliability function and of P. Our method of obtaining these estimators is much simpler than the traditional approaches. A theoretical method for studying the behaviour of the hazard rate is also provided.
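A plain Monte Carlo estimate of P is a useful sanity check against any closed-form or UMVU estimator; for independent exponentials the exact value is available for comparison. A short sketch:

```python
import random

def estimate_p(draw_xy, n=100_000, seed=0):
    """Plain Monte Carlo estimate of P = Pr(X > Y)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = draw_xy(rng)
        if x > y:
            hits += 1
    return hits / n

# X exponential with mean 2, Y exponential with mean 1: exact P = 2/3
print(estimate_p(lambda r: (r.expovariate(0.5), r.expovariate(1.0))))
```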

18.
We consider a multicomponent load-sharing system in which the failure rate of a given component depends on the set of components working at any given time. Such systems arise, for example, in software reliability models and in multivariate failure-time models in biostatistics. A load-share rule dictates how stress or load is redistributed to the surviving components after a component fails. In this paper, we assume the load-share rule is unknown and derive maximum likelihood methods for statistical inference on the load-share parameters. Components with (individually) constant failure rates are observed in two settings: (1) the system load is distributed evenly among the working components, and (2) it is assumed only that the load on each working component increases when other components in the system fail. Tests for these special load-share models are investigated.
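Under the equal load-share rule with known multipliers, the likelihood factorises over the inter-failure gaps and the baseline rate has a closed-form MLE (failures divided by load-weighted exposure). A simulation sketch with illustrative multipliers; the paper's inference problem, where the multipliers themselves are unknown, is harder:

```python
import random

def load_share_gaps(n, lam, gamma, rng):
    """Inter-failure gaps for an equal load-share system: with k failures
    so far, each of the n - k survivors works at rate lam * gamma[k], so
    the next gap is exponential with rate (n - k) * lam * gamma[k]."""
    return [rng.expovariate((n - k) * lam * gamma[k]) for k in range(n)]

rng = random.Random(7)
gamma = [1.0, 1.5, 2.0, 3.0]           # load multipliers after 0..3 failures
systems = [load_share_gaps(4, 0.2, gamma, rng) for _ in range(500)]

# pooled MLE with gamma known: total failures / total weighted exposure
failures = sum(len(g) for g in systems)
exposure = sum((4 - k) * gamma[k] * gap
               for g in systems for k, gap in enumerate(g))
print("true lam = 0.2, MLE =", round(failures / exposure, 3))
```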

19.
20.
In this article, we study the problem of estimating the stress-strength reliability when the stress and strength variables follow independent exponential distributions with a common location parameter but different scale parameters, all parameters being unknown. We derive the MLE and the UMVUE of the reliability parameter. We also derive Bayes estimators, using conjugate prior distributions for the scale parameters and a dependent prior for the common location parameter. Monte Carlo simulations are carried out to compare the proposed estimators under different loss functions.
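For exponentials sharing a location, the location cancels out of R = Pr(X > Y), which depends only on the two scales, so a plug-in MLE is immediate: estimate the common location by the pooled minimum and the scales by mean exceedances. A sketch with made-up samples (the paper's UMVUE and Bayes estimators are not reproduced):

```python
def stress_strength_mle(x, y):
    """Plug-in MLE of R = Pr(X > Y) when X = mu + Exp(scale s1) and
    Y = mu + Exp(scale s2): mu_hat is the pooled sample minimum, the
    scale MLEs are mean exceedances over mu_hat, and R = s1 / (s1 + s2)."""
    mu = min(min(x), min(y))
    s1 = sum(xi - mu for xi in x) / len(x)
    s2 = sum(yi - mu for yi in y) / len(y)
    return s1 / (s1 + s2)

x = [3.2, 4.1, 5.7, 3.9, 6.4, 4.8]   # strength observations
y = [3.1, 3.6, 4.0, 3.4, 4.9, 3.3]   # stress observations
print("R_hat =", round(stress_strength_mle(x, y), 3))
```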
