Similar Documents
20 similar documents found (search time: 31 ms)
1.
Burn-in is a method of eliminating early failures in populations of manufactured items. To burn-in a component or a system means to subject it to a "simulated operation" for some time, prior to its actual field use. Various optimal burn-in problems have been intensively studied in the literature under the assumption of decreasing or bathtub-shaped failure rates. However, most of these studies have been conducted for homogeneous populations. In this paper, we discuss burn-in for heterogeneous populations and develop approaches that minimize the risks of selecting items with large individual failure rates. Using simple examples, we consider the optimal burn-in time that minimizes these risks.
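The heterogeneous-population idea can be illustrated with a hypothetical two-subpopulation exponential mixture (not the paper's model; all parameters below are illustrative): the longer the burn-in, the smaller the chance that a surviving item comes from the "weak" subpopulation.

```python
import math

def weak_fraction(b, pi_w=0.2, lam_w=2.0, lam_s=0.1):
    """P(item is 'weak' | it survived burn-in of length b) for an
    exponential mixture: weak failure rate lam_w, strong rate lam_s."""
    surv_weak = pi_w * math.exp(-lam_w * b)
    surv_strong = (1.0 - pi_w) * math.exp(-lam_s * b)
    return surv_weak / (surv_weak + surv_strong)

# longer burn-in -> lower risk of fielding a weak item
fracs = [weak_fraction(b) for b in (0.0, 0.5, 1.0, 2.0)]
```

An optimal burn-in time then trades this shrinking risk against the cost of burning in (and of items consumed during burn-in).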

2.
3.
In this article, a simple repairable system (i.e., a repairable system consisting of one component and one repairman) with delayed repair is studied. Assume that the system after repair is not “as good as new” and that the degeneration of the system is stochastic. Under these assumptions, using the geometric process repair model, we consider a replacement policy T based on system age, under which the system is replaced when its age reaches T. Our problem is to determine an optimal replacement policy T* such that the average cost rate (i.e., the long-run average cost per unit time) of the system is minimized. The explicit expression of the average cost rate is derived, and the corresponding optimal replacement policy T* can be determined by minimizing it. Finally, a numerical example is given to illustrate some theoretical results and the model's applicability.
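As a much simpler stand-in for the policy-T idea, the sketch below minimizes the classical age-replacement cost rate for a Weibull lifetime by grid search (this is not the paper's delayed-repair geometric-process model; distribution, costs, and grid are illustrative):

```python
import math

def cost_rate(T, beta=2.0, eta=1.0, c_f=10.0, c_p=1.0, n=2000):
    """Long-run average cost per unit time under age replacement at T:
    replacement at failure costs c_f, planned replacement at T costs c_p."""
    S = lambda t: math.exp(-((t / eta) ** beta))      # Weibull survival
    h = T / n
    mean_cycle = h * sum(S(i * h) for i in range(n))  # expected cycle length
    F = 1.0 - S(T)                                    # P(failure before T)
    return (c_f * F + c_p * (1.0 - F)) / mean_cycle

grid = [0.05 * k for k in range(1, 80)]
T_star = min(grid, key=cost_rate)   # numerical stand-in for the optimal T*
```

In the paper, the same "minimize the average cost rate over T" step is carried out on the explicit geometric-process expression instead of this numeric approximation.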

4.
This paper considers the mean residual life in series and parallel systems with independent and identically distributed components and obtains relationships between the change points of the mean residual life of the systems and those of their components. Compared with the change point for a single component, should it exist, the change point for a series system occurs later. For a parallel system, however, the change point is located before that of the components, if it exists at all. Moreover, for both types of systems, the distance between the change points of the mean residual life for systems and for components increases with the number of components. These results are helpful in determining the optimal burn-in time and in related decision making in reliability analysis.

5.
In the classical approach to qualitative reliability demonstration, system failure probabilities are estimated based on a binomial sample drawn from the running production. In this paper, we show how to take into account additional sampling information available for some or even all subsystems of a current system under test with a serial reliability structure. In this connection, we present two approaches, a frequentist and a Bayesian one, for assessing an upper bound for the failure probability of serial systems under binomial subsystem data. In the frequentist approach, we introduce (i) a new way of deriving the probability distribution for the number of system failures, which might be randomly assembled from the failed subsystems, and (ii) a more accurate estimator for the Clopper–Pearson upper bound using a beta mixture distribution. In the Bayesian approach, we infer the posterior distribution for the system failure probability on the basis of the system/subsystem testing results and a prior distribution for the subsystem failure probabilities. We propose three different prior distributions and compare their performances in the context of high-reliability testing. Finally, we apply the proposed methods to reduce the effort of semiconductor burn-in studies by considering synergies, such as comparable chip layers, among different chip technologies.
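For reference, the standard (unrefined) one-sided Clopper–Pearson upper bound that the paper improves upon can be computed with the standard library alone by inverting the exact binomial tail (the parameters in the example are illustrative):

```python
import math

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson_upper(k, n, alpha=0.05):
    """One-sided Clopper-Pearson upper confidence bound for the failure
    probability after k failures in n trials: the smallest p with
    P(X <= k; n, p) <= alpha, found by bisecting the monotone tail."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binom_cdf(k, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

bound = clopper_pearson_upper(0, 100)   # zero failures observed in 100 tests
```

For k = 0 this reduces to the well-known bound 1 − α^(1/n), which the bisection reproduces.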

6.
In this article, we consider a life test in which the failure times of the test units are not related deterministically to an observable, stochastically time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. The main aim of this article is the problem of accelerating such an experiment. We present a step-stress accelerated model based on a bivariate Wiener process, with one component serving as the latent (unobservable) degradation process that determines the failure times, and the other as a marker process whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and an optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.

7.
Determination of preventive maintenance is an important issue for systems under degradation. A typical maintenance policy calls for complete preventive repair actions at pre-scheduled times and minimal repair actions whenever a failure occurs. Under minimal repair, failures are modeled according to a nonhomogeneous Poisson process. A perfect preventive maintenance restores the system to the as-good-as-new condition. The motivation for this article was a maintenance data set for power switch disconnectors. Two different types of failures could be observed for these systems according to their causes; the major difference between the failure types is their costs. Assuming that the system will be in operation for an infinite time, we find the expected cost per unit of time for each preventive maintenance policy and hence obtain the optimal strategy as a function of the process intensities. Assuming a parametric form for the intensity function, large-sample estimates for the optimal maintenance check points are obtained and discussed.
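With a single failure type and the common power-law (Weibull-type) intensity, the periodic-PM/minimal-repair cost rate has a well-known closed-form minimizer; the sketch below checks it against the cost-rate function directly (a textbook special case with illustrative parameters, not the article's two-failure-type model):

```python
def pm_cost_rate(T, lam=0.5, beta=2.5, c_p=5.0, c_m=1.0):
    """Expected cost per unit time with perfect PM every T time units and
    minimal repairs in between: NHPP with cumulative intensity lam * T**beta,
    PM cost c_p, minimal-repair cost c_m."""
    return (c_p + c_m * lam * T**beta) / T

def optimal_T(lam=0.5, beta=2.5, c_p=5.0, c_m=1.0):
    """Closed-form minimizer of pm_cost_rate, valid for beta > 1."""
    return (c_p / (c_m * lam * (beta - 1.0))) ** (1.0 / beta)

T_star = optimal_T()
```

Setting the derivative of the cost rate to zero gives c_m·λ·(β−1)·T^β = c_p, which is exactly what `optimal_T` solves.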

8.
Age and block replacement policies are commonly used in order to reduce the number of in-service failures when systems are functioning indefinitely. In reliability theory, the lifetime of a system can be modeled by means of the NBUC aging class, which is characterized through comparisons of the residual lives in the sense of the icx order. The purpose of this paper is to establish stochastic comparisons between the age (block) replacement policy and a renewal process with no planned replacements when the lifetime of the unit is NBUC. Supported by Ministerio de Ciencia y Tecnología under grant BFM2000-0362.

9.
This paper considers optimization problems for a consecutive-2-out-of-n:G system, where n is either fixed or random. When the number of components is constant, the optimal number of components and the optimal replacement time are derived by minimizing the expected cost rates. We then revisit these questions when n is a random variable: we give an approximate value of the MTTF and propose a corresponding preventive replacement policy.
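The reliability of a consecutive-2-out-of-n:G system with iid components follows a simple linear recursion on the probability of having no two adjacent working components; a minimal sketch (the recursion is standard, the parameters below are illustrative):

```python
def consec_2_of_n_G(n, p):
    """Reliability of a consecutive-2-out-of-n:G system: it works iff at
    least two adjacent components work (components iid, reliability p).
    a[m] = P(no two adjacent working components among m components):
    condition on the first component being failed (q) or working-then-failed (p*q)."""
    q = 1.0 - p
    a = [1.0, 1.0]
    for m in range(2, n + 1):
        a.append(q * a[m - 1] + p * q * a[m - 2])
    return 1.0 - a[n]
```

For n = 2 this gives p², and for n = 3 it matches the inclusion-exclusion value 2p² − p³, which is a quick way to validate the recursion.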

10.
Step-stress accelerated degradation testing (SSADT) plays an important role in assessing the lifetime distribution of highly reliable products under normal operating conditions when not enough test units are available for testing purposes. Recently, optimal SSADT plans have been presented based on the underlying assumption that there is only one performance characteristic. However, many highly reliable products have a complex structure, with their reliability evaluated by two or more performance characteristics. At the same time, the degradation of these performance characteristics is always positive and strictly increasing. In such a case, the gamma process is usually adopted as the degradation model because of its independent and nonnegative increments. Therefore, it is of great interest to design an efficient SSADT plan for products with multiple performance characteristics based on gamma processes. In this work, we first introduce a reliability model for degrading products with two performance characteristics based on gamma processes, and then present the corresponding SSADT model. Next, under a constraint on the total experimental cost, the optimal settings, such as sample size, measurement times, and measurement frequency, are obtained by minimizing the asymptotic variance of the estimated 100qth percentile of the product's lifetime distribution. Finally, a numerical example is given to illustrate the proposed procedure.
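The gamma-process assumption is easy to simulate: over a step of length dt the increment is Gamma-distributed with shape rate·dt, so every path is nonnegative and strictly increasing. A minimal single-characteristic sketch (illustrative parameters, stdlib only; the paper's bivariate SSADT model adds a second characteristic and stress steps):

```python
import random

random.seed(1)

def gamma_path_end(t_end, dt=1.0, rate=0.8, scale=0.5):
    """Degradation level at t_end of a stationary gamma process:
    independent Gamma(shape=rate*dt, scale) increments per step."""
    x = 0.0
    for _ in range(int(round(t_end / dt))):
        x += random.gammavariate(rate * dt, scale)
    return x

levels = [gamma_path_end(10.0) for _ in range(500)]
mean_level = sum(levels) / len(levels)   # theoretical mean: 0.8 * 10 * 0.5 = 4.0
```

Crossing times of such paths against a failure threshold are what ultimately determine the simulated lifetime distribution in a degradation test.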

11.
The present paper is concerned with statistical models for the dependence of survival time or time to occurrence of an event, such as time to tumor, on a vector X of covariates or prognostic variables, such as age, sex, blood pressure, length of exposure to a toxic material, etc., measured on a group of individuals in biomedical investigations. It is assumed that the covariates influence the distribution of time to tumor only through a linear predictor μ = βX.

The object of our paper is to investigate the effect of the covariates on the life expectancy and the percentile residual life (PRL) function of a family of organisms under the proportional hazards and the accelerated life models. The key result is that the families of survival distributions under these models have the 'setting the clock back to zero' property if the family of baseline survival distributions does. This property is a generalization of the lack-of-memory property of the exponential distribution. Simple examples of the members of this family are the linear hazard exponential, Pareto and Gompertz life distributions.

As a simple application of the main results obtained in the present paper, we consider a stochastic survival model recently proposed by Chiang and Conforti (1989) for the time-to-tumor distribution, in the context of a large-scale serial sacrifice experiment by the National Center for Toxicological Research (NCTR). This experiment involved mice that were fed 2-AAF from infancy and that developed bladder and/or liver neoplasms; see Farmer et al. (1980). It is shown that their stochastic model for tumor incidence intensity at time t leads to a family of survival models that has the setting-the-clock-back-to-zero property. The survival functions and the effect of the covariate vector X on the PRL and the tumor-free life expectancies are evaluated for the proportional hazards and accelerated life models.

12.
In this research, a novel optimal single-machine replacement policy in finite stages, based on the rate of producing defective items, is proposed. The primary objective of this paper is to determine the optimal decision using a Markov decision process to maximize the total profit associated with a machine maintenance policy. It is assumed that a customer order is due at the end of a finite horizon and that the machine deteriorates over time when operating. Repair takes time but brings the machine to a better state. Production and repair costs are included in the model; revenue is earned for each good item produced by the end of the horizon, and there is also a cost associated with the machine's condition at the end of the horizon. In each period, we must decide whether to produce, repair, or do nothing, with the objective of maximizing the expected profit over the horizon.
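A finite-horizon problem of this produce-or-repair kind is solved by backward induction; the toy sketch below (three hypothetical machine states, made-up transition probabilities and costs, terminal value set to zero for simplicity) shows the mechanics, not the paper's actual model:

```python
# states: 0 = good, 1 = worn, 2 = bad; actions: produce or repair
P_produce = {0: {0: 0.7, 1: 0.3}, 1: {1: 0.6, 2: 0.4}, 2: {2: 1.0}}
reward_produce = {0: 5.0, 1: 3.0, 2: 0.0}   # expected per-period profit
REPAIR_COST = 2.0                            # repair consumes the period

def solve(horizon):
    """Backward induction: plan[k] is the decision rule with k+1 periods to go."""
    V = {s: 0.0 for s in (0, 1, 2)}          # terminal value
    plan = []
    for _ in range(horizon):
        newV, act = {}, {}
        for s in (0, 1, 2):
            produce = reward_produce[s] + sum(p * V[s2] for s2, p in P_produce[s].items())
            repair = -REPAIR_COST + V[max(s - 1, 0)]   # repair improves state by one
            newV[s], act[s] = max((produce, "produce"), (repair, "repair"))
        V = newV
        plan.append(act)
    return V, plan
```

With one period left, repairing never pays (there is no time to use the better state), but with two or more periods the bad state switches to repair, which is the qualitative structure such policies typically exhibit.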

13.
This article proposes an extended geometric process repair model that generalizes the geometric process repair model, and studies a repair-replacement problem for a simple repairable system with delayed repair, based on the failure number N of the system under the new model. An optimal replacement policy N* is determined by maximizing the average reward rate of the system. The explicit expression of the average reward rate is derived, and the uniqueness of the optimal replacement policy N* is also proved. Finally, a numerical example is given to illustrate some theoretical results and the model's applicability.

14.
This article proposes an adaptive sequential preventive maintenance (PM) policy in which an improvement factor is newly introduced to measure the effect of each PM. In this model, the PM actions are conducted at different time intervals, so an adaptive method is needed to determine the optimal PM times minimizing the expected cost rate per unit time. At each PM, the hazard rate is reduced by an amount governed by the improvement factor, which depends on the number of PMs preceding the current one. We derive mathematical formulas for the expected cost rate per unit time, incorporating the PM cost, repair cost, and replacement cost. Assuming that the failure times follow a Weibull distribution, we propose an optimal sequential PM policy by minimizing the expected cost rate. Furthermore, we consider Bayesian aspects of the sequential PM policy to discuss its optimality. The effects of some parameters and of the functional form of the improvement factor on the optimal PM policy are assessed numerically by sensitivity analysis, and some numerical examples are presented for illustrative purposes.
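One common way to encode an improvement factor is as a partial effective-age reduction at each PM; the sketch below uses that age-reduction variant with a Weibull hazard and equal PM intervals (a simplification of the sequential policy above; all parameter values are illustrative):

```python
def seq_pm_cost_rate(x, n_pm, beta=2.0, eta=1.0, delta=0.6,
                     c_pm=1.0, c_min=0.5, c_rep=8.0):
    """Average cost rate over a cycle: n_pm imperfect PMs at interval x,
    then replacement. Each PM multiplies the effective age by (1 - delta)
    (the improvement factor); failures between PMs are minimal repairs
    under a Weibull cumulative hazard H(t) = (t/eta)**beta."""
    H = lambda t: (t / eta) ** beta
    age, failures = 0.0, 0.0
    for _ in range(n_pm + 1):
        failures += H(age + x) - H(age)   # expected minimal repairs in interval
        age = (1.0 - delta) * (age + x)   # imperfect PM: partial age reduction
    return (n_pm * c_pm + c_min * failures + c_rep) / ((n_pm + 1) * x)

candidates = [(0.1 * k, n) for k in range(1, 40) for n in range(0, 10)]
x_star, n_star = min(candidates, key=lambda c: seq_pm_cost_rate(*c))
```

The article instead lets the intervals differ between successive PMs and ties the improvement factor to the PM count, but the cost-rate structure being minimized is of this same form.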

15.
Based on a generalized cumulative damage approach with a stochastic process describing degradation, new accelerated life test models are presented in which both observed failures and degradation measures can be considered for parametric inference of system lifetime. Incorporating an accelerated test variable, we provide several new accelerated degradation models for failure based on the geometric Brownian motion or gamma process. It is shown that in most cases, our models for failure can be approximated closely by accelerated test versions of Birnbaum–Saunders and inverse Gaussian distributions. Estimation of model parameters and a model selection procedure are discussed, and two illustrative examples using real data for carbon-film resistors and fatigue crack size are presented.

16.
If at least one of two serial machines that produce a specific product in a manufacturing environment malfunctions, nonconforming items will be produced. Determining the optimal time for the machines' maintenance is one of the major concerns. While a convenient common practice for this kind of problem is to fit a single probability distribution to the combined defect data, this does not adequately capture the fact that there are two different underlying causes of failure. A better approach is to view the defects as arising from a mixture population: one part due to failures of the first machine and the other due to failures of the second. In this article, a mixture model, together with Bayesian inference and stochastic dynamic programming, is used to find the multi-stage optimal replacement strategy. Using the posterior probability of the machines being in states λ1 and λ2 (the failure rates of defective items produced by machines 1 and 2, respectively), we first formulate the problem as a stochastic dynamic programming model. Then, we derive some properties of the optimal value of the objective function and propose a solution algorithm. Finally, the application of the proposed methodology is demonstrated by a numerical example, and an error analysis is performed to evaluate the performance of the proposed procedure. The results of this analysis show that the proposed method performs satisfactorily when different numbers of observations on the times between productions of defective items are available.

17.
As the Gibbs sampler has become one of the standard tools of computation, the practice of burn-in is almost the default option. Because it takes a certain number of iterations for the initial distribution to reach stationarity, supporters of burn-in throw away an initial segment of the samples and argue that this practice ensures unbiasedness. Running-time analysis studies the question of how many samples to throw away; basically, it equates the number of iterations to stationarity with the number of initial samples to be discarded. However, many practitioners have found that burn-in wastes potentially useful samples and that the practice is inefficient, and thus unnecessary. For the example considered, a single chain without burn-in offers both efficiency and accuracy superior to multiple chains with burn-in. We show that the Gibbs sampler uses odds to generate samples. Because the correct odds are used from the onset of the iterative process, the observations generated by the Gibbs sampler are distributed according to the target distribution; throwing away those valid samples is therefore wasteful. When the chain of distributions and the trajectory (sample path) of the chain are considered on their separate merits, the disagreement can be settled. We advocate carefully choosing the initial state, but without burn-in, to quicken the formation of the stationary distribution.
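The "carefully chosen initial state, no burn-in" idea is easy to demonstrate on a standard Gibbs example that is not from the paper: a bivariate normal with correlation ρ, started at its mode, whose full conditionals are N(ρ·other, 1 − ρ²). Keeping every draw still yields accurate estimates:

```python
import math
import random

random.seed(7)
rho = 0.6                       # target: standard bivariate normal, correlation rho
sd = math.sqrt(1.0 - rho**2)    # conditional std dev for both coordinates
x, y = 0.0, 0.0                 # carefully chosen initial state: the mode
xs = []
for _ in range(20000):
    x = random.gauss(rho * y, sd)   # draw x | y
    y = random.gauss(rho * x, sd)   # draw y | x
    xs.append(x)

mean_x = sum(xs) / len(xs)          # near 0 even though no samples were discarded
```

Starting at a high-density point means the early draws are already typical of the target, so discarding them would only throw away information.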

18.
In this paper we consider the problem of determining the optimal numbers of repairable and replaceable components to maximize a system's reliability when both the cost of repairing components and the cost of replacing components with new ones are random. We formulate it as a non-linear stochastic programming problem, whose solution is obtained through chance-constrained programming. We also consider the problem of finding the optimal maintenance cost for a given reliability requirement of the system; the solution is then obtained using the modified E-model. A numerical example is solved for both formulations.

19.
From the economic viewpoint in reliability theory, this paper addresses a scheduling replacement problem for a single operating system that works at random times on multiple jobs. The system is subject to stochastic failures that trigger imperfect maintenance activity based on a random failure mechanism: minimal repair for a type-I (repairable) failure, or corrective replacement for a type-II (non-repairable) failure. Three scheduling models for the system with multiple jobs are considered: a single work, N tandem works, and N parallel works. To control the deterioration process, preventive replacement is planned at a scheduled time T or at the job's completion time for each model. The objective is to determine the optimal scheduling parameters (T* or N*) that minimize the mean cost rate function over a finite time horizon for each model. A numerical example is provided to illustrate the proposed analytical model. Because the framework and analysis are general, the proposed models extend several existing results.

20.
In this article, the concept of imperfect preventive maintenance is discussed, and an age maintenance policy is developed based on a cumulative damage model for a used system with initial variable damage. The system is assumed to deteriorate under nonhomogeneous Poisson shocks, which can be of two types with stochastic probability: a Type-I shock (minor) yields a random amount of additive damage to the system, while a Type-II shock (catastrophic) causes the system to fail. An age preventive maintenance policy T is presented in which the system undergoes preventive maintenance at a scheduled time T, or corrective maintenance at the first Type-II shock or when the total damage exceeds a threshold level, whichever occurs first. The objective is to determine the optimal preventive maintenance schedule such that the expected cost rate is minimized. The optimal solution is derived analytically and discussed numerically.
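A Monte Carlo sketch of the two-shock-type failure mechanism (homogeneous shock arrivals, exponential damage sizes, no initial damage; all parameters illustrative, simplifying the article's nonhomogeneous model):

```python
import random

random.seed(3)

def shock_lifetime(lam=1.0, p_cat=0.1, mean_dmg=0.5, threshold=3.0, horizon=50.0):
    """Simulated failure time: Poisson(lam) shock arrivals; each shock is
    catastrophic (Type-II) with probability p_cat, otherwise it adds an
    exponential amount of damage (Type-I); failure also occurs when the
    accumulated damage exceeds the threshold."""
    t, damage = 0.0, 0.0
    while t < horizon:
        t += random.expovariate(lam)                  # wait for the next shock
        if random.random() < p_cat:
            return t                                  # Type-II shock: failure
        damage += random.expovariate(1.0 / mean_dmg)  # Type-I shock: add damage
        if damage > threshold:
            return t                                  # damage crossed threshold
    return horizon

times = [shock_lifetime() for _ in range(2000)]
mean_life = sum(times) / len(times)
```

Comparing such simulated lifetimes with and without a PM time T is one way to sanity-check an analytically derived optimal schedule.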
