Similar Documents
20 similar documents found (search time: 31 ms)
1.
For reliability-critical and expensive products, it is necessary to estimate their residual lives based on available information, such as degradation data, so that proper maintenance actions can be arranged to reduce or even avoid the occurrence of failures. In this work, by assuming that the product-to-product variability of the degradation is characterized by a skew-normal distribution, a generalized Wiener process-based degradation model is developed. Following that, the issue of residual life (RL) estimation of the target product is addressed in detail. The proposed degradation model provides greater flexibility to capture a variety of degradation processes, since several commonly used Wiener process-based degradation models can be seen as special cases. Through the EM algorithm, the population-based degradation information is used to estimate the parameters of the model. Whenever new degradation measurement information of the target product becomes available, the degradation model is first updated based on the Bayesian method. In this way, the RL of the target product can be estimated in an adaptive manner. Finally, the developed methodology is demonstrated by a simulation study.
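The building blocks of such a model can be sketched in a few lines. The following is a minimal simulation, not the paper's full generalized model: it assumes a linear drift, uses a skew-normal law for the unit-specific drift (the product-to-product variability), and all parameter values (`drift_skew`, `sigma`, and so on) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(42)

def simulate_wiener_paths(n_units=5, n_steps=100, dt=0.1,
                          drift_loc=1.0, drift_scale=0.3, drift_skew=4.0,
                          sigma=0.2):
    """Simulate degradation paths X(t) = eta * t + sigma * B(t),
    where the unit-specific drift eta follows a skew-normal law,
    capturing product-to-product variability."""
    t = np.arange(1, n_steps + 1) * dt
    # one random drift per unit (skew-normal heterogeneity)
    eta = skewnorm.rvs(drift_skew, loc=drift_loc, scale=drift_scale,
                       size=n_units, random_state=rng)
    # Brownian increments with variance sigma^2 * dt
    dB = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_units, n_steps))
    paths = eta[:, None] * t[None, :] + np.cumsum(dB, axis=1)
    return t, eta, paths

t, eta, paths = simulate_wiener_paths()
```

Population data simulated this way is exactly the input the EM step of the paper would consume; the hitting time of a failure threshold by each path gives the (residual) life.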

2.
Due to the growing importance in maintenance scheduling, the issue of residual life (RL) estimation for some highly reliable products based on degradation data has been studied quite extensively. However, most of the existing work only deals with one-dimensional degradation data, which may not be realistic in some cases. Here, an adaptive method of RL estimation is developed based on two-dimensional degradation data. It is assumed that a product has two performance characteristics (PCs) and that the degradation of each PC over time is governed by a non-stationary gamma degradation process. From a practical consideration, it is further assumed that these two PCs are dependent and that their dependency can be characterized by a copula function. As the likelihood function in such a situation is complicated and computationally quite intensive, a two-stage method is used to estimate the unknown parameters of the model. Once new degradation information of the product being monitored becomes available, random effects are first updated by using the Bayesian method. Following that, the RL at current time is estimated accordingly. As the degradation data information accumulates, the RL can be re-estimated in an adaptive manner. Finally, a numerical example about fatigue cracks is presented in order to illustrate the proposed model and the developed inferential method.
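As a rough sketch of the model's building blocks, the snippet below simulates two dependent gamma degradation processes whose per-step increments are coupled through a copula. A Gaussian copula is used purely for illustration (the paper works with a general copula and a non-stationary gamma process; this sketch keeps the increments stationary, and every parameter value is an assumption):

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(7)

def bivariate_gamma_paths(n_steps=50, shape1=0.8, shape2=1.2,
                          scale1=0.05, scale2=0.03, rho=0.6):
    """Simulate two dependent gamma degradation processes: each step's
    increments are linked through a Gaussian copula with correlation rho,
    while each margin stays a gamma increment."""
    # correlated standard normals -> uniforms (the copula step)
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_steps)
    u = norm.cdf(z)
    # gamma marginal increments for each performance characteristic
    inc1 = gamma.ppf(u[:, 0], a=shape1, scale=scale1)
    inc2 = gamma.ppf(u[:, 1], a=shape2, scale=scale2)
    return np.cumsum(inc1), np.cumsum(inc2)

x1, x2 = bivariate_gamma_paths()
```

Because the margins are gamma, both simulated paths are monotone non-decreasing, which is the usual argument for gamma processes in degradation modelling.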

3.
The issue of residual life (RL) estimation plays an important role for products while they are in use, especially for expensive and reliability-critical products. Many products have two or more performance characteristics (PCs). Here, an adaptive method of RL estimation based on a bivariate Wiener degradation process with time-scale transformations is presented. It is assumed that a product has two PCs, and that each PC is governed by a Wiener process with a time-scale transformation. The dependency of the PCs is characterized by the Frank copula function. Parameters are estimated by using the Bayesian Markov chain Monte Carlo method. Once new degradation information is available, the RL is re-estimated in an adaptive manner. A numerical example about fatigue cracks is given to demonstrate the usefulness and validity of the proposed method.

4.
To assess the reliability of highly reliable products that have two or more performance characteristics (PCs) in an accurate manner, relations between the PCs should be duly taken into account. If the PCs are not independent, it becomes important to describe their dependence. For many products, the constant-stress degradation test cannot provide sufficient data for reliability evaluation, and for this reason an accelerated degradation test is usually performed. In this article, we assume that a product has two PCs, that each PC is governed by a Wiener process with a time-scale transformation, and that the relationship between the PCs is described by the Frank copula function. The copula parameter is dependent on stress and assumed to be a function of stress level that can be described by a logistic function. Based on these assumptions, a bivariate constant-stress accelerated degradation model is proposed here. Direct likelihood estimation of the parameters of such a model is analytically intractable, and so a Bayesian Markov chain Monte Carlo (MCMC) method is developed here for obtaining the maximum likelihood estimates (MLEs) efficiently. For an illustration of the proposed model and the method of inference, a simulated example is presented along with the associated computational results.

5.
Residual life (RL) estimation plays an important role in prognostics and health management. Under operating conditions, components usually experience stresses that vary continuously over time, which affect the degradation processes. This paper investigates a Wiener process model to track and predict the RL under time-varying conditions. The item-to-item variation is captured by the drift parameter and the degradation characteristic of the whole population is described by the diffusion parameter. The bootstrap method and Bayes' theorem are employed to estimate and update the distribution parameters of 'a' and 'b', which are the coefficients of the linear drifting process in the degradation model. Once new degradation information becomes available, the RL distributions considering the future operating condition are derived. The proposed method is tested on lithium-ion battery devices under three levels of charging/discharging rates. The results are further validated by a simulation method.
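The Bayesian updating step has a closed form in the simplest linear-drift case. The sketch below assumes X(t) = a·t + σB(t) with a normal prior on the drift a; this is a special case of the paper's model, and the prior and all numeric values are illustrative:

```python
import numpy as np

def update_drift(dx, dt, sigma, mu0, tau0):
    """Conjugate Bayesian update for the drift a of X(t) = a*t + sigma*B(t),
    given observed increments dx over equal intervals dt.
    Prior: a ~ N(mu0, tau0^2). Returns posterior mean and std.
    Each increment is N(a*dt, sigma^2*dt), so the posterior precision is
    1/tau0^2 + n*dt/sigma^2."""
    n = len(dx)
    prec = 1.0 / tau0**2 + n * dt / sigma**2
    mean = (mu0 / tau0**2 + np.sum(dx) / sigma**2) / prec
    return mean, 1.0 / np.sqrt(prec)

# toy check: increments generated with true drift 2.0
rng = np.random.default_rng(0)
dt, sigma = 0.1, 0.2
dx = 2.0 * dt + rng.normal(0, sigma * np.sqrt(dt), size=200)
m, s = update_drift(dx, dt, sigma, mu0=0.0, tau0=5.0)
```

With the posterior on the drift in hand, the RL distribution follows from the first-passage time of the drifted Brownian motion to the failure threshold (inverse Gaussian in this linear case).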

6.
For some highly reliable products, degradation data have been studied quite extensively to evaluate their reliability characteristics. However, the accuracy of evaluation results depends strongly on the suitability of the proposed degradation model for capturing the degradation over time. If the degradation model is mis-specified, it may lead to inaccurate results. In this work, we focus on the issue of model mis-specification between nonlinear Wiener process-based degradation models in which both the product-to-product variability and the temporal uncertainty of the degradation can be considered simultaneously with the nonlinearity in degradation paths. Specifically, a generalized Wiener process-based degradation model is wrongly fitted by its two limiting cases. The effects of model mis-specification in such situations on the MTTF (mean time to failure) of the product are measured with the relative bias and the relative variability. Results from a numerical example concerning fatigue cracks show that the effect of mis-specification is serious under some parameter settings, i.e., the relative bias departs from 0, and the relative variability significantly departs from 1, if the generalized Wiener degradation process is wrongly assumed to be one of its limiting cases.

7.
In the method of paired comparisons (PCs), treatments are compared on the basis of the qualitative characteristics they possess, in the light of sensory evaluations made by judges. However, situations may arise where, in addition to qualitative merits/worths, judges assign quantitative weights to reflect the relative importance of the treatments. In this study, an attempt is made to reconcile qualitative and quantitative PCs by assigning quantitative weights to treatments having qualitative merits, using and extending the Bradley–Terry (BT) model. The behaviors of the existing BT model and the proposed weighted BT model are studied through goodness-of-fit tests. Experimental and simulated data sets are used for illustration.
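For reference, the worths of the standard (unweighted) BT model can be fitted with a simple minorization-maximization iteration; the wins matrix below is made-up illustrative data, not from the paper:

```python
import numpy as np

def fit_bradley_terry(wins, n_iter=200):
    """MM fit of Bradley-Terry worths from a wins matrix, where
    wins[i, j] = number of times treatment i beat treatment j.
    Returns worths p with sum(p) == 1; P(i beats j) = p_i / (p_i + p_j)."""
    k = wins.shape[0]
    n = wins + wins.T            # comparisons between each pair
    w = wins.sum(axis=1)         # total wins per treatment
    p = np.ones(k) / k
    for _ in range(n_iter):
        # MM update: p_i <- w_i / sum_{j != i} n_ij / (p_i + p_j)
        denom = np.array([np.sum(n[i, np.arange(k) != i] /
                                 (p[i] + p[np.arange(k) != i]))
                          for i in range(k)])
        p = w / denom
        p /= p.sum()
    return p

wins = np.array([[0, 7, 8],
                 [3, 0, 6],
                 [2, 4, 0]])
p = fit_bradley_terry(wins)
```

The weighted extension studied in the abstract would modify the likelihood behind this update; the iteration above is only the familiar baseline it starts from.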

8.
In this article, we present a framework for estimating a patterned covariance of interest in multivariate linear models. The main idea is to estimate a patterned covariance by minimizing a trace distance function between the outer product of residuals and its expected value. The proposed framework provides explicit estimators, called outer product least-squares estimators, for parameters in the patterned covariance of the multivariate linear model, with or without restrictions on the regression coefficients. The outer product least-squares estimators enjoy desirable properties in finite and large samples, including unbiasedness, invariance, consistency and asymptotic normality. We then apply the framework to three special situations where the patterned covariances are the uniform correlation, a generalized uniform correlation and a general q-dependence structure, respectively. Simulation studies for the three special cases illustrate that the proposed method is a competent alternative to the maximum likelihood method in finite samples.

9.
Environmental variables have an important effect on the reliability of many products such as coatings and polymeric composites. Long-term prediction of the performance or service life of such products must take into account the probabilistic/stochastic nature of the outdoor weather. In this article, we propose a time series modeling procedure to model the time series data of daily accumulated degradation. Daily accumulated degradation is the total amount of degradation accrued within one day and can be obtained by using a degradation rate model for the product and the weather data. The fitted model of the time series can then be used to estimate the future distribution of cumulative degradation over a period of time, and to compute reliability measures such as the probability of failure. The modeling technique and estimation method are illustrated using the degradation of a solar reflector material. We also provide a method to construct approximate confidence intervals for the probability of failure.

10.
Loss functions express the loss to society, incurred through the use of a product, in monetary units. Underlying this concept is the notion that any deviation from target of any product characteristic implies a degradation in product performance and hence a loss. Spiring (1993), in response to criticisms of the quadratic loss function, developed the reflected normal loss function, which is based on the normal density function. We give some modifications of these loss functions to simplify their application and provide a framework for the reflected normal loss function that accommodates a broader class of symmetric loss situations. These modifications also facilitate the unification of both of these loss functions and their comparison through expected loss. Finally, we give a simple method for determining the parameters of the modified reflected normal loss function based on loss information for multiple values of the product characteristic, and an example to illustrate the flexibility of the proposed model and the determination of its parameters.
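Spiring's reflected normal loss has a simple closed form: it rises from zero at the target to a finite maximum loss K, following an inverted normal-density shape. The sketch below uses the common parameterization with target T and shape parameter γ; the numeric values are illustrative:

```python
import numpy as np

def reflected_normal_loss(x, target, K, gamma):
    """Spiring's reflected normal loss:
    L(x) = K * (1 - exp(-(x - target)^2 / (2 * gamma^2))).
    Loss is 0 at the target, bounded above by K, and gamma controls
    how quickly loss accumulates away from the target."""
    return K * (1.0 - np.exp(-(x - target)**2 / (2.0 * gamma**2)))

x = np.linspace(8, 12, 101)
loss = reflected_normal_loss(x, target=10.0, K=5.0, gamma=0.8)
```

Unlike the quadratic loss, which grows without bound, this loss saturates at K far from the target, which is exactly the property that motivated it as a response to criticisms of the quadratic form.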

11.
Interpretation of principal components is difficult due to their weights (loadings, coefficients) being of various sizes. Whereas very small weights or very large weights can give clear indication of the importance of particular variables, weights that are neither large nor small ('grey area' weights) are problematical. This is a particular problem in the fast-moving goods industries, where a lot of multivariate panel data are collected on products. These panel data are subjected to univariate analyses and multivariate analyses where principal components (PCs) are key to the interpretation of the data. Several authors have suggested alternatives to PCs, seeking simplified components such as sparse PCs. Here components, termed simple components (SCs), are sought in conjunction with Thurstonian criteria: a component should have only a few variables highly weighted on it, and each variable should be weighted heavily on just a few components. An algorithm is presented that finds SCs efficiently. Simple components are found for panel data consisting of the responses to a questionnaire on efficacy and other features of deodorants. It is shown that five SCs can explain an amount of variation within the data comparable to that explained by the PCs, but with easier interpretation.

12.
A general theory is presented for residuals from the general linear model with correlated errors. It is demonstrated that there are two fundamental types of residual associated with this model, referred to here as the marginal and the conditional residual. These measure respectively the distance to the global aspects of the model as represented by the expected value and the local aspects as represented by the conditional expected value. These residuals may be multivariate. Some important dualities are developed which have simple implications for diagnostics. The results are illustrated by reference to model diagnostics in time series and in classical multivariate analysis with independent cases.

13.
For multivariate normal data with non-monotone (i.e. arbitrary) missing data patterns, lattice conditional independence (LCI) models determined by the observed data patterns can be used to obtain closed-form MLEs (Andersson and Perlman, 1991, 1993). In this paper, three procedures (LCI models, the EM algorithm, and the complete-data method) are compared by means of a Monte Carlo experiment. When the LCI model is accepted by the LR test, the LCI estimate is more efficient than those based on the EM algorithm and the complete-data method. When the LCI model is not accepted, the LCI estimate may lose efficiency but still may be more efficient than the EM estimate if the observed data are sparse. When the LCI model appears too restrictive, it may be possible to obtain a less restrictive LCI model by discarding only a small portion of the incomplete observations. LCI models appear to be especially useful when the observed data are sparse, even in cases where the suitability of the LCI model is uncertain.

14.
Multivariate mixture regression models can be used to investigate the relationships between two or more response variables and a set of predictor variables by taking into consideration unobserved population heterogeneity. It is common to take multivariate normal distributions as mixing components, but this mixing model is sensitive to heavy-tailed errors and outliers. Although normal mixture models can approximate any distribution in principle, the number of components needed to account for heavy-tailed distributions can be very large. Mixture regression models based on the multivariate t distribution can be considered as a robust alternative approach. Missing data are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this paper, we propose a multivariate t mixture regression model with missing information to model heterogeneity in the regression function in the presence of outliers and missing values. Along with the robust parameter estimation, our proposed method can be used for (i) visualization of the partial correlation between response variables across latent classes and heterogeneous regressions, and (ii) outlier detection and robust clustering even under the presence of missing values. We also propose a multivariate t mixture regression model using MM-estimation with missing information that is robust to high-leverage outliers. The proposed methodologies are illustrated through simulation studies and real data analysis.

15.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation data collected varies from specimen to specimen. The information of random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent in the case of random sample sizes. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances compared to the LS and MML estimates. All estimation methods can be greatly improved if the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate our methods.

16.
It is suggested that inference under the proportional hazards model can be carried out by programs for exact inference under the logistic regression model. Advantages of such inference are that software is available and that multivariate models can be addressed. The method has been evaluated by means of coverage and power calculations in certain situations. In all situations coverage was above the nominal level, but on the other hand rather conservative. A different type of exact inference is developed under Type II censoring. Inference was then less conservative; however, there are limitations with respect to the censoring mechanism and multivariate generalizations, software is not available, and the method requires extensive computational power. The performance of large-sample Wald, score and likelihood inference was also considered. Large-sample methods work remarkably well with small data sets, but inference by score statistics seems to be the best choice. There seem to be some problems with likelihood ratio inference that may originate from how this method handles infinite estimates of the regression parameter. Inference by Wald statistics can be quite conservative with very small data sets.

17.
Errors in measurement frequently occur in observing responses. If case–control data are based on certain reported responses, which may not be the true responses, then we have contaminated case–control data. In this paper, we first show that ordinary logistic regression analysis based on contaminated case–control data can lead to seriously biased conclusions. This is demonstrated by a theoretical argument, one example, and two simulation studies. We next derive the semiparametric maximum likelihood estimate (MLE) of the risk parameter of a logistic regression model when there is a validation subsample. The asymptotic normality of the semiparametric MLE is shown, along with a consistent estimate of the asymptotic variance. Our example and two simulation studies show these estimates to have reasonable performance in finite sample situations.

18.
A new estimation method for the dimension of a regression at the outset of an analysis is proposed. A linear subspace spanned by projections of the regressor vector X, which contains part or all of the modelling information for the regression of a vector Y on X, and its dimension are estimated by means of parametric inverse regression. Smooth parametric curves are fitted to the p inverse regressions via a multivariate linear model. No restrictions are placed on the distribution of the regressors. The estimate of the dimension of the regression is based on optimal estimation procedures. A simulation study shows the method to be more powerful than sliced inverse regression in some situations.

19.
We propose covariance-regularized regression, a family of methods for prediction in high dimensional settings that uses a shrunken estimate of the inverse covariance matrix of the features to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing the log-likelihood of the data, under a multivariate normal model, subject to a penalty; it is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyse gene expression data sets with multiple class and survival outcomes.
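The ridge special case is easy to sketch: shrink the sample covariance of the features toward the identity before inverting it in the normal equations. The data and the penalty λ below are synthetic illustrations, not the paper's gene expression application or its penalized-likelihood estimator:

```python
import numpy as np

def cov_regularized_coefs(X, y, lam):
    """Ridge-style covariance-regularized regression sketch: replace the
    sample covariance of the centered features with S + lam*I, then use
    its inverse in the normal equations. lam = 0 recovers OLS on
    centered data."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    S = Xc.T @ Xc / len(y)          # sample covariance of features
    s_xy = Xc.T @ yc / len(y)       # feature-response covariance
    return np.linalg.solve(S + lam * np.eye(X.shape[1]), s_xy)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
beta_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
beta_hat = cov_regularized_coefs(X, y, lam=0.01)
```

The paper's more general estimators replace the simple `S + lam*I` shrinkage with a penalized-likelihood estimate of the inverse covariance, which also yields the lasso and elastic net as special cases.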

20.
We consider an extension of conventional univariate Kaplan–Meier-type estimators for the hazard rate and the survivor function to multivariate censored data with a censored random regressor. It is an Akritas-type estimator which adapts the non-parametric conditional hazard rate estimator of Beran to more typical data situations in applied analysis. We show with simulations that the estimator has nice finite sample properties and our implementation appears to be fast. As an application we estimate non-parametric conditional quantile functions with German administrative unemployment duration data.
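A minimal version of the Beran estimator behind this approach is a kernel-weighted Kaplan–Meier product. The implementation and simulated data below are illustrative (Gaussian kernel, scalar covariate), not the authors' implementation:

```python
import numpy as np

def beran_survivor(t_grid, times, events, x, x0, h):
    """Beran's kernel-weighted Kaplan-Meier estimator of S(t | x0).
    times: possibly censored durations; events: 1 = failure, 0 = censored;
    x: scalar covariate; x0: evaluation point; h: bandwidth."""
    # Gaussian kernel weights around x0, normalized to sum to one
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w = w / w.sum()
    order = np.argsort(times)
    times, events, w = times[order], events[order], w[order]
    # weighted "at risk" mass: suffix sums of the kernel weights
    at_risk = np.concatenate([np.cumsum(w[::-1])[::-1], [0.0]])
    s = 1.0
    factors = np.ones(len(times))
    for i in range(len(times)):
        if events[i] == 1 and at_risk[i] > 0:
            s *= 1.0 - w[i] / at_risk[i]
        factors[i] = s
    # evaluate the resulting step function on the grid
    surv = np.ones_like(t_grid, dtype=float)
    for k, t in enumerate(t_grid):
        idx = np.searchsorted(times, t, side="right") - 1
        surv[k] = 1.0 if idx < 0 else factors[idx]
    return surv

# simulated censored data: longer lives for larger covariate values
rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 1, n)
true_t = rng.exponential(scale=1.0 + x)
cens = rng.exponential(scale=3.0, size=n)
times = np.minimum(true_t, cens)
events = (true_t <= cens).astype(int)
grid = np.array([0.5, 1.0, 2.0])
s_lo = beran_survivor(grid, times, events, x, x0=0.1, h=0.2)
s_hi = beran_survivor(grid, times, events, x, x0=0.9, h=0.2)
```

Setting all weights equal recovers the ordinary (unconditional) Kaplan–Meier estimator, which is why this estimator is described as a conditional extension of it.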


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)