Similar Documents
A total of 20 similar documents were retrieved.
1.
Capacitance is a critical performance characteristic of the high-voltage-pulse capacitor, which is used to store and discharge electrical energy rapidly. Such capacitors are usually stored for a long period before being put into use. Experimental results and engineering experience indicate that capacitance increases with storage time and will eventually exceed the failure threshold, meaning the capacitor may fail during storage. This is a typical degradation-failure mode for long-storage products. Moreover, the capacitance degradation path can be divided into several stages based on its shifting characteristics: capacitance increases slowly or fluctuates in the initial storage stage, which lasts about three months; it then increases sharply in the middle stage, which lasts about four months; after these two stages, the capacitor enters a third stage in which capacitance increases steadily. This degradation phenomenon motivates a storage-life prediction method based on a multi-phase degradation path model. The storage performance degradation mechanism of the high-voltage-pulse capacitor was investigated, providing the physical basis for a multi-phase Wiener degradation model. An identification procedure for the transition points in the degradation path was proposed using the maximum likelihood principle (MLP). The Kruskal-Wallis test, a statistical method for testing whether two populations are consistent, showed that the identified transition points are statistically significant. The remaining parameters of the multi-phase degradation model are estimated by maximum likelihood estimation (MLE) once the transition points have been specified. A multi-phase inverse Gaussian (IG) distribution for storage life was derived for the capacitor, and point and interval estimation procedures for reliable storage life are constructed with the bootstrap method. The efficiency and effectiveness of the proposed multi-phase degradation model are compared with storage-life prediction under the single-phase assumption.
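The Kruskal-Wallis test used above to validate transition points can be sketched with pure NumPy. The two "phases" below are illustrative simulated increments, not the paper's data; for two groups the H statistic is compared with a chi-square(1) tail, computed here via `math.erfc`.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Hypothetical capacitance increments from two adjacent storage phases
# (values are illustrative, not from the paper).
phase1 = rng.normal(0.01, 0.005, size=30)   # slow initial drift
phase2 = rng.normal(0.05, 0.005, size=30)   # sharp middle-stage growth

# Kruskal-Wallis H for two groups, computed from ranks of the pooled data.
pooled = np.concatenate([phase1, phase2])
ranks = pooled.argsort().argsort() + 1      # ranks 1..N (no ties expected here)
n1, n2, N = len(phase1), len(phase2), len(pooled)
r1, r2 = ranks[:n1].mean(), ranks[n1:].mean()
H = 12.0 / (N * (N + 1)) * (n1 * (r1 - (N + 1) / 2) ** 2
                            + n2 * (r2 - (N + 1) / 2) ** 2)
p = erfc(sqrt(H / 2))                       # chi-square(1) tail probability
print(f"H = {H:.2f}, p = {p:.2e}")
```

A small p-value indicates the two phases have different increment distributions, supporting the identified transition point.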

2.
For decision-making purposes, one of the most common statistical applications is the comparison of two or more objects or characteristics. Sometimes it is not possible to compare all objects at once, or the number of objects under study is large and the differences between them small; a useful approach is then to compare them pairwise. Because of their practical nature, paired-comparison techniques are used in numerous fields. Many Bayesian statisticians have focused on these techniques and have successfully carried out Bayesian analyses of several paired-comparison models. The current study presents an analysis of the amended Davidson model (ADM), which has been extended by incorporating an order-effect parameter. Both informative and non-informative priors are used, and the model is studied for the case of four treatments compared pairwise.
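Davidson's basic paired-comparison model (with ties, and with the order-effect extension omitted here for brevity) assigns the three outcome probabilities below; the worth parameters `pi_i`, `pi_j` and tie parameter `nu` are purely illustrative values, not estimates from the study.

```python
import numpy as np

def davidson_probs(pi_i, pi_j, nu):
    """Outcome probabilities in Davidson's paired-comparison model with
    ties: win-i, win-j, and tie probabilities that sum to one."""
    denom = pi_i + pi_j + nu * np.sqrt(pi_i * pi_j)
    p_i = pi_i / denom                          # i preferred over j
    p_j = pi_j / denom                          # j preferred over i
    p_tie = nu * np.sqrt(pi_i * pi_j) / denom   # no preference
    return p_i, p_j, p_tie

p_i, p_j, p_tie = davidson_probs(2.0, 1.0, 0.5)
print(p_i, p_j, p_tie)   # the three probabilities sum to 1
```

The amended model in the study adds an order-effect parameter to these probabilities, which a Bayesian analysis treats like any other unknown.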

3.
Nonparametric approaches to classification have gained significant attention in the last two decades. In this paper, we propose a classification methodology based on multivariate rank functions and show that it is a Bayes rule for spherically symmetric distributions with a location shift. We show that a rank-based classifier is equivalent to the optimal Bayes rule under suitable conditions. We also present an affine-invariant version of the classifier. To accommodate different covariance structures, we construct a classifier based on the central rank region. Asymptotic properties of these classification methods are studied. We illustrate the performance of the proposed methods in comparison with some other depth-based classifiers using simulated and real data sets.

4.
In one-way ANOVA, most pairwise multiple comparison procedures depend on the normality assumption for the errors. In practice, errors frequently have non-normal distributions, so it is important to develop robust estimators of location and the associated variance under non-normality. In this paper, we consider the estimation of one-way ANOVA model parameters for pairwise multiple comparisons under the short-tailed symmetric (STS) distribution. The classical least squares method is neither efficient nor robust, and the maximum likelihood estimation technique is problematic in this situation. The modified maximum likelihood (MML) technique makes it possible to estimate model parameters in closed form under non-normal distributions. Hence, the use of MML estimators in the test statistic is proposed for pairwise multiple comparisons under the STS distribution. Efficiency and power comparisons of the test statistic based on the sample mean, trimmed mean, wave, and MML estimators are given, and the robustness of the tests under plausible alternatives and an inlier model is examined. It is demonstrated that the test statistic based on MML estimators is efficient and robust, and that the corresponding test is more powerful with the smallest Type I error.

5.
For some operable products with critical reliability constraints, it is important to estimate their residual lives accurately so that maintenance actions can be arranged suitably and efficiently. In the literature, most publications have dealt with this issue by considering only one-dimensional degradation data. However, this may not be reasonable when a product has two or more performance characteristics (PCs); in such situations, multi-dimensional degradation data should be taken into account. Here, methods of residual life (RL) estimation are developed for a target product with multivariate PCs, under the assumption that the degradation of the PCs over time is governed by a multivariate Wiener process with nonlinear drifts. Both population-based degradation information and the up-to-date degradation history of the target product are combined to estimate its RL. Specifically, the population-based degradation information is first used to obtain estimates of the unknown model parameters through the EM algorithm. The degradation history of the target product is then used to update the degradation model, from which the RL is estimated. To illustrate the validity and usefulness of the proposed method, a numerical example on fatigue cracks is presented and analysed.
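A minimal one-PC sketch of Wiener-process degradation and residual-life estimation is shown below. All parameter values are assumed for illustration, and the model is deliberately simpler than the paper's (a single PC with linear drift, no EM step): for a Wiener process with positive drift, the first-passage time to a threshold is inverse Gaussian, and its mean gives a point estimate of the RL.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameter values for a single-PC linear-drift Wiener process.
mu, sigma, dt, D = 0.5, 0.2, 0.1, 8.0   # drift, diffusion, time step, threshold
n_steps = 100
increments = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_steps)
x = np.concatenate([[0.0], np.cumsum(increments)])   # observed degradation path

# MLE of drift and diffusion from equally spaced increments.
mu_hat = increments.mean() / dt
sigma2_hat = increments.var() / dt

# Mean of the inverse Gaussian first-passage time from the current level to D.
rl_mean = (D - x[-1]) / mu_hat
print(f"mu_hat = {mu_hat:.3f}, estimated mean residual life = {rl_mean:.2f}")
```

The paper's multivariate, nonlinear-drift setting generalizes each of these three steps: simulation becomes a correlated vector process, the MLE is replaced by EM over the population data, and the first-passage distribution no longer has this closed-form mean.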

6.
The issue of residual life (RL) estimation plays an important role for products in use, especially expensive and reliability-critical ones. Many products have two or more performance characteristics (PCs). Here, an adaptive method of RL estimation based on a bivariate Wiener degradation process with time-scale transformations is presented. It is assumed that a product has two PCs, each governed by a Wiener process with a time-scale transformation, and that the dependency between the PCs is characterized by the Frank copula function. Parameters are estimated using the Bayesian Markov chain Monte Carlo method. Whenever new degradation information becomes available, the RL is re-estimated in an adaptive manner. A numerical example on fatigue cracks demonstrates the usefulness and validity of the proposed method.

7.
The K-means algorithm and the normal mixture model method are two common clustering methods. The K-means algorithm is a popular heuristic that gives reasonable clustering results when the component clusters are ball-shaped, but there are currently no analytical results for this algorithm when the component distributions deviate from the ball shape. This paper analytically studies how the K-means algorithm changes its classification rule as the normal component distributions become more elongated under the homoscedastic assumption, and compares this rule with the Bayes rule from the mixture model method. We show that the classification rules of both methods are linear, but that the slopes of the two classification lines change in opposite directions as the component distributions become more elongated. The classification performance of the K-means algorithm is then compared with that of the mixture model method via simulation. The comparison, which is limited to two clusters, shows that the K-means algorithm consistently performs poorly as the component distributions become more elongated, whereas the mixture model method can potentially, though not necessarily, exploit this change and achieve much better classification performance.
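The contrast can be reproduced in a toy experiment. The setup below is an assumed illustration, not the paper's simulation design: two homoscedastic normal clusters strongly elongated perpendicular to the line joining their means, a plain Lloyd's-algorithm K-means, and the Bayes rule, which for equal covariances is the linear boundary midway between the means.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two homoscedastic normal clusters elongated along the second axis
# (an assumed toy setup, not the paper's exact simulation design).
cov = np.array([[0.25, 0.0], [0.0, 9.0]])
X = np.vstack([rng.multivariate_normal([-2.0, 0.0], cov, 200),
               rng.multivariate_normal([2.0, 0.0], cov, 200)])
y = np.repeat([0, 1], 200)

def kmeans2(X, iters=50):
    """Plain Lloyd's algorithm with k = 2."""
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(2)])
    return labels

labels = kmeans2(X)
acc = max(np.mean(labels == y), np.mean(labels != y))   # resolve label switching

# Under homoscedastic normals, the Bayes rule is the linear boundary
# midway between the means; here that is the line x1 = 0.
bayes_acc = np.mean((X[:, 0] > 0).astype(int) == y)
print(f"k-means accuracy: {acc:.3f}, Bayes-rule accuracy: {bayes_acc:.3f}")
```

With strong elongation, K-means tends to split the data along the long axis to reduce within-cluster sum of squares, while the Bayes boundary stays between the true means, which is exactly the divergence of rules the paper analyzes.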

8.
The Birnbaum–Saunders (BS) fatigue life distribution is an important failure model in probabilistic physics-of-failure methods. It is more suitable for describing the lifetime behavior of fatigue-failure products than common life distributions such as the Weibull and lognormal distributions, and it is mainly applied in analytical studies of fatigue failure and performance degradation of electronic products. In this paper, characteristic properties such as the numerical characteristics and the shapes of the density and failure rate functions are studied for the generalized BS fatigue life distribution GBS(α, β, m). Point estimates and approximate interval estimates of the parameters are then proposed for GBS(α, β, m), and the precision of these estimates is investigated by Monte Carlo simulation. Finally, when the scale parameter satisfies an inverse power law model, the failure distribution model is derived for products following the two-parameter BS fatigue life distribution BS(α, β) under a progressive-stress accelerated life test, according to the time-conversion idea of the well-known Nelson assumption, and point estimates of the parameters are given.

9.
The normal linear discriminant rule (NLDR) and the normal quadratic discriminant rule (NQDR) are popular classifiers when working with normal populations. Several papers in the literature have compared these rules with respect to classification performance. An aspect which has not received attention, however, is the effect of an initial variable selection step on their relative performance. Cross-model-validation variable selection has been found to perform well in the linear case and can be extended to the quadratic case. We report the results of a simulation study comparing the NLDR and the NQDR with respect to post-variable-selection classification performance. Interestingly, the NQDR generally benefits from an initial variable selection step. We also comment briefly on the problem of estimating the post-selection error rates of the two rules.

10.
In our previous work, we developed a new distance function based on a derivative and showed that the resulting algorithm is effective. In contrast to well-known measures from the literature, our approach considers the general shape of a time series rather than the standard comparison of function values. The new distance was used in classification with the nearest neighbor rule. Here we improve on our previous technique by adding the second derivative. To provide a comprehensive comparison, we conducted a set of experiments testing effectiveness on 47 time series datasets from a wide variety of application domains. Our experiments show that the new method provides significantly more accurate classification on the examined datasets.
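The idea of augmenting a value-based distance with first- and second-derivative terms can be sketched as below. The weighting scheme and the simple discrete differences are assumptions for illustration; the paper's actual distance definition may combine the terms differently.

```python
import numpy as np

def deriv_dist(a, b, w0=1.0, w1=1.0, w2=1.0):
    """Weighted Euclidean distance on raw values plus first and second
    discrete derivatives (illustrative weights; not the paper's exact form)."""
    d0 = np.linalg.norm(a - b)                      # value comparison
    d1 = np.linalg.norm(np.diff(a) - np.diff(b))    # first derivative (slope)
    d2 = np.linalg.norm(np.diff(a, 2) - np.diff(b, 2))  # second derivative
    return w0 * d0 + w1 * d1 + w2 * d2

def nn_classify(train_X, train_y, query):
    """1-nearest-neighbor rule under the derivative-based distance."""
    dists = [deriv_dist(x, query) for x in train_X]
    return train_y[int(np.argmin(dists))]

t = np.linspace(0, 2 * np.pi, 50)
train_X = np.array([np.sin(t), np.cos(t), 0.5 * t])
train_y = np.array([0, 1, 2])
q = np.sin(t) + 0.05        # vertically shifted sine: same shape as class 0
print(nn_classify(train_X, train_y, q))   # -> 0
```

Because the derivative terms ignore vertical offsets, the shifted sine is matched to the sine prototype by shape, which is the behavior the shape-based distance is designed to capture.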

11.
Immuno-oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccines, effector cell therapy, and T-cell-stimulating antibodies. Checkpoint inhibitors such as cytotoxic T-lymphocyte-associated antigen 4 and programmed death-1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges for the accurate evaluation of clinical safety and efficacy, including late-onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient, so it is highly desirable to develop statistical methodologies and tools suited to investigating cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose-finding trials, we recommend the use of a time-to-event model-based design to handle late toxicities, a simple 3-step procedure for dose optimization, and flexible rule-based or model-based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints, designs, and tests, including the time-specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune-related response criteria, and the weighted log-rank and weighted Kaplan-Meier tests. The benefits and limitations of these methods are discussed, and recommendations are provided for applied researchers implementing these methods in clinical practice.
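Of the efficacy endpoints mentioned, the restricted mean survival time (RMST) is the area under the Kaplan-Meier curve up to a truncation time τ, which stays meaningful under the delayed and lasting treatment effects discussed above. A minimal sketch, assuming ties are handled by sequential processing and using toy data, is:

```python
import numpy as np

def km_rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve
    up to tau. Simple sketch; events=1 marks failures, 0 marks censoring."""
    order = np.argsort(times)
    times, events = np.asarray(times, float)[order], np.asarray(events)[order]
    s, prev_t, rmst = 1.0, 0.0, 0.0
    at_risk = len(times)
    for t, d in zip(times, events):
        t_clip = min(t, tau)
        rmst += s * (t_clip - prev_t)   # accumulate area at current level
        prev_t = t_clip
        if t >= tau:
            break
        if d:                           # event: step the survival curve down
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    if prev_t < tau:
        rmst += s * (tau - prev_t)      # flat tail out to tau
    return rmst

# Without censoring, RMST equals the mean of min(T, tau).
print(km_rmst([1.0, 2.0, 3.0, 4.0], [1, 1, 1, 1], tau=5.0))   # -> 2.5
```

Comparing RMST between arms yields a difference in expected survival time over [0, τ], an interpretable alternative to the hazard ratio when proportional hazards fails.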

12.
Multiple biomarkers are frequently observed or collected for detecting or understanding a disease. This article extends tools of receiver operating characteristic (ROC) analysis from the univariate-marker setting to the multivariate-marker setting for evaluating the predictive accuracy of biomarkers using a tree-based classification rule. Using an arbitrarily combined and-or classifier, an ROC function, a weighted ROC (WROC) function, and their conjugate counterparts are introduced for examining the performance of multivariate markers. Specific features of the ROC and WROC functions and other related statistics are discussed in comparison with the familiar properties of the univariate-marker case. Nonparametric methods are developed for estimating the ROC and WROC functions, the area under the curve, and the concordance probability. With emphasis on the population-average performance of markers, the proposed procedures and inferential results are useful for evaluating marker predictability based on multivariate marker measurements with different choices of markers, and for evaluating different and-or combinations in classifiers.
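The basic building blocks, the Mann-Whitney AUC estimate for a single marker and the operating point of an "and" combination rule, can be sketched as follows. The marker distributions and thresholds are assumed toy values, and this computes only one (TPR, FPR) point of the combined rule rather than the full WROC machinery of the article.

```python
import numpy as np

def auc(pos, neg):
    """Mann-Whitney (concordance) estimate of the area under the ROC curve."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(3)
# Two hypothetical markers; diseased subjects are shifted upward on both.
m1_pos, m1_neg = rng.normal(1, 1, 300), rng.normal(0, 1, 300)
m2_pos, m2_neg = rng.normal(1, 1, 300), rng.normal(0, 1, 300)

# "And" rule at thresholds (c1, c2): positive iff m1 > c1 AND m2 > c2.
c1 = c2 = 0.5
tpr = np.mean((m1_pos > c1) & (m2_pos > c2))
fpr = np.mean((m1_neg > c1) & (m2_neg > c2))
print(f"single-marker AUC: {auc(m1_pos, m1_neg):.3f}")
print(f"'and' rule at (0.5, 0.5): TPR = {tpr:.3f}, FPR = {fpr:.3f}")
```

Sweeping (c1, c2) jointly traces out the combined-classifier ROC surface that the article's ROC and WROC functions summarize.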

13.
The mean residual life function (MRLF) is the remaining life expectancy of a subject who has survived to a certain time point. In the presence of covariates, regression models are needed to study the association between MRLFs and covariates. If the survival time tends to be long, or the tail of its distribution is not observed, the restricted mean residual life must be considered. In this paper, we propose the proportional restricted mean residual life model for fitting survival data under right censoring. For inference on the model parameters, martingale estimating equations are developed and the asymptotic properties of the proposed estimators are established. In addition, a class of goodness-of-fit tests is presented to assess the adequacy of the model. The finite-sample behavior of the proposed estimators is evaluated through simulation studies, and the approach is applied to real data collected from a randomized clinical trial.
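The MRLF itself is easy to make concrete. The sketch below computes the empirical mean residual life from complete (uncensored) toy lifetimes; it ignores censoring and covariates, both of which the paper's regression model handles.

```python
import numpy as np

def empirical_mrl(times, t0):
    """Empirical mean residual life at t0 from complete lifetimes:
    the average remaining life among subjects surviving past t0."""
    times = np.asarray(times, dtype=float)
    survivors = times[times > t0]
    return (survivors - t0).mean() if len(survivors) else 0.0

lifetimes = [2.0, 4.0, 6.0, 8.0]
print(empirical_mrl(lifetimes, 3.0))   # mean of (1, 3, 5) -> 3.0
```

Restricting at a horizon τ amounts to replacing each remaining life by its minimum with τ - t0, which keeps the quantity finite when the tail is unobserved.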

14.
The situation considered in this paper is one in which a single complete sample and an additional set of k censored samples, each censored both above and below, are available from the gamma distribution with known integer-valued shape parameter. The purpose here is to predict an order statistic in a future sample, that is, in the (k+1)-th sample (or at stage k), based on the earlier samples. For this purpose, a predictive distribution is obtained for the general case of the gamma distribution with known shape parameter. Particular cases, including the exponential, are considered. A discussion of the comparison between the variances in the complete-sample case and the censored case is given, and an illustrative example is provided by a simulated life test.

15.
Sometimes the additive hazard rate model is more important to study than the celebrated proportional hazard rate model of Cox (1972). However, the concept of the hazard function is sometimes abstract compared with that of the mean residual life function. In this paper, we define a new model, the 'dynamic additive mean residual life model', in which the covariates are time dependent, and study the closure of this model under different stochastic orders.

16.
For reliability-critical and expensive products, it is necessary to estimate their residual lives based on available information, such as degradation data, so that proper maintenance actions can be arranged to reduce or even avoid failures. In this work, by assuming that the product-to-product variability of the degradation is characterized by a skew-normal distribution, a generalized Wiener process-based degradation model is developed, and the issue of residual life (RL) estimation for the target product is addressed in detail. The proposed degradation model provides great flexibility to capture a variety of degradation processes, since several commonly used Wiener process-based degradation models arise as special cases. Through the EM algorithm, the population-based degradation information is used to estimate the model parameters. Whenever new degradation measurements of the target product become available, the degradation model is first updated by the Bayesian method; in this way, the RL of the target product can be estimated in an adaptive manner. Finally, the developed methodology is demonstrated by a simulation study.

17.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting, unless regularization methods are used. In the context of classification, we propose a class of regularization methods based on shrinkage estimators, where the shrinkage combines variable selection with conditional maximum likelihood. Using Stein's unbiased estimate of the risk, we derive an estimator for the optimal shrinkage method within a certain class. We compare the optimal shrinkage method in the classification context with the optimal shrinkage method for estimating a mean vector under squared loss; the latter problem has been studied extensively, but the results of those studies turn out not to be fully relevant for classification. We demonstrate and examine our method on simulated data and compare it with the feature-annealed independence rule and Fisher's rule.

18.
Building on Skaug and Tjøstheim's approach, this paper proposes a new test for serial independence that compares the pairwise empirical distribution functions of a time series with the products of its marginals for various lags, where the number of lags increases with the sample size and different lags receive different weights; typically, more recent information receives a larger weight. The test has several appealing attributes. It is consistent against all pairwise dependences and is powerful against alternatives whose dependence decays to zero as the lag increases. Although the test statistic is a weighted sum of degenerate Cramér–von Mises statistics, it has a null asymptotic N(0, 1) distribution. The test statistic and its limiting distribution are invariant under any order-preserving transformation. The test applies to time series whose distributions can be discrete or continuous, with possibly infinite moments. Finally, the test statistic involves only ranking the observations and is computationally simple, avoiding smoothed nonparametric estimation. A simulation experiment studies the finite-sample performance of the proposed test in comparison with some related tests.
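The per-lag ingredient the test aggregates, a Cramér–von Mises-type contrast between the lag-k pairwise empirical CDF and the product of the marginals, can be sketched as below. This is a simplified, unweighted version evaluated at the observed pairs only; the paper's statistic additionally weights and sums such contrasts over a growing number of lags and standardizes to N(0, 1).

```python
import numpy as np

def cvm_lag(x, k):
    """Cramér–von Mises-type contrast at lag k between the pairwise
    empirical CDF and the product of marginal empirical CDFs (sketch)."""
    x = np.asarray(x, dtype=float)
    u, v = x[:-k], x[k:]
    n = len(u)
    # Empirical joint and product-of-marginals CDFs at each observed pair.
    joint = np.mean((u[None, :] <= u[:, None]) & (v[None, :] <= v[:, None]),
                    axis=1)
    marg = np.mean(u[None, :] <= u[:, None], axis=1) * \
           np.mean(v[None, :] <= v[:, None], axis=1)
    return n * np.mean((joint - marg) ** 2)

rng = np.random.default_rng(4)
iid = rng.normal(size=500)            # serially independent series
ar = np.empty(500)                    # strongly dependent AR(1) series
ar[0] = rng.normal()
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()
print(cvm_lag(iid, 1), cvm_lag(ar, 1))
```

Since the contrast depends on the data only through comparisons, it inherits the invariance to order-preserving transformations noted in the abstract.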

19.
In this paper we consider generalized linear models for binary data subject to inequality constraints on the regression coefficients, and propose a simple and efficient Bayesian method for parameter estimation and model selection using Markov chain Monte Carlo (MCMC). In implementing MCMC, we introduce appropriate latent variables and use a simple approximation of the link function to resolve computational difficulties and obtain convenient forms for the full conditional posterior densities of the parameters. Bayes factors are computed via Savage-Dickey density ratios and the method of Oh (Comput. Stat. Data Anal. 29:411–427, 1999), which requires posterior samples from the full model with no degenerate parameter together with the full conditional posterior densities of its elements. Since it uses one set of posterior samples from the full model for every model under consideration, it compares all possible models simultaneously and is very efficient relative to model selection methods that require fitting each candidate model. A simulation study shows that significant improvements can be made by taking the constraints into account. Real data on purchase intention of a product subject to order constraints are analyzed using the proposed method. The results show that some price changes significantly affect consumer behavior, and they underline the importance of simultaneous model comparison over separate pairwise comparisons, since the latter may be misleading by ignoring possible correlations between parameters.

20.
In this study, a new per-field classification method is proposed for supervised classification of remotely sensed multispectral image data of an agricultural area using Gaussian mixture discriminant analysis (MDA). In the proposed per-field method, the multivariate Gaussian mixture models constructed for the control and test fields can have a fixed or differing number of components, and each component can have a distinct or common covariance matrix structure. The discriminant function and the decision rule of the method are established from the average Bhattacharyya distance and its minimum value, respectively. The proposed per-field classification method is analyzed for different covariance matrix structures with fixed and differing numbers of components. We also classify the remotely sensed multispectral image data using the per-pixel classification method based on Gaussian MDA.
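The Bhattacharyya distance underlying the decision rule has a closed form for a pair of multivariate normals, sketched below with assumed toy parameters (the paper averages such distances over mixture components, which is not shown here).

```python
import numpy as np

def bhattacharyya_gauss(m1, S1, m2, S2):
    """Bhattacharyya distance between two multivariate normal densities:
    (1/8) dm' S^{-1} dm + (1/2) ln(det S / sqrt(det S1 det S2)),
    with S the average covariance and dm the mean difference."""
    S = 0.5 * (S1 + S2)
    dm = m1 - m2
    term1 = 0.125 * dm @ np.linalg.solve(S, dm)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

m1, m2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
S = np.eye(2)
print(bhattacharyya_gauss(m1, S, m2, S))   # -> 0.125 for equal covariances
```

A per-field rule in this spirit assigns a test field to the class whose (component-averaged) model minimizes this distance; the second term vanishes when the covariances coincide, leaving a Mahalanobis-type mean separation.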


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号