11.
Determination of preventive maintenance is an important issue for systems under degradation. A typical maintenance policy calls for complete preventive repair actions at pre-scheduled times and minimal repair actions whenever a failure occurs. Under minimal repair, failures are modeled according to a nonhomogeneous Poisson process. A perfect preventive maintenance restores the system to the as-good-as-new condition. The motivation for this article was a maintenance data set related to power switch disconnectors. Two different types of failures could be observed for these systems according to their causes; the major difference between the failure types is their costs. Assuming that the system will be in operation for an infinite time, we find the expected cost per unit of time for each preventive maintenance policy and hence obtain the optimal strategy as a function of the process intensities. Assuming a parametric form for the intensity function, large-sample estimates for the optimal maintenance check points are obtained and discussed.
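The optimal policy described above can be sketched numerically. The snippet below is a minimal illustration, not the article's estimator: it assumes a power-law intensity (so the expected number of minimal repairs in (0, τ] is Λ(τ) = (τ/η)^β) and a single minimal-repair cost, whereas the article distinguishes two failure types with different costs. All parameter values are made up for illustration.

```python
import numpy as np

def cost_rate(tau, beta, eta, c_p, c_m):
    # Expected cost per unit time over an infinite horizon: one preventive
    # repair (cost c_p) plus the expected number of minimal repairs
    # Lambda(tau) = (tau / eta) ** beta (each costing c_m), per cycle of
    # length tau.
    return (c_p + c_m * (tau / eta) ** beta) / tau

def optimal_tau(beta, eta, c_p, c_m):
    # Closed-form minimizer of the cost rate, valid for beta > 1:
    # tau* = eta * (c_p / (c_m * (beta - 1))) ** (1 / beta)
    return eta * (c_p / (c_m * (beta - 1))) ** (1.0 / beta)

# Illustrative (assumed) parameters: shape, scale, preventive and
# minimal-repair costs.
beta, eta, c_p, c_m = 2.0, 100.0, 50.0, 10.0
tau_star = optimal_tau(beta, eta, c_p, c_m)

# Cross-check the closed form against a numerical grid search.
grid = np.linspace(1.0, 500.0, 100_000)
tau_grid = grid[np.argmin(cost_rate(grid, beta, eta, c_p, c_m))]
```

With these values the closed form and the grid search agree; plugging large-sample estimates of (β, η) into `optimal_tau` mirrors the plug-in strategy the abstract describes.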
12.
Survival data involving silent events are often subject to interval censoring (the event is only known to occur within a time interval) and to classification errors when a test with imperfect sensitivity and specificity is applied. Accounting for the nature of these data plays an important role in estimating the distribution of the time until the occurrence of the event. In this context, we incorporate validation subsets into the parametric proportional hazards model, and show that these additional data, combined with Bayesian inference, compensate for the lack of knowledge about test sensitivity and specificity, improving the parameter estimates. The proposed model is evaluated through simulation studies, and Bayesian analysis is conducted within a Gibbs sampling procedure. The posterior estimates obtained under the validation-subset models present lower bias and standard deviation than those from the scenario with no validation subset or from the model that assumes perfect sensitivity and specificity. Finally, we illustrate the usefulness of the new methodology with an analysis of real data on HIV acquisition in female sex workers that has been discussed in the literature.
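To make the misclassification mechanism concrete, the following sketch (not the paper's Gibbs sampler) writes the probability of a positive test at a visit time under an assumed exponential time-to-event model: a positive result arises either as a true positive once the event has occurred or as a false positive beforehand. The independent-visit likelihood and all parameter values are illustrative assumptions.

```python
import math

def p_positive(t, rate, sens, spec):
    # P(test positive at time t) under an exponential event-time model:
    # true positive if the event occurred by t, false positive otherwise.
    F = 1.0 - math.exp(-rate * t)          # P(event by time t)
    return sens * F + (1.0 - spec) * (1.0 - F)

def visit_loglik(times, results, rate, sens, spec):
    # Independent-visit approximation: sum of log-probabilities of the
    # observed (possibly misclassified) test results.
    ll = 0.0
    for t, pos in zip(times, results):
        p = p_positive(t, rate, sens, spec)
        ll += math.log(p if pos else 1.0 - p)
    return ll
```

Setting `sens = spec = 1.0` recovers the perfect-test likelihood; a validation subset, as in the paper, would contribute terms in which the true event status is observed, anchoring `sens` and `spec` in the posterior.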
13.
Estimation of the lifetime distribution of industrial components and systems yields very important information for manufacturers and consumers. However, obtaining reliability data is time consuming and costly. In this context, degradation tests are a useful alternative to lifetime and accelerated life tests in reliability studies. The approximate method is one of the most widely used techniques for degradation data analysis: it is very simple to understand and easy to implement numerically in any statistical software package. This paper uses time series techniques to propose a modified approximate method (MAM). The MAM improves on the standard method in two respects: (1) it treats previous observations in the degradation path as a Markov process for future prediction, and (2) it does not require a parametric form for the degradation path. Characteristics of interest such as the mean or median time to failure and percentiles, among others, are obtained with the modified method. A simulation study shows the improved properties of the modified method over the standard one. Both methods are also used to estimate the failure time distribution of a fatigue-crack-growth data set.
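For context, the standard approximate method that the MAM modifies can be sketched in a few lines: fit a parametric model to each unit's degradation path, extrapolate each fitted path to the failure threshold to obtain a pseudo failure time, and summarize the pseudo failure times. The simulated linear paths, threshold, and sample sizes below are illustrative assumptions, not the paper's fatigue-crack-growth data.

```python
import numpy as np

rng = np.random.default_rng(42)
threshold = 10.0                       # degradation level defining failure
t = np.arange(1, 11, dtype=float)      # common inspection times

# Simulate linear degradation paths y = slope * t + noise for 20 units,
# with unit-to-unit variation in the slopes.
slopes = rng.lognormal(mean=0.0, sigma=0.3, size=20)
paths = slopes[:, None] * t + rng.normal(0.0, 0.1, size=(20, t.size))

# Standard approximate method: least-squares line through the origin for
# each path, then extrapolate to the threshold for a pseudo failure time.
fitted_slopes = paths @ t / (t @ t)
pseudo_failure_times = threshold / fitted_slopes

# Characteristics of interest from the pseudo failure times.
median_ttf = np.median(pseudo_failure_times)
```

The MAM replaces the parametric per-path fit with a Markov-type prediction from previous path observations, so step two above is exactly what it relaxes.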
14.
Traditionally, reliability assessment of devices has been based on life tests (LTs) or accelerated life tests (ALTs). However, these approaches are not practical for high-reliability devices, which are unlikely to fail in experiments of reasonable length. For such devices, LTs or ALTs end up with a high censoring rate, compromising the traditional estimation methods. An alternative approach is to monitor the devices for a period of time and assess their reliability from the changes in performance (degradation) observed during the experiment. In this paper, we present a model to evaluate the problem of train wheel degradation, which is related to the train derailment failure mode. We first identify the most significant working conditions affecting wheel wear using a nonlinear mixed-effects (NLME) model in which the log-rate of wear is a linear function of working conditions such as side, truck, and axle positions. Next, we estimate the failure time distribution by working condition analytically. Point and interval estimates of reliability figures by working condition are also obtained. We compare the results of the NLME analysis with those obtained by an approximate degradation analysis.
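The analytical step in the abstract can be illustrated with a simplified version of such a model: if the log wear rate is linear in the working conditions with normal variation and wear accumulates linearly, then the time to reach a critical wear level is lognormal, and reliability by working condition has a closed form. The coefficients, variance, wear limit, and evaluation time below are hypothetical, chosen only to show the mechanics.

```python
import math

def reliability(t, x, beta, sigma, d):
    # Assumed model: log(wear rate) = x . beta + Normal(0, sigma^2), with
    # linear wear, so failure time T = d / rate satisfies
    # log T ~ Normal(log d - x . beta, sigma^2).
    mu_logT = math.log(d) - sum(xi * bi for xi, bi in zip(x, beta))
    z = (math.log(t) - mu_logT) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # P(T > t)

# Hypothetical coefficients: intercept, side, truck position, axle position.
beta = [-4.0, 0.3, 0.1, 0.05]
sigma, d = 0.4, 25.0        # illustrative log-rate SD and wear limit (mm)

r_cond_a = reliability(800.0, [1, 0, 0, 0], beta, sigma, d)  # baseline
r_cond_b = reliability(800.0, [1, 1, 1, 1], beta, sigma, d)  # harsher
```

Conditions with larger linear predictors wear faster and so have lower reliability at a given mileage, which is the kind of by-condition comparison the paper reports with point and interval estimates.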
15.
It is shown that, when measuring time on the Total Time on Test (TTT) scale, the superposition of overlapping realizations of a nonhomogeneous Poisson process is also a Poisson process and is sufficient for inferential purposes. Hence, many nonparametric procedures available when only one realization is observed can easily be extended to the overlapping-realizations setup. These include, for instance, the constrained maximum likelihood estimator of a monotonic intensity and bootstrap confidence bands based on kernel estimates of the intensity. The kernel estimate proposed here performs the smoothing on the TTT scale and is shown to behave approximately like a usual kernel estimate, but with a variable bandwidth inversely proportional to the number of realizations at risk. Likewise, bootstrap samples can be obtained from the single realization of the superimposed process. The methods are illustrated using a real data set consisting of the failure histories of 40 electrical power transformers.
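The TTT transformation at the heart of this construction is easy to compute: at calendar time t, the TTT clock has advanced by the total observation time accumulated across all realizations up to t, i.e. the integral of the at-risk count. The observation windows and event times below are made-up values for illustration.

```python
# Observation windows (start, end) for three overlapping realizations of
# the process, and the superposed event times -- illustrative values.
windows = [(0.0, 10.0), (2.0, 12.0), (5.0, 15.0)]
events = [1.5, 3.0, 4.0, 6.5, 7.0, 9.5, 11.0, 13.0]

def n_at_risk(t):
    # Number of realizations under observation at calendar time t.
    return sum(1 for a, b in windows if a <= t < b)

def ttt(t):
    # Total Time on Test scale: integral of the at-risk count up to t,
    # i.e. the sum of each window's overlap with (0, t].
    return sum(max(0.0, min(t, b) - a) for a, b in windows)

# Map the superposed event times to the TTT scale; on this scale the
# superposition behaves like a single Poisson process realization, so
# single-realization nonparametric procedures apply directly.
ttt_events = [ttt(t) for t in events]
```

The variable-bandwidth behavior mentioned in the abstract also falls out of this map: a fixed bandwidth on the TTT scale corresponds, back on the calendar scale, to a bandwidth divided by the local at-risk count `n_at_risk(t)`.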
16.
In practice, data are often measured repeatedly on the same individual at several points in time. Main interest often lies in characterizing the way the response changes over time and the predictors of that change. Marginal, mixed, and transition models are frequently considered the main models for continuous longitudinal data analysis. These approaches were proposed primarily for balanced longitudinal designs. However, in clinical studies data are usually unbalanced, and some restrictions are necessary in order to use these models. This paper was motivated by a data set of longitudinal height measurements in children of HIV-infected mothers, recorded at the university hospital of the Federal University of Minas Gerais, Brazil. This data set is severely unbalanced. The goal of this paper is to assess the application of continuous longitudinal models to the analysis of an unbalanced data set.
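To show what "unbalanced" means here, the sketch below simulates height trajectories in which every child is measured at a different, irregular set of ages, and fits them with a simple two-stage approach (a line per child, then averaged coefficients) that tolerates unbalance. This is only an illustrative stand-in, not the marginal, mixed, or transition models the paper assesses, and all numbers (growth rate, variances, sample sizes) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an unbalanced longitudinal data set: each child is measured at
# a different, irregular set of ages (months) -- hypothetical values.
n_children = 30
true_intercept, true_slope = 70.0, 0.6   # height (cm) vs. age (months)
records = []
for _ in range(n_children):
    ages = np.sort(rng.uniform(0.0, 60.0, size=rng.integers(3, 9)))
    b = rng.normal(0.0, 2.0)             # child-specific random intercept
    heights = (true_intercept + b + true_slope * ages
               + rng.normal(0.0, 1.0, ages.size))
    records.append((ages, heights))

# Two-stage approach that tolerates unbalance: fit a line per child,
# then average the child-level coefficients.
coefs = np.array([np.polyfit(a, h, 1) for a, h in records])
mean_slope, mean_intercept = coefs[:, 0].mean(), coefs[:, 1].mean()
```

A mixed-effects model would pool these child-level fits with shrinkage instead of a plain average, which is one reason the mixed formulation handles severe unbalance more gracefully.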