Similar Articles
 20 similar articles found
1.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional form of the intensity function is known and only some of its parameters are unknown; parametric statistical methods can then be applied to estimate or test the reliability model. In realistic situations, however, the functional form of the failure intensity is often poorly known or completely unknown, and functional (non-parametric) estimation methods must be used instead. Non-parametric techniques require no preliminary assumption about the model form and can therefore reduce parametric modeling bias. However, the existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particular features of software failure data into consideration.
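As a rough illustration of the kernel idea behind non-parametric intensity estimation (not the paper's specific estimator, which additionally handles the particularities of software failure data), each observed failure time can contribute a smooth bump to an estimate of the NHPP intensity. The failure times and bandwidth below are made up for illustration.

```python
import math

def kernel_intensity(failure_times, t, bandwidth):
    """Gaussian-kernel estimate of the NHPP intensity lambda(t):
    each observed failure time contributes one smooth bump."""
    return sum(
        math.exp(-0.5 * ((t - ti) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for ti in failure_times
    )

# Hypothetical failure times (hours) from a debugging record.
times = [5.0, 12.0, 14.0, 30.0, 33.0, 35.0, 61.0]
print([round(kernel_intensity(times, t, bandwidth=8.0), 4) for t in (10, 30, 60)])
```

The estimate is highest where failures cluster (around t = 30 here) and decays where failures are sparse; bandwidth choice controls the smoothness.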

2.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that gives a more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable "exponential tilting" similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for estimating the distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementation of these procedures is illustrated via simulation results, which concentrate on estimating the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data; additional simulations were conducted for estimating quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that in cases where the smoothed bootstrap is preferable to the standard bootstrap, the amount of resampling can be substantially reduced by importance resampling, with efficiency gains that depend on the bandwidth used in the kernel density estimation.
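A minimal sketch of the smoothed bootstrap itself (without the importance-resampling tilt the paper develops): resample with replacement, then jitter each draw with kernel noise at a chosen bandwidth. The data, bandwidth, and replicate count are made up for illustration.

```python
import random
import statistics

def smoothed_bootstrap(data, n_resamples, bandwidth, stat, rng):
    """Smoothed bootstrap: resample with replacement, then add
    Gaussian kernel noise of the given bandwidth to each draw."""
    out = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) + rng.gauss(0.0, bandwidth) for _ in data]
        out.append(stat(sample))
    return out

rng = random.Random(0)
data = [1.2, 0.7, 2.3, 1.9, 0.4, 1.1, 1.6, 2.0]
reps = smoothed_bootstrap(data, 2000, bandwidth=0.2, stat=statistics.mean, rng=rng)
print(round(statistics.mean(reps), 3))  # centred near the sample mean
```

With bandwidth 0 this reduces to the standard bootstrap; importance resampling would replace the uniform `rng.choice` with tilted resampling weights.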

3.
The two-parameter generalized exponential distribution has recently been used quite extensively to analyze lifetime data. In this paper it is embedded in a larger class of distributions obtained by introducing another shape parameter. The additional shape parameter gives the family more flexibility: the new family is positively skewed, and admits increasing, decreasing, unimodal and bathtub-shaped hazard functions. It can be viewed as a proportional reversed hazard family of distributions. The new family is analytically quite tractable and can also be used effectively to analyze censored data. Analyses of two data sets are performed and the results are quite satisfactory.

4.
The exponential distribution has extensive applications in reliability, and introducing a shape parameter to it has produced various distribution functions. Gupta and Kundu (2009) obtained another such distribution using Azzalini's method, named the weighted exponential (WE) distribution, which is applicable in reliability. The parameters of this distribution have recently been estimated by those authors in the classical framework. In this paper, Bayesian estimates of the parameters are derived. To this end we use Lindley's approximation for the integrals that cannot be solved in closed form. Furthermore, a Gibbs sampling procedure is used to draw Markov chain Monte Carlo samples from the posterior distribution, from which the Bayes estimates of the parameters are obtained. Estimation of the reliability and hazard functions is also discussed. Finally, classical and Bayesian estimation methods are compared through a Monte Carlo simulation study covering complete and Type-II censored samples.
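The MCMC machinery involved can be sketched on a deliberately simpler stand-in: a plain exponential model with a conjugate Gamma prior, sampled by random-walk Metropolis (the paper's WE posterior requires the full WE likelihood; the data and prior here are hypothetical). The conjugate setup lets us check the sampler against the known posterior mean.

```python
import math
import random

def log_post(lam, data, a=2.0, b=1.0):
    """Log posterior for an exponential rate lam with a Gamma(a, b) prior
    (conjugate: the posterior is Gamma(a + n, b + sum(data)))."""
    if lam <= 0:
        return float("-inf")
    n, s = len(data), sum(data)
    return (a + n - 1) * math.log(lam) - (b + s) * lam

def metropolis(data, steps, step_size, rng):
    """Random-walk Metropolis chain for the rate parameter."""
    lam, chain = 1.0, []
    for _ in range(steps):
        prop = lam + rng.gauss(0.0, step_size)
        if math.log(rng.random()) < log_post(prop, data) - log_post(lam, data):
            lam = prop
        chain.append(lam)
    return chain

rng = random.Random(1)
data = [0.8, 1.5, 0.3, 2.1, 0.9, 1.2, 0.6, 1.8, 0.4, 1.0]
chain = metropolis(data, 20000, 0.3, rng)
post_mean = sum(chain[5000:]) / len(chain[5000:])
print(round(post_mean, 2))  # close to the conjugate mean (a+n)/(b+sum(data))
```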

5.
The generalized normal Laplace distribution has been used in financial modeling because of its skewness and excess kurtosis. To estimate its parameters, we use a method based on minimizing the quadratic distance between the real and imaginary parts of the empirical and theoretical characteristic functions. The resulting quadratic distance estimator (QDE) is shown to be robust, consistent, and asymptotically normal. The goodness-of-fit test statistics presented follow an asymptotic chi-square distribution. The performance of the QDE is illustrated by simulation results and an application to financial data.
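The quadratic-distance idea can be sketched on a normal model (the paper's target is the generalized normal Laplace, whose characteristic function would replace the normal one below; the data and grid of evaluation points are made up, and the crude grid search stands in for a proper optimizer):

```python
import cmath
import random

def ecf(data, t):
    """Empirical characteristic function at t."""
    return sum(cmath.exp(1j * t * x) for x in data) / len(data)

def normal_cf(t, mu, sigma):
    """Theoretical characteristic function of N(mu, sigma^2)."""
    return cmath.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def quad_distance(ecf_vals, mu, sigma, grid):
    """Sum of squared differences of real and imaginary parts of the
    empirical and theoretical characteristic functions over the grid."""
    d = 0.0
    for t, e in zip(grid, ecf_vals):
        diff = e - normal_cf(t, mu, sigma)
        d += diff.real ** 2 + diff.imag ** 2
    return d

rng = random.Random(2)
data = [rng.gauss(3.0, 2.0) for _ in range(400)]
grid = [0.2 * k for k in range(1, 11)]
ecf_vals = [ecf(data, t) for t in grid]
mu_hat, s_hat = min(
    ((m / 10, s / 10) for m in range(20, 41) for s in range(10, 31)),
    key=lambda p: quad_distance(ecf_vals, p[0], p[1], grid),
)
print(mu_hat, s_hat)
```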

6.
This paper is concerned with the analysis of ordinal data through linear models for rank function measures. Primary attention is directed at pairwise Mann-Whitney statistics, for which dimension reduction is managed by use of a Bradley-Terry log-linear structure. The nature of linear models for such quantities is contrasted with that for mean ranks (or ridits). Aspects of application are illustrated with an example for which results of other methods are also given.

7.
The non-parametric maximum likelihood estimator (NPMLE) of the distribution function with doubly censored data can be computed using the self-consistent algorithm (Turnbull, 1974). We extend the self-consistent algorithm to include a constraint on the NPMLE. We then show how to construct confidence intervals and test hypotheses based on the NPMLE via the empirical likelihood ratio. Finally, we present some numerical comparisons of the performance of the above method with another method that makes use of the influence functions.
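The self-consistency idea can be sketched in the simpler right-censored case, where the fixed point of the iteration is the Kaplan-Meier estimator (Turnbull's doubly censored algorithm generalizes the same redistribution step; the toy data below are made up):

```python
def F_at(F, pts, x):
    """Step-function value of F at x (0 before the first mass point)."""
    val = 0.0
    for t in pts:
        if t <= x:
            val = F[t]
    return val

def self_consistent_cdf(times, events, iters=500):
    """Self-consistent CDF estimate at the distinct event times:
    each censored observation redistributes its mass to the right
    according to the current estimate, until a fixed point."""
    pts = sorted({t for t, d in zip(times, events) if d == 1})
    F = {t: (i + 1) / len(pts) for i, t in enumerate(pts)}  # crude start
    n = len(times)
    for _ in range(iters):
        new = {}
        for t in pts:
            total = 0.0
            for xi, di in zip(times, events):
                if di == 1:
                    total += 1.0 if xi <= t else 0.0
                elif t > xi and F_at(F, pts, xi) < 1.0:
                    Fx = F_at(F, pts, xi)
                    total += (F_at(F, pts, t) - Fx) / (1.0 - Fx)
            new[t] = total / n
        F = new
    return F

# events: 1 = observed failure, 0 = right-censored.
F = self_consistent_cdf([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
print({t: round(v, 3) for t, v in F.items()})  # matches Kaplan-Meier
```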

8.
In this article, we estimate the parameters of the exponential Pareto II distribution by two new methods. The first is based on the principle of maximum entropy (POME) and the second on the Kullback–Leibler divergence of survival functions (KLS). Monte Carlo simulated data are used to evaluate these methods and compare them with maximum likelihood. Finally, we fit the distribution to a set of real data using the proposed estimation procedures.
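To illustrate the POME principle in its simplest classical case (not the exponential Pareto II setting of the paper): the maximum-entropy density on (0, inf) subject to a fixed mean is exponential, so matching the sample mean yields rate = 1/mean, which here coincides with the MLE. The data are made up.

```python
def pome_exponential_rate(data):
    """POME estimate under a single mean constraint: the maximum-entropy
    density with E[X] fixed is exponential with rate 1 / mean(data)."""
    return len(data) / sum(data)

data = [0.5, 1.2, 0.8, 2.0, 1.5]
print(round(pome_exponential_rate(data), 4))
```

For richer families such as the exponential Pareto II, POME adds further moment constraints and the resulting system is solved numerically.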

9.
For comparing two cumulative hazard functions, we consider an extension of the Kullback–Leibler information to the cumulative hazard function, based on the ratio of cumulative hazard functions. Its estimate is then used as a goodness-of-fit test with Type II censored data. For an exponential null distribution, the proposed test statistic is shown to outperform other test statistics based on the empirical distribution function in the heavy-censoring case against increasing hazard alternatives.

10.
In this article, we introduce and study a class of distributions that has linear hazard quantile function. Various distributional properties and reliability characteristics of the class are studied. Some characterizations of the class of distributions are presented. The method of L-moments is employed to estimate parameters of the class of distributions. Finally, we apply the proposed class to a real data set.
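The L-moments method referred to works by equating sample L-moments to their model expressions. The first two sample L-moments can be computed from probability-weighted moments (the toy data are made up; matching these to the class's theoretical L-moments is the paper's estimation step):

```python
def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0 (the mean), l2 = 2*b1 - b0, where
    b1 = (1/n) * sum_i ((i-1)/(n-1)) * x_(i) over the order statistics."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))  # 0-based index
    return b0, 2 * b1 - b0

data = [2.0, 4.0, 6.0, 8.0]
l1, l2 = sample_l_moments(data)
print(l1, round(l2, 4))
```

l2 equals half the average absolute pairwise difference, a robust spread measure; that is why L-moment estimators are less sensitive to outliers than ordinary moment estimators.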

11.
Detecting the number of signals and estimating their parameters is an important problem in signal processing. Quite a number of papers have appeared in the last twenty years on estimating the parameters of sinusoidal components, but much less attention has been given to estimating the number of terms present in a sinusoidal signal. Fuchs developed a criterion, based on a perturbation analysis of the data autocorrelation matrix, to estimate the number of sinusoids; it is in some sense a subjective method. Recently Reddy and Biradar proposed two criteria based on AIC and MDL and developed an analytical framework for analyzing the performance of these criteria. In this paper we develop a method using extended order modelling and the singular value decomposition, similar to that of Reddy and Biradar. We use a penalty function technique, but instead of any fixed penalty function such as AIC or MDL, a class of penalty functions satisfying some special properties is used. We prove that any penalty function from this class gives a consistent estimate under the assumption that the errors are independent and identically distributed with mean zero and finite variance. We also obtain the probabilities of wrong detection for any particular penalty function, under assumptions somewhat weaker than those of Reddy and Biradar or of Kaveh et al. This gives some guidance for choosing a proper penalty function for a particular model. Simulations are performed to verify the usefulness of the analysis and to compare our methods with existing ones.

12.
In survival analysis, it is often of interest to test whether or not two survival time distributions are equal, particularly in the presence of censored data. One very popular statistic for this purpose is the weighted logrank statistic. Much attention has been focused on finding flexible weight functions to use within the weighted logrank statistic, and we propose yet another. We demonstrate that our weight function is more stable than one of the most popular, due to Fleming and Harrington, by means of asymptotic normal tests, bootstrap tests and permutation tests performed on two datasets with a variety of characteristics.
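The weighted logrank statistic itself has a simple form: at each distinct event time, compare observed and expected events in one group, weight the difference by w(t), and standardize by the hypergeometric variance. A minimal sketch with a constant weight (the toy data are made up; the paper's contribution is the choice of w(t)):

```python
def weighted_logrank(times1, events1, times2, events2, weight):
    """Weighted logrank z-statistic: weighted sum over event times of
    (observed - expected) events in group 1, over its standard error."""
    pooled = sorted({t for t, d in zip(times1 + times2, events1 + events2) if d == 1})
    num = den = 0.0
    for t in pooled:
        n1 = sum(1 for x in times1 if x >= t)          # at risk, group 1
        n2 = sum(1 for x in times2 if x >= t)          # at risk, group 2
        d1 = sum(1 for x, d in zip(times1, events1) if x == t and d == 1)
        d2 = sum(1 for x, d in zip(times2, events2) if x == t and d == 1)
        n, d_tot = n1 + n2, d1 + d2
        if n1 == 0 or n2 == 0 or n < 2:
            continue
        w = weight(t)
        num += w * (d1 - d_tot * n1 / n)
        den += w * w * d_tot * (n1 / n) * (n2 / n) * (n - d_tot) / (n - 1)
    return num / den ** 0.5

# events: 1 = observed, 0 = censored; w(t) = 1 gives the standard logrank.
t1, e1 = [2, 4, 6, 8], [1, 1, 1, 0]
t2, e2 = [1, 3, 5, 7], [1, 1, 0, 1]
z = weighted_logrank(t1, e1, t2, e2, weight=lambda t: 1.0)
print(round(z, 3))
```

Fleming-Harrington weights would replace the constant weight with a function of the pooled survival estimate, emphasizing early or late differences.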

13.
In this paper, we consider three different mixture models based on the Birnbaum-Saunders (BS) distribution: (1) a mixture of two different BS distributions, (2) a mixture of a BS distribution and a length-biased version of another BS distribution, and (3) a mixture of a BS distribution and its own length-biased version. For all three models, we study their characteristics, including the shapes of their density and hazard rate functions. For maximum likelihood estimation of the model parameters, we use the EM algorithm. For illustration, we analyze two data sets related to enzyme and depressive-condition problems. For the enzyme data, Model 1 provides the best fit, while for the depressive-condition data all three models fit well, with Model 3 providing the best fit.
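The EM mechanics for two-component mixtures can be sketched with normal components as a stand-in (the paper's E- and M-steps use BS and length-biased BS likelihoods instead; the simulated data below are made up):

```python
import math
import random

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_normals(data, iters=100):
    """EM for a two-component normal mixture: alternate posterior
    membership probabilities (E-step) and weighted updates (M-step)."""
    p, mu1, mu2, sd1, sd2 = 0.5, min(data), max(data), 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior probability each point came from component 1.
        w = [p * norm_pdf(x, mu1, sd1)
             / (p * norm_pdf(x, mu1, sd1) + (1 - p) * norm_pdf(x, mu2, sd2))
             for x in data]
        # M-step: weighted means, variances, and mixing proportion.
        s1, s2 = sum(w), len(data) - sum(w)
        p = s1 / len(data)
        mu1 = sum(wi * x for wi, x in zip(w, data)) / s1
        mu2 = sum((1 - wi) * x for wi, x in zip(w, data)) / s2
        sd1 = max(1e-3, (sum(wi * (x - mu1) ** 2 for wi, x in zip(w, data)) / s1) ** 0.5)
        sd2 = max(1e-3, (sum((1 - wi) * (x - mu2) ** 2 for wi, x in zip(w, data)) / s2) ** 0.5)
    return p, mu1, mu2, sd1, sd2

rng = random.Random(3)
data = [rng.gauss(0, 1) for _ in range(150)] + [rng.gauss(6, 1) for _ in range(150)]
p, mu1, mu2, sd1, sd2 = em_two_normals(data)
print(round(p, 2), round(mu1, 1), round(mu2, 1))
```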

14.
The cumulative incidence function is of great importance in the analysis of survival data when competing risks are present. Parametric modeling of such functions, which are by nature improper, suggests the use of improper distributions. One frequently used improper distribution is that of Gompertz, which captures only monotone hazard shapes. In some applications, however, subdistribution hazard estimates have been observed with unimodal shapes. An extension to the Gompertz distribution is presented which can capture unimodal as well as monotone hazard shapes. Important properties of the proposed distribution are discussed, and the proposed distribution is used to analyze survival data from a breast cancer clinical trial.
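For reference, the improper Gompertz baseline works as follows: with CDF F(t) = 1 - exp(-(lambda/gamma)(e^(gamma*t) - 1)) and a negative shape gamma, F plateaus below 1 as t grows, which is exactly the behaviour a cumulative incidence function needs (the parameter values below are made up; the paper's extension adds a parameter to allow unimodal subdistribution hazards):

```python
import math

def gompertz_cif(t, lam, gamma):
    """Gompertz CDF F(t) = 1 - exp(-(lam/gamma)(exp(gamma*t) - 1)).
    With gamma < 0 it is improper: F(inf) = 1 - exp(lam/gamma) < 1."""
    return 1.0 - math.exp(-(lam / gamma) * (math.exp(gamma * t) - 1.0))

lam, gamma = 0.2, -0.5          # hypothetical rate and (negative) shape
plateau = 1.0 - math.exp(lam / gamma)
print(round(gompertz_cif(50.0, lam, gamma), 4), round(plateau, 4))
```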

15.
Modelling volatility through a conditional variance function has been popular mainly because of its applications in financial risk management. Among existing approaches, we distinguish parametric GARCH models and nonparametric local polynomial approximation using weighted least squares or a Gaussian likelihood function. We introduce an alternative likelihood estimate of the conditional variance and show that substituting the error density with its estimate yields similar asymptotic properties; that is, the proposed estimate is adaptive to the error distribution. Theoretical comparison with existing estimates reveals substantial gains in efficiency, especially when the error distribution has fatter tails than the Gaussian. Simulated data confirm the theoretical findings, while an empirical example demonstrates the gains of the proposed estimate.

16.
Skew‐symmetric models offer a very flexible class of distributions for modelling data. These distributions can also be viewed as selection models for the symmetric component of the specified skew‐symmetric distribution. The estimation of the location and scale parameters corresponding to the symmetric component is considered here, with the symmetric component known. Emphasis is placed on using the empirical characteristic function to estimate these parameters. This is made possible by an invariance property of the skew‐symmetric family of distributions, namely that even transformations of random variables that are skew‐symmetric have a distribution only depending on the symmetric density. A distance metric between the real components of the empirical and true characteristic functions is minimized to obtain the estimators. The method is semiparametric, in that the symmetric component is specified, but the skewing function is assumed unknown. Furthermore, the methodology is extended to hypothesis testing. Two tests for a null hypothesis of specific parameter values are considered, as well as a test for the hypothesis that the symmetric component has a specific parametric form. A resampling algorithm is described for practical implementation of these tests. The outcomes of various numerical experiments are presented.

17.
In the present paper, we introduce and study a class of distributions that has the linear mean residual quantile function. Various distributional properties and reliability characteristics of the class are studied. Some characterizations of the class of distributions are presented. We then present generalizations of this class of distributions using the relationship between various quantile based reliability measures. The method of L-moments is employed to estimate parameters of the class of distributions. Finally, we apply the proposed class of distributions to a real data set.

18.
Moment generating functions, and more generally integral transforms, have been used for goodness-of-fit tests for several decades. Given a set of observations, the empirical transform is easy to compute, being simply a sample mean, and by uniqueness properties such functions can be used for goodness-of-fit tests. This paper focuses on time series observations from a stationary process for which the moment generating function exists and the correlations have long memory. For long-memory processes, the infinite sum of the correlations diverges and realizations tend to show spurious trend-like patterns where there may be none. Our aim is to use the empirical moment generating function to test the null hypothesis that the marginal distribution is Gaussian. We provide a simple proof of a central limit theorem using ideas from Gaussian subordination models (Taqqu, 1975) and derive critical regions for a graphical test of normality, namely the T3-plot (Ghosh, 1996). Some simulated and real data examples are used for illustration.
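The "simply a sample mean" remark is literal: the empirical MGF at t is the average of exp(t*X_i). A quick check against the standard normal MGF exp(t^2/2), using i.i.d. draws as a stand-in for the paper's long-memory series:

```python
import math
import random

def empirical_mgf(data, t):
    """Empirical moment generating function: the sample mean of exp(t*X)."""
    return sum(math.exp(t * x) for x in data) / len(data)

rng = random.Random(4)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]
# For a standard normal the true MGF is exp(t^2 / 2).
for t in (0.5, 1.0):
    print(round(empirical_mgf(data, t), 2), round(math.exp(t * t / 2), 2))
```

Under long-range dependence the same statistic remains easy to compute, but its sampling fluctuations are governed by the central limit theorem for Gaussian subordination rather than the i.i.d. one.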

19.
Estimating functions can have multiple roots, in which case the statistician must choose among the roots to estimate the parameter. Standard asymptotic theory shows that in a wide variety of cases there exists a unique consistent root, which lies asymptotically close to other consistent (possibly inefficient) estimators of the parameter. For this reason, attention has largely focused on selecting this root and determining its approximate asymptotic distribution. In this paper, however, we concentrate on the exact distribution of the roots as a random set. In particular, we propose higher-order root intensity functions as a tool for examining the properties of the roots and determining their most problematic features. The use of root intensity functions of first and second order is illustrated by application to the score function of the Cauchy location model.

20.
Loss functions express the loss to society, incurred through the use of a product, in monetary units. Underlying this concept is the notion that any deviation of a product characteristic from its target implies a degradation in product performance and hence a loss. Spiring (1993), in response to criticisms of the quadratic loss function, developed the reflected normal loss function, which is based on the normal density function. We give some modifications of these loss functions to simplify their application and provide a framework for the reflected normal loss function that accommodates a broader class of symmetric loss situations. These modifications also facilitate the unification of both loss functions and their comparison through expected loss. Finally, we give a simple method for determining the parameters of the modified reflected normal loss function from loss information at multiple values of the product characteristic, and an example illustrating the flexibility of the proposed model and the determination of its parameters.
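Spiring's reflected normal loss is an inverted normal density bump: zero at the target and saturating at a maximum loss K rather than growing without bound like the quadratic loss (the target, K, and shape value below are made up for illustration):

```python
import math

def reflected_normal_loss(y, target, K, gamma):
    """Reflected normal loss L(y) = K * (1 - exp(-(y-target)^2 / (2*gamma^2))).
    K is the maximum loss; gamma controls how fast loss rises off target."""
    return K * (1.0 - math.exp(-((y - target) ** 2) / (2.0 * gamma ** 2)))

print(reflected_normal_loss(10.0, 10.0, 50.0, 2.0))            # on target: zero loss
print(round(reflected_normal_loss(12.0, 10.0, 50.0, 2.0), 2))  # moderate deviation
print(round(reflected_normal_loss(30.0, 10.0, 50.0, 2.0), 2))  # far off: near K
```

The bounded maximum K is what answers the criticism of the quadratic loss, whose penalty grows unrealistically for large deviations; the determination of K and gamma from observed losses is the fitting problem the abstract describes.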
