Similar articles
20 similar articles found (search time: 31 ms)
1.
Some problems of point and interval prediction in a trend-renewal process (TRP) are considered. TRPs, whose realizations depend on a renewal distribution as well as on a trend function, comprise the non-homogeneous Poisson and renewal processes and serve as useful reliability models for repairable systems. For these processes, some possible ideas and methods for constructing a predictor of the next failure time and a prediction interval for it are presented. A method of constructing the predictors is also presented for the case when the renewal distribution of a TRP is unknown (and consequently the likelihood function of the process is unknown). Using the proposed prediction methods, simulations are conducted to compare the predicted times and prediction intervals for a TRP with a completely unknown renewal distribution against the corresponding results for a TRP with a Weibull renewal distribution and a power-law trend function. The prediction methods are also applied to real data.
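The median-based point prediction idea for a TRP with power-law trend can be sketched as follows (a minimal illustration, assuming trend function Λ(t) = α·t^β and a known median of the renewal distribution; the function name and interface are illustrative, not taken from the paper). In transformed time u = Λ(t), the inter-event gaps are i.i.d. draws from the renewal distribution, so the median of the next gap maps back through Λ⁻¹:

```python
import numpy as np

def predict_next_failure(t_last, alpha, beta, renewal_median):
    """Median-based point predictor for the next failure time of a TRP
    with power-law trend Lambda(t) = alpha * t**beta.

    In transformed time u = Lambda(t), inter-event gaps are i.i.d. from
    the renewal distribution, so the median of the next transformed gap
    is added and mapped back through Lambda^{-1}."""
    u_last = alpha * t_last**beta            # transformed time of the last failure
    u_next = u_last + renewal_median         # median of next event in transformed time
    return (u_next / alpha) ** (1.0 / beta)  # back-transform to real time

# Sanity check: with beta = 1 and an Exp(1) renewal distribution the TRP
# reduces to a homogeneous Poisson process, so the predicted median next
# failure time is t_last + ln 2.
pred = predict_next_failure(t_last=5.0, alpha=1.0, beta=1.0,
                            renewal_median=np.log(2.0))
print(pred)  # 5 + ln 2 ≈ 5.693
```

The same back-transform applied to renewal-distribution quantiles other than the median yields prediction-interval endpoints in the original time scale.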

2.
Abstract.  The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that figure in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals to several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods including parametric fitting, kernel smoothing and subsampling techniques. Comparisons are carried out both for simulated data and on a data set involving time to immunization against rubella. The comparisons indicate that the likelihood ratio-based intervals are preferable from several perspectives.

3.
Sampling procedures using randomized observation-points are suggested for estimating parameters in renewal and Markov renewal models. The usual asymptotic properties of the maximum likelihood method are shown to hold. The method we suggest provides a solution to the ML estimation problem in either or both of the following situations: (i) observations on between-event intervals are unavailable, (ii) the interval densities are unknown or difficult to evaluate while their Laplace-Stieltjes transforms are known.

4.
We consider the problem of estimating the unknown parameters, reliability function and hazard function of a two-parameter bathtub-shaped distribution on the basis of a progressively Type-II censored sample. Maximum likelihood estimators and Bayes estimators are derived for the two unknown parameters, the reliability function and the hazard function. The Bayes estimators are obtained under squared error, LINEX and entropy loss functions. Using the Lindley approximation method, we also obtain approximate Bayes estimators under these loss functions. Numerical comparisons are made among the various proposed estimators in terms of their mean square errors, and some specific recommendations are given. Finally, two data sets are analyzed to illustrate the proposed methods.
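As one concrete piece of the machinery above: under the LINEX loss L(d, θ) = exp(a(d − θ)) − a(d − θ) − 1, the Bayes estimator is the well-known closed form −(1/a)·log E[exp(−aθ) | data]. A minimal Monte Carlo sketch (illustrative names, not the paper's code), which approximates that posterior expectation from posterior draws:

```python
import numpy as np

def linex_bayes_estimate(posterior_samples, a):
    """Bayes estimator under LINEX loss
    L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1,
    i.e. -(1/a) * log E[exp(-a*theta) | data], approximated by Monte Carlo."""
    s = np.asarray(posterior_samples, dtype=float)
    return -np.log(np.mean(np.exp(-a * s))) / a

# Check against the closed form for a normal posterior N(mu, sigma^2):
# by the normal moment generating function, the LINEX estimate is
# mu - a * sigma**2 / 2.
rng = np.random.default_rng(0)
samples = rng.normal(1.0, 0.5, size=200_000)
print(linex_bayes_estimate(samples, a=1.0))  # ≈ 1 - 0.125 = 0.875
```

For a > 0 the LINEX estimate sits below the posterior mean, reflecting the asymmetric penalty on overestimation.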

5.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the problem of the usual nonparametric maximum likelihood estimator that cannot produce a positive probability at the location of null effects in the process of estimating nonparametrically a mixing distribution. The profile likelihood is further used to help produce a range of null proportion values, corresponding to which the density estimates are all consistent. With a proper choice of a threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method has an apparently convergent trend in all cases studied and performs favourably when compared with existing methods in the literature.

6.
Summary  The use of the shifted (or zero-truncated) generalized Poisson distribution to describe the occurrence of events in production processes is considered. The methods of moments and maximum likelihood are proposed for estimating the parameters of the shifted generalized Poisson distribution. Control charts for the total number of events and for the average number of events are developed. Finally, a numerical example is used to illustrate the construction of the control charts.
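The moment inversion can be sketched directly, assuming the base generalized Poisson GP(θ, λ) has mean θ/(1 − λ) and variance θ/(1 − λ)³, so the +1-shifted version has mean 1 + θ/(1 − λ) and the same variance (the helper name is hypothetical, not from the paper):

```python
def shifted_gp_mom(mean_x, var_x):
    """Method-of-moments estimates (theta, lam) for a shifted generalized
    Poisson model, assuming base mean theta/(1-lam) and variance
    theta/(1-lam)**3; the shift adds 1 to the mean only.

    From mu = mean_x - 1 and var_x:  mu / var = (1 - lam)**2."""
    mu = mean_x - 1.0
    lam = 1.0 - (mu / var_x) ** 0.5
    theta = mu * (1.0 - lam)
    return theta, lam

# Round-trip check with theta = 2, lam = 0.3: compute the exact moments,
# then recover the parameters from them.
theta, lam = 2.0, 0.3
m = 1.0 + theta / (1.0 - lam)
v = theta / (1.0 - lam) ** 3
print(shifted_gp_mom(m, v))  # (2.0, 0.3) up to rounding
```

In practice the sample mean and variance replace the exact moments, and the resulting estimates can seed the maximum likelihood iteration.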

7.
Based on progressively type-II censored data, the problem of estimating unknown parameters and reliability function of a two-parameter generalized half-normal distribution is considered. Maximum likelihood estimates are obtained by applying expectation-maximization algorithm. Since they do not have closed forms, approximate maximum likelihood estimators are proposed. Several Bayesian estimates with respect to different symmetric and asymmetric loss functions such as squared error, LINEX and general entropy are calculated. The Lindley approximation method is applied to determine Bayesian estimates. Monte Carlo simulations are performed to compare the performances of the different methods. Finally, one real data set is analysed.

8.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics and cancer screening studies. Length-biased right-censored data have a unique structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and are more efficient than the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency of the estimators and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online.

9.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased weight estimates is found, but we advocate caution in their use.

10.
Summary.  We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

11.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed, and the inverse-Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to deal accurately with the nonstationarity, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.

12.
This paper proposes the singly truncated normal distribution as a model for radiance measurements from satellite-borne infrared sensors. These measurements are made in order to estimate sea surface temperatures, which can be related to radiances. Maximum likelihood estimation is used to obtain estimates of the unknown parameters. In particular, a procedure is described for estimating clear radiances in the presence of clouds, and the Kolmogorov-Smirnov statistic is used to test goodness of fit of the measurements to the singly truncated normal distribution. Tables of quantile values of the Kolmogorov-Smirnov statistic for several values of the truncation point are generated from a Monte Carlo experiment. Finally, a numerical example using satellite data is presented to illustrate the application of the procedures.
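A minimal sketch of maximum likelihood fitting for a left-truncated normal with known truncation point c (an illustration of the general technique, not the paper's specific procedure): each observation x > c has log-density log φ((x − μ)/σ) − log σ − log(1 − Φ((c − μ)/σ)), and the negative log-likelihood can be minimized numerically.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_truncated_normal(x, c):
    """MLE of (mu, sigma) for a normal truncated on the left at known c,
    i.e. density phi((x-mu)/sigma) / (sigma * (1 - Phi((c-mu)/sigma)))
    for x > c."""
    x = np.asarray(x, dtype=float)

    def nll(p):
        mu, log_sigma = p
        sigma = np.exp(log_sigma)            # keep sigma > 0
        z = (x - mu) / sigma
        tail = norm.sf((c - mu) / sigma)     # P(X > c) under N(mu, sigma^2)
        return -(norm.logpdf(z).sum()
                 - x.size * np.log(sigma)
                 - x.size * np.log(tail))

    res = minimize(nll, x0=[x.mean(), np.log(x.std())], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

# Simulate N(5, 1) truncated at c = 4 by rejection, then refit.
rng = np.random.default_rng(1)
raw = rng.normal(5.0, 1.0, size=50_000)
sample = raw[raw > 4.0]
mu_hat, sigma_hat = fit_truncated_normal(sample, c=4.0)
print(mu_hat, sigma_hat)  # both should be close to 5 and 1
```

The log-sigma parameterization keeps the optimizer away from the boundary σ = 0 without explicit constraints.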

13.
The Rasch model is useful in the problem of estimating the population size from multiple incomplete lists. It is of great interest to tell whether there are list effects and whether individuals differ in their catchabilities. These two important model selection problems can be easily addressed conditionally. A conditional likelihood ratio test is used to evaluate the list effects and several graphical methods are used to diagnose the individual catchabilities, while neither the unknown population size nor the unknown mixing distribution of individual catchabilities is required to be estimated. Three epidemiological applications are used for illustration.

14.
ABSTRACT

In this article, a two-parameter generalized inverse Lindley distribution capable of modeling an upside-down bathtub-shaped hazard rate function is introduced. Some statistical properties of the proposed distribution are derived explicitly. The methods of maximum likelihood, least squares, and maximum product spacings are used for estimating the unknown model parameters and are compared through a simulation study. Approximate confidence intervals, based on normal and log-normal approximations, are also computed. Two algorithms are proposed for generating random samples from the proposed distribution. A real data set is modeled to illustrate its applicability, and it is shown that our distribution fits much better than some other existing inverse distributions.

15.
Hybrid censoring is a mixture of Type-I and Type-II censoring schemes. This article presents statistical inference for the Weibull parameters when the data are hybrid censored. The maximum likelihood estimators (MLEs) and approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the MLEs are used to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are obtained under suitable priors and using the Gibbs sampling procedure. A method of obtaining the optimum censoring scheme based on a maximum information measure is also developed. Monte Carlo simulations are performed to compare the performances of the different methods, and one data set is analyzed for illustrative purposes.
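The censored-data likelihood behind such MLEs can be sketched generically (a right-censored Weibull fit, not the article's exact hybrid-censoring estimator): failures contribute log f(t) and censored times contribute log S(t), where f and S are the Weibull density and survival function. A hybrid scheme only determines which observations end up censored.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_censored_mle(times, observed):
    """MLE of Weibull (shape k, scale lam) from right-censored data:
    exact failure times contribute log f(t) and censored times log S(t),
    with f(t) = (k/lam)(t/lam)**(k-1) * exp(-(t/lam)**k) and
    S(t) = exp(-(t/lam)**k)."""
    t = np.asarray(times, dtype=float)
    d = np.asarray(observed, dtype=bool)     # True = exact failure time

    def nll(p):
        k, lam = np.exp(p)                   # positivity via log-parameters
        z = (t / lam) ** k
        log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
        log_s = -z
        return -(log_f[d].sum() + log_s[~d].sum())

    res = minimize(nll, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
    return np.exp(res.x)  # (k_hat, lam_hat)

# Simulate Weibull(shape 2, scale 1) lifetimes, censor at T = 1.2
# (a Type-I flavour of censoring), then refit.
rng = np.random.default_rng(2)
life = rng.weibull(2.0, size=20_000)         # scale-1 Weibull draws
obs = life <= 1.2
t = np.minimum(life, 1.2)
print(weibull_censored_mle(t, obs))          # close to (2.0, 1.0)
```

The same likelihood structure underlies the asymptotic confidence intervals mentioned above, via the observed information at the MLE.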

16.
When the unobservable Markov chain in a hidden Markov model is stationary, the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of states of the Markov chain. This suggests the number of states of the unobservable Markov chain can be estimated by determining the number of mixture components in the marginal distribution. This paper presents new methods for estimating the number of states in a hidden Markov model, and coincidentally the unknown number of components in a finite mixture, based on penalized quasi-likelihood and generalized quasi-likelihood ratio methods constructed from the marginal distribution. The procedures advocated are simple to calculate, and results obtained in empirical applications indicate that they are as effective as currently available methods based on the full likelihood. Under fairly general regularity conditions, the methods proposed generate strongly consistent estimates of the unknown number of states or components.

17.

Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other common distributions used, e.g., normal and logistic. In particular, it enables separate modeling of location and dispersion effects which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two methods of treating the data are considered: in raw discrete form and in smoothed continuousized form. A large scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations. Comparisons are made based on the ratios of the mean square errors (called the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple treatments setup is considered since in an actual application, cutpoints are common to all treatments, and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints.
The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.
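The method-of-moments step for a single beta treatment can be sketched as follows (using the standard Beta(α, β) moment formulas; the helper name is illustrative): solving mean = α/(α+β) and variance = αβ/((α+β)²(α+β+1)) for (α, β) gives closed-form estimates.

```python
import numpy as np

def beta_moment_estimates(x):
    """Method-of-moments estimates for Beta(alpha, beta) on (0, 1):
    invert mean = a/(a+b) and var = ab/((a+b)**2 (a+b+1)),
    which gives a = m*(m(1-m)/v - 1), b = (1-m)*(m(1-m)/v - 1)."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Check on a large sample from Beta(2, 5).
rng = np.random.default_rng(3)
sample = rng.beta(2.0, 5.0, size=100_000)
print(beta_moment_estimates(sample))  # close to (2.0, 5.0)
```

With ordinal data, the "continuousized" (smoothed) responses described above would play the role of the sample `x` here.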

18.
The generalized exponential is one of the most commonly used distributions for analyzing lifetime data. It has several desirable properties and can be used quite effectively to analyze skewed lifetime data. The main aim of this paper is to introduce an absolutely continuous bivariate generalized exponential distribution using the method of Block and Basu (1974); in effect, the Block and Basu exponential distribution is extended to the generalized exponential distribution. We call the new model the Block and Basu bivariate generalized exponential distribution and discuss its properties. The joint probability density function and the joint cumulative distribution function can be expressed in compact forms. The model has four unknown parameters, and the maximum likelihood estimators cannot be obtained in explicit form; computing them directly requires solving a four-dimensional optimization problem. An EM algorithm is proposed to compute the maximum likelihood estimates of the unknown parameters. One data analysis is provided for illustrative purposes. Finally, we propose some generalizations of the proposed model and compare them with each other.

19.
Survival data obtained from prevalent cohort study designs are often subject to length-biased sampling. Frequentist methods, including estimating equation approaches as well as full likelihood methods, are available for assessing covariate effects on survival from such data. Bayesian methods allow a probability interpretation of the parameters of interest and can easily provide the predictive distribution for future observations while incorporating weak prior knowledge on the baseline hazard function. There is, however, a lack of Bayesian methods for analyzing length-biased data. In this paper, we propose Bayesian methods for analyzing length-biased data under a proportional hazards model. The prior distribution for the cumulative hazard function is specified semiparametrically using I-splines. Bayesian conditional and full likelihood approaches are developed and applied to simulated and real data.

20.
Parameter estimation of the generalized Pareto distribution—Part II
This is the second part of a paper which focuses on reviewing methods for estimating the parameters of the generalized Pareto distribution (GPD). The GPD is a very important distribution in the extreme value context and is commonly used for modeling observations that exceed very high thresholds. The ultimate success of the GPD in applications evidently depends on the parameter estimation process. Quite a few methods exist in the literature for estimating the GPD parameters. Estimation procedures such as maximum likelihood (ML), the method of moments (MOM) and the probability-weighted moments (PWM) method were described in Part I of the paper. Here we continue the review, covering in particular methods that are robust and procedures that use the Bayesian methodology. As in Part I, we focus on those that are relatively simple and straightforward to apply to real-world data.
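For reference, the MOM estimators reviewed in Part I follow from the GPD moment formulas mean = σ/(1 − ξ) and variance = σ²/((1 − ξ)²(1 − 2ξ)), valid for shape ξ < 1/2 (this sketch assumes that common parameterization; sign conventions for the shape parameter vary across texts):

```python
import numpy as np

def gpd_moment_estimates(x):
    """Method-of-moments estimates (xi, sigma) for the GPD, assuming
    mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2*xi)).
    Since mean^2/var = 1 - 2*xi, the inversion is closed-form."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

# xi = 0 reduces the GPD to the exponential distribution, so an Exp(2)
# sample should give estimates near (0, 2).
rng = np.random.default_rng(4)
sample = rng.exponential(2.0, size=100_000)
print(gpd_moment_estimates(sample))  # close to (0.0, 2.0)
```

These simple estimates are often used as starting values for the ML iteration, which has no closed form for the GPD.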
