Similar Articles
20 similar articles found (search time: 15 ms)
1.
The likelihood function from a large sample is commonly assumed to be approximately a normal density function. The literature supports, under mild conditions, an approximate normal shape about the maximum; but typically a stronger result is needed: that the normalized likelihood itself is approximately a normal density. In a transformation-parameter context, we consider the likelihood normalized relative to right-invariant measure, and in the location case under moderate conditions show that the standardized version converges almost surely to the standard normal. Also in a transformation-parameter context, we show that almost sure convergence of the normalized and standardized likelihood to a standard normal implies that the standardized distribution for conditional inference converges almost surely to a corresponding standard normal. This latter result is of immediate use for a range of estimating, testing, and confidence procedures on a conditional-inference basis.

2.
The maximum likelihood estimation of parameters of the Poisson binomial distribution, based on a sample with exact and grouped observations, is considered by applying the EM algorithm (Dempster et al., 1977). The results of Louis (1982) are used in obtaining the observed information matrix and accelerating the convergence of the EM algorithm substantially. Maximum likelihood estimation from samples consisting entirely of complete (Sprott, 1958) or grouped observations is treated as a special case of the estimation problem mentioned above. A brief account is given of the implementation of the EM algorithm when the sampling distribution is the Neyman Type A, since the latter is a limiting form of the Poisson binomial. Numerical examples based on real data are included.
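The abstract's EM setup mixes exact and interval-grouped counts. The sketch below illustrates the same E-step/M-step mechanics on a plain Poisson model rather than the paper's Poisson binomial; the function names and the simple `(lo, hi)` grouping scheme are illustrative assumptions, not the paper's notation.

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_grouped_poisson(exact, grouped, lam=1.0, iters=200):
    """EM for the mean of a plain Poisson model when some observations
    are exact counts and others were only recorded as interval-grouped.

    exact   : list of exactly observed counts
    grouped : list of (lo, hi) pairs; only "count fell in [lo, hi]" is known
    """
    n = len(exact) + len(grouped)
    for _ in range(iters):
        total = float(sum(exact))
        for lo, hi in grouped:
            ks = range(lo, hi + 1)
            probs = [pois_pmf(k, lam) for k in ks]
            z = sum(probs)
            # E-step: replace each grouped count by its conditional expectation
            total += sum(k * p for k, p in zip(ks, probs)) / z
        lam = total / n  # M-step: closed-form Poisson MLE given completed data
    return lam

lam_hat = em_grouped_poisson([2, 3, 4], [(0, 20)])
```

With a grouping interval that covers essentially all the mass, the fixed point is the ordinary MLE of the exact observations, which makes the mechanics easy to check by hand.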

3.
A sampling plan with a polynomial loss function for the exponential distribution is considered. From the distribution of the maximum likelihood estimator of the mean of an exponential distribution based on Type-I and Type-II hybrid censored samples, we obtain an explicit expression for the Bayes risk of a sampling plan with a quadratic loss function. Some numerical examples and comparisons are given to illustrate the effectiveness of the proposed method, and a robustness study reveals that the proposed optimal sampling plans are quite robust.

4.
In this paper, we introduce a new lifetime distribution by compounding exponential and Poisson–Lindley distributions, named the exponential Poisson–Lindley (EPL) distribution. A practical situation where the EPL distribution is more appropriate for modelling lifetime data than the exponential–geometric, exponential–Poisson and exponential–logarithmic distributions is presented. We obtain the density and failure rate of the EPL distribution and properties such as mean lifetime, moments, order statistics and Rényi entropy. Furthermore, estimation by maximum likelihood and inference for large samples are discussed. The paper is motivated by two applications to real data sets, and we hope that this model will find wider applicability in survival and reliability analysis.

5.
In this paper we consider logspline density estimation for data that may be left-truncated or right-censored. For randomly left-truncated and right-censored data the product-limit estimator is known to be a consistent estimator of the survivor function, having a faster rate of convergence than many density estimators. The product-limit estimator and B-splines are used to construct the logspline density estimate for possibly censored or truncated data. Rates of convergence are established when the log-density function is assumed to be in a Besov space. An algorithm involving a procedure similar to maximum likelihood, stepwise knot addition, and stepwise knot deletion is proposed for the estimation of the density function based upon sample data. Numerical examples are used to show the finite-sample performance of inference based on the logspline density estimation.

6.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule based on the relative change in the parameter estimates to decide on sample size in each iteration and a stopping time for the algorithm. An important new concept is acquiring absolute convergence of the algorithm through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike the penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty while estimating standard errors for the parameters.
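The E-step above draws frailties from their posterior by rejection sampling. A generic rejection sampler looks as follows; the toy Beta target stands in for the frailty posterior, which depends on model details the abstract does not spell out.

```python
import random

def rejection_sample(target, proposal_sampler, proposal_pdf, M, n):
    """Draw n samples from a density proportional to target(z), using an
    envelope M * proposal_pdf(z) >= target(z) for all z."""
    out = []
    while len(out) < n:
        z = proposal_sampler()
        # accept z with probability target(z) / (M * proposal_pdf(z))
        if random.random() * M * proposal_pdf(z) <= target(z):
            out.append(z)
    return out

# Toy target: an unnormalised Beta(2, 5) kernel on (0, 1), enveloped by the
# uniform proposal (pdf = 1); max of x(1-x)^4 is 0.08192 at x = 0.2, so M = 0.1 works.
random.seed(0)
draws = rejection_sample(lambda x: x * (1 - x) ** 4, random.random, lambda x: 1.0, 0.1, 5000)
```

The acceptance rate equals the normalizing constant divided by M, so a tight envelope matters in practice.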

7.
In reliability studies, the three quantities (1) the survival function, (2) the failure rate and (3) the mean residual life function are all equivalent, in the sense that given any one of them, the other two can be determined. In this paper we consider the class of exponential-type distributions and study its mixture. Given any one of the three above-mentioned quantities of the mixture, a method is developed for determining the mixing density. Some examples are provided as illustrations. Some well-known results follow trivially.

8.
The paper focuses on some recent developments in nonparametric mixture distributions. It discusses nonparametric maximum likelihood estimation (NPMLE) of the mixing distribution and emphasizes gradient-type results, especially global results and global convergence of algorithms such as the vertex direction or vertex exchange method. However, the NPMLE (or the algorithms constructing it) also provides an estimate of the number of components of the mixing distribution, which might be undesirable for theoretical reasons or disallowed by the physical interpretation of the mixture model. When the number of components is fixed in advance, the aforementioned algorithms cannot be used, and globally convergent algorithms do not exist to date. Instead, the EM algorithm is often used to find maximum likelihood estimates; however, in this case multiple maxima often occur. An example from a meta-analysis of vitamin A and childhood mortality is used to illustrate the considerable inferential importance of identifying the correct global maximum of the likelihood. To improve the behaviour of the EM algorithm we suggest a combination of gradient function steps and EM steps to achieve global convergence, leading to the EM algorithm with gradient function update (EMGFU). This algorithm retains the number of components at exactly k and typically converges to the global maximum. The behaviour of the algorithm is illustrated with several examples.

9.
Mixture models are used in a large number of applications, yet there remain difficulties with maximum likelihood estimation. For instance, the likelihood surface for finite normal mixtures often has a large number of local maximizers, some of which do not give a good representation of the underlying features of the data. In this paper we present diagnostics that can be used to check the quality of an estimated mixture distribution. Particular attention is given to normal mixture models since they frequently arise in practice. We use the diagnostic tools for finite normal mixture problems and in the nonparametric setting, where the difficult problem of determining a scale parameter for a normal mixture density estimate is considered. A large-sample justification for the proposed methodology is provided, and we illustrate its implementation through several examples.
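The multiple-local-maximizer problem mentioned above is easy to reproduce: rerunning EM for a normal mixture from different starts and comparing the returned log-likelihoods is a crude way to probe the likelihood surface. The sketch below (my own minimal EM, not the paper's diagnostics) fits a two-component univariate normal mixture from given starting means.

```python
import math, random

def em_normal_mixture(x, mu1, mu2, iters=100):
    """EM for a two-component univariate normal mixture, started from the
    given component means; returns the log-likelihood and fitted parameters.
    Different starts can land on different local maximizers."""
    w, s1, s2 = 0.5, 1.0, 1.0
    n = len(x)
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for xi in x:
            p1 = w * math.exp(-0.5 * ((xi - mu1) / s1) ** 2) / s1
            p2 = (1 - w) * math.exp(-0.5 * ((xi - mu2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted means, standard deviations and mixing weight
        r1, r2 = sum(r), n - sum(r)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / r1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / r2
        s1 = max(math.sqrt(sum(ri * (xi - mu1) ** 2 for ri, xi in zip(r, x)) / r1), 1e-6)
        s2 = max(math.sqrt(sum((1 - ri) * (xi - mu2) ** 2 for ri, xi in zip(r, x)) / r2), 1e-6)
        w = r1 / n
    c = math.sqrt(2 * math.pi)
    ll = sum(math.log(w * math.exp(-0.5 * ((xi - mu1) / s1) ** 2) / (s1 * c)
                      + (1 - w) * math.exp(-0.5 * ((xi - mu2) / s2) ** 2) / (s2 * c))
             for xi in x)
    return ll, (w, mu1, mu2, s1, s2)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(5, 1) for _ in range(100)]
ll, (w, m1, m2, sd1, sd2) = em_normal_mixture(x, 0.0, 5.0)
```

Comparing `ll` across several starting pairs `(mu1, mu2)` exposes when EM has stopped at a poor local maximizer.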

10.
Nonparametric estimation of the probability density function f° of a lifetime distribution based on arbitrarily right-censored observations from f° has been studied extensively in recent years. In this paper the density estimators from censored data that have been obtained to date are outlined. Histogram, kernel-type, maximum likelihood, series-type, and Bayesian nonparametric estimators are included. Since estimation of the hazard rate function can be considered as giving a density estimate, all known results concerning nonparametric hazard rate estimation from censored samples are also briefly mentioned.
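One simple instance of the kernel-type estimators surveyed here places a kernel at each uncensored observation, weighted by the jump of the product-limit (Kaplan–Meier) estimator at that time. The sketch below assumes no tied times; it is an illustration of the idea, not any specific estimator from the survey.

```python
import math

def km_weights(times, events):
    """Kaplan-Meier jump sizes: the probability mass the product-limit
    estimator places on each uncensored time (assumes no tied times)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n, surv, weights = len(times), 1.0, {}
    for rank, i in enumerate(order):
        if events[i]:                      # an observed death, not a censoring
            at_risk = n - rank
            new_surv = surv * (1 - 1 / at_risk)
            weights[i] = surv - new_surv   # KM jump at this time
            surv = new_surv
    return weights

def censored_kde(x, times, events, h):
    """Density estimate from right-censored data: a Gaussian kernel
    placed at each uncensored time, weighted by its KM jump."""
    c = h * math.sqrt(2 * math.pi)
    return sum(w * math.exp(-0.5 * ((x - times[i]) / h) ** 2) / c
               for i, w in km_weights(times, events).items())

weights = km_weights([1.0, 2.0, 3.0, 4.0], [True, True, False, True])
```

With no censoring the weights reduce to 1/n each and the estimator is the ordinary kernel density estimate; censored times contribute no kernel but inflate the weights of later death times.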

11.
We discuss in this paper the assessment of local influence in univariate elliptical linear regression models. This class includes all symmetric continuous distributions, such as normal, Student-t, Pearson VII, exponential power and logistic, among others. We derive the appropriate matrices for assessing the local influence on the parameter estimates and on predictions by considering as influence measures the likelihood displacement and a distance based on the Pearson residual. Two examples with real data are given for illustration.

12.
For the generalized exponential (GE) distribution, the maximum likelihood method does not provide an explicit estimator for the scale parameter based on a progressively Type-II censored sample. This paper provides a simple method of deriving an explicit estimator by approximating the likelihood function. A Monte Carlo simulation is used to investigate the accuracy of this estimator and two examples are given to illustrate this method of estimation.
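For contrast with the censored case discussed above, the complete-sample GE likelihood is easy to maximize: with cdf F(t) = (1 − e^(−λt))^α, the shape α has a closed form given the scale λ, so profiling over λ suffices. The grid search and names below are my own sketch, not the paper's approximation.

```python
import math, random

def ge_profile_mle(x, lam_grid):
    """Profile MLE for the generalized exponential GE(alpha, lambda) with
    cdf F(t) = (1 - exp(-lambda*t))**alpha, from a complete sample.
    For fixed lambda, alpha has a closed form; lambda is then found by
    maximizing the profile log-likelihood over a grid."""
    n = len(x)
    best = (-float("inf"), None, None)
    for lam in lam_grid:
        s = sum(math.log(1 - math.exp(-lam * xi)) for xi in x)  # always < 0
        alpha = -n / s                                          # closed-form shape
        ll = n * math.log(alpha * lam) - lam * sum(x) + (alpha - 1) * s
        if ll > best[0]:
            best = (ll, alpha, lam)
    return best[1], best[2]

# Simulate GE data by inverting the cdf: t = -log(1 - u**(1/alpha)) / lambda
random.seed(2)
alpha0, lam0 = 2.0, 1.5
data = [-math.log(1 - random.random() ** (1 / alpha0)) / lam0 for _ in range(2000)]
a_hat, l_hat = ge_profile_mle(data, [0.5 + 0.02 * k for k in range(150)])
```

Under progressive Type-II censoring the profile step no longer has a closed form, which is exactly the gap the paper's approximate estimator fills.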

13.
We consider semiparametric models for which solution of Horvitz–Thompson or inverse probability weighted (IPW) likelihood equations with two-phase stratified samples leads to consistent and asymptotically Gaussian estimators of both Euclidean and non-parametric parameters. For Bernoulli (independent and identically distributed) sampling, standard theory shows that the Euclidean parameter estimator is asymptotically linear in the IPW influence function. By proving weak convergence of the IPW empirical process, and borrowing results on weighted bootstrap empirical processes, we derive a parallel asymptotic expansion for finite population stratified sampling. Several of our key results have been derived already for Cox regression with stratified case–cohort and more general survey designs. This paper is intended to help interpret this previous work and to pave the way towards a general Horvitz–Thompson approach to semiparametric inference with data from complex probability samples.

14.
Composite likelihood methods have been receiving growing interest in a number of different application areas, where the likelihood function is too cumbersome to be evaluated. In the present paper, some theoretical properties of the maximum composite likelihood estimate (MCLE) are investigated in more detail. Robustness of consistency of the MCLE is studied in a general setting, and clarified and illustrated through some simple examples. We also carry out a simulation study of the performance of the MCLE in a constructed model suggested by Arnold (2010) that is not multivariate normal, but has multivariate normal marginal distributions.

15.
Numerous estimation techniques for regression models have been proposed. These procedures differ in how sample information is used in the estimation procedure. The efficiency of ordinary least squares (OLS) estimators implicitly assumes normally distributed residuals and is very sensitive to departures from normality, particularly to "outliers" and thick-tailed distributions. Least absolute deviation (LAD) estimators are less sensitive to outliers and are optimal for Laplace random disturbances, but not for normal errors. This paper reports Monte Carlo comparisons of OLS, LAD, two robust estimators discussed by Huber, three partially adaptive estimators, Newey's generalized method of moments estimator, and an adaptive maximum likelihood estimator based on a normal kernel studied by Manski. This paper is the first to compare the relative performance of some adaptive robust estimators (partially adaptive and adaptive procedures) with some common nonadaptive robust estimators. The partially adaptive estimators are based on three flexible parametric distributions for the errors. These include the power exponential (Box–Tiao) and generalized t distributions, as well as a distribution for the errors which is not necessarily symmetric. The adaptive procedures are "fully iterative" rather than one-step estimators. The adaptive estimators have desirable large-sample properties, but these properties do not necessarily carry over to the small-sample case.

The Monte Carlo comparisons of the alternative estimators are based on four different specifications for the error distribution: a normal, a mixture of normals (or variance-contaminated normal), a bimodal mixture of normals, and a lognormal. Five hundred samples of size 50 are used. The adaptive and partially adaptive estimators perform very well relative to the other estimation procedures considered, and preliminary results suggest that in some important cases they can perform much better than OLS, with 50 to 80% reductions in standard errors.
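A minimal location-model analogue of this comparison fits in a few lines: the sample mean is the OLS estimator of a location parameter and the sample median the LAD estimator, and under a variance-contaminated normal the median has the smaller Monte Carlo spread. The contamination fraction and scale below are illustrative choices, not the paper's designs.

```python
import random, statistics

def mc_compare(n=50, reps=2000, p=0.1, scale=5.0, seed=0):
    """Monte Carlo spread of the sample mean (the OLS estimator of a
    location parameter) versus the sample median (the LAD estimator)
    under a variance-contaminated normal:
    errors ~ N(0,1) w.p. 1-p and N(0, scale^2) w.p. p."""
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        x = [rng.gauss(0, scale if rng.random() < p else 1.0) for _ in range(n)]
        means.append(sum(x) / n)
        medians.append(statistics.median(x))
    return statistics.pstdev(means), statistics.pstdev(medians)

sd_mean, sd_median = mc_compare()
```

With p = 0 (pure normal errors) the ranking reverses and the mean wins, which is the small-scale version of the efficiency trade-off the study quantifies.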



17.
Survival models deal with the time until the occurrence of an event of interest. However, in some situations the event may not occur in part of the studied population. The fraction of the population that will never experience the event of interest is generally called the cure rate. Models that consider this fact (cure rate models) have been extensively studied in the literature. Hypothesis testing on the parameters of these models can be performed based on likelihood ratio, gradient, score or Wald statistics. Critical values of these tests are obtained through approximations that are valid in large samples and may result in size distortion in small or moderate sample sizes. In this sense, this paper proposes bootstrap corrections to the four mentioned tests and a bootstrap Bartlett correction for the likelihood ratio statistic in the Weibull promotion time model. We also present an algorithm for bootstrap resampling when the data exhibit a cure fraction and right censoring (random and non-informative). Simulation studies are conducted to compare the finite-sample performances of the corrected tests. The numerical evidence favours the corrected tests we propose. An application to a real data set is also presented.

18.
Covariance change detection in multivariate time series
This paper studies the detection of step changes in the variances and in the correlation structure of the components of a vector of time series. Two procedures based on the likelihood ratio test (LRT) statistic and on a cumulative sums (cusum) statistic are considered and compared in a simulation study. We conclude that for a single covariance change the cusum procedure is more powerful in small and medium samples, whereas the likelihood ratio test is more powerful in large samples. However, for several covariance changes the cusum procedure works clearly better. The procedures are illustrated in two real data examples.
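The univariate special case of the cusum procedure compared here is a cusum-of-squares statistic (in the spirit of Inclán and Tiao): D_k = C_k/C_n − k/n with C_k the running sum of squares, and the change point estimated by the k maximizing |D_k|. A minimal sketch, not the paper's multivariate statistic:

```python
import random

def cusum_variance_change(x):
    """Cusum-of-squares statistic for a single variance change point:
    D_k = C_k / C_n - k / n, with C_k the running sum of squares.
    Returns the maximum |D_k| and the k at which it occurs."""
    n = len(x)
    cn = sum(v * v for v in x)
    c, best, k_hat = 0.0, 0.0, 0
    for k, v in enumerate(x, start=1):
        c += v * v
        d = abs(c / cn - k / n)
        if d > best:
            best, k_hat = d, k
    return best, k_hat

# Variance jumps (sd 1 -> 3) halfway through a series of length 400
random.seed(3)
series = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(0, 3) for _ in range(200)]
stat, k_hat = cusum_variance_change(series)
```

Under no change, D_k fluctuates around zero; a large maximum both signals a change and locates it.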

19.
In recent years much effort has been devoted to maximum likelihood estimation of generalized linear mixed models. Most of the existing methods use the EM algorithm, with various techniques in handling the intractable E-step. In this paper, a new implementation of a stochastic approximation algorithm with Markov chain Monte Carlo method is investigated. The proposed algorithm is computationally straightforward and its convergence is guaranteed. A simulation and three real data sets, including the challenging salamander data, are used to illustrate the procedure and to compare it with some existing methods. The results indicate that the proposed algorithm is an attractive alternative for problems with a large number of random effects or with high dimensional intractable integrals in the likelihood function.

20.
In this paper, the kernel density estimator for negatively superadditive dependent random variables is studied. Exponential inequalities and the exponential convergence rate for the kernel estimator of the density function, in a uniform version over compact sets, are investigated. Also, the optimal bandwidth rate of the estimator is obtained by minimizing the mean integrated squared error. The results generalize and improve those obtained for the case of associated sequences. As an application, FGM sequences that fulfil our assumptions are investigated. The convergence rate of the kernel density estimator is also illustrated via a simulation study, and a real data analysis is presented.
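For the i.i.d. Gaussian baseline, minimizing asymptotic MISE gives the familiar n^(−1/5) bandwidth rate, with Silverman's rule of thumb h = 1.06·σ·n^(−1/5) as the standard plug-in. A minimal sketch of the estimator the abstract studies (in its simplest i.i.d. form, not under negative superadditive dependence):

```python
import math, random, statistics

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth minimizing asymptotic MISE for Gaussian
    data: h = 1.06 * sigma * n**(-1/5)."""
    return 1.06 * statistics.pstdev(x) * len(x) ** (-0.2)

def kde(t, data, h):
    """Gaussian kernel density estimate evaluated at t."""
    c = len(data) * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((t - d) / h) ** 2) for d in data) / c

random.seed(4)
data = [random.gauss(0, 1) for _ in range(500)]
h = silverman_bandwidth(data)
```

The paper's contribution is that a rate of this MISE-optimal kind survives (and the uniform exponential bounds hold) when the independence assumption is relaxed to negative superadditive dependence.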
