Similar Articles
20 similar articles found (search time: 31 ms)
1.
One of the basic parameters in survival analysis is the mean residual life $M_0$. For right-censored observations, the usual empirical-likelihood-based log-likelihood ratio has a scaled $\chi_1^2$ limit distribution, and estimating the scale parameter leads to undercoverage of the corresponding confidence interval. To solve the problem, we present a log-likelihood ratio $\ell(M_0)$ using the methods of Murphy and van der Vaart (Ann Stat 1471–1509, 1997). The limit distribution of $\ell(M_0)$ is the standard $\chi_1^2$ distribution, and from it the corresponding confidence interval for $M_0$ is constructed. Since the proof of the limit distribution does not offer a computational method for maximizing the log-likelihood ratio, an EM algorithm is proposed. Simulation studies support the theoretical results.
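To make the calibration concrete, here is a minimal sketch of the uncensored analogue: the empirical log-likelihood ratio for an ordinary mean, whose $\chi_1^2$ limit yields the confidence interval. This is our own illustration, not the paper's censored-data ratio $\ell(M_0)$ or its EM step; the function name and root-bracketing offsets are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 * log empirical likelihood ratio for the mean of x at mu.

    Solves sum(z_i / (1 + lam * z_i)) = 0 for the Lagrange multiplier
    lam (z_i = x_i - mu), then returns 2 * sum(log(1 + lam * z_i)).
    """
    z = np.asarray(x) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull
    # lam must keep every implied weight positive: 1 + lam * z_i > 0
    lo, hi = -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

# 95% confidence interval: all mu with ratio below the chi2(1) quantile
x = np.random.default_rng(0).exponential(2.0, size=100)
grid = np.linspace(x.min(), x.max(), 400)
ci = grid[[el_log_ratio(x, m) <= chi2.ppf(0.95, 1) for m in grid]]
```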

2.
A two-stage procedure is studied for estimating changes in the parameters of the multi-parameter exponential family, given a sample $X_1,\dots,X_n$. The first step is a likelihood ratio test of the hypothesis $H_0$ of no change. Upon rejection of this hypothesis, the change-point index and the pre- and post-change parameters are estimated by maximum likelihood. The asymptotic ($n \to \infty$) distribution of the log-likelihood ratio statistic is obtained under both $H_0$ and local alternatives. The maximum likelihood estimators of the pre- and post-change parameters are shown to be asymptotically jointly normal. The distribution of the change-point estimate is obtained under local alternatives. Performance of the procedure for moderate samples is studied by Monte Carlo methods.
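As a concrete instance of the first-stage scan, here is a sketch for the simplest exponential-family member, a Gaussian mean change with known unit variance; the closed form below follows from maximizing the likelihood over the split point, and the function name is our own.

```python
import numpy as np

def change_point_lr(x):
    """Likelihood-ratio scan for a single mean change in a Gaussian
    sequence with known unit variance.  Returns the maximized
    -2 log LR statistic and the estimated change index k
    (x[:k] pre-change, x[k:] post-change)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    stats = np.empty(n - 1)
    for k in range(1, n):
        m1, m2 = x[:k].mean(), x[k:].mean()
        # -2 log LR at split k reduces to a weighted squared mean gap
        stats[k - 1] = k * (n - k) / n * (m1 - m2) ** 2
    k_hat = int(np.argmax(stats)) + 1
    return stats[k_hat - 1], k_hat
```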

3.

Let Y be a response and, given covariate X, suppose Y has a conditional density f(y | x, θ), where θ is an unknown p-dimensional vector of parameters and the marginal distribution of X is unknown. When responses are missing at random, we use auxiliary information and imputation to define an adjusted empirical log-likelihood ratio for the mean of Y and obtain its asymptotic distribution. A simulation study is conducted to compare the adjusted empirical likelihood and the normal approximation method in terms of coverage accuracy.
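A minimal sketch of the imputation idea, with a parametric linear working model fitted on the complete cases standing in for the paper's auxiliary-information construction; all names here are illustrative.

```python
import numpy as np

def imputed_mean(x, y, observed):
    """Estimate E[Y] under missing-at-random responses by filling each
    missing y with a linear fit of y on x from the complete cases,
    then averaging observed and imputed values."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)
    y_filled = np.where(observed, y, X @ beta)
    return y_filled.mean()
```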

4.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit Model, J. Appl. Stat. 31 (2004), pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the censored regression model using the approach of Caudill [A partially adaptive estimator for the Censored Regression Model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike's Information Criterion (AIC), the Consistent AIC (CAIC), the Bayesian Information Criterion (BIC), and Akaike's BIC (ABIC). In terms of the fewest 'rejections' of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, each presents computational difficulties.
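For reference, the standard forms of three of the criteria compared, sketched generically from a maximized log-likelihood (the ABIC penalty is not given in the abstract, so it is omitted; this is not the paper's Tobit-specific computation).

```python
import numpy as np

def info_criteria(loglik, k, n):
    """AIC, BIC and consistent AIC from a maximized log-likelihood
    loglik with k free parameters and n observations."""
    return {
        "AIC":  -2 * loglik + 2 * k,
        "BIC":  -2 * loglik + k * np.log(n),
        "CAIC": -2 * loglik + k * (np.log(n) + 1),  # Bozdogan's penalty
    }
```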

5.
Cubic B-splines are used to estimate the nonparametric component of a semiparametric generalized linear model. A penalized log-likelihood ratio test statistic is constructed for the null hypothesis that the nonparametric function is linear. When the number of knots is fixed, its limiting null distribution is that of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter is chosen by specifying a value for its asymptotically expected value under the null hypothesis. A simulation study is conducted to evaluate the power of the test, and a real-life dataset is used to illustrate its practical use.

6.
Empirical likelihood ratio confidence regions based on the chi-square calibration suffer from an undercoverage problem: their actual coverage levels tend to be lower than the nominal levels. The finite-sample distribution of the empirical log-likelihood ratio is recognized to have a mixture structure, with a continuous component on [0, +∞) and a point mass at +∞. The undercoverage problem of the chi-square calibration is partly due to its use of the continuous chi-square distribution to approximate this mixture distribution. In this article, we propose two new methods of calibration that take advantage of the mixture structure: we construct two new mixture distributions using the F and chi-square distributions and use these to approximate the mixture distribution of the empirical log-likelihood ratio. The new methods of calibration are asymptotically equivalent to the chi-square calibration, but the new methods, in particular the F-mixture-based method, can be substantially more accurate for small and moderately large sample sizes. The new methods are also as easy to use as the chi-square calibration.
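To make "calibration" concrete: the methods compare the same statistic against different thresholds. The sketch below contrasts the chi-square threshold with a Hotelling-style F threshold; the F scaling is a common small-sample correction that we assume for illustration, not the paper's mixture calibrations.

```python
from scipy.stats import chi2, f

def el_thresholds(n, p=1, level=0.95):
    """Critical values for the -2 log EL ratio of a p-dimensional mean:
    the first-order chi-square threshold and an assumed Hotelling-style
    F threshold, (n-1)p/(n-p) times an F(p, n-p) quantile."""
    c_chi2 = chi2.ppf(level, df=p)
    c_f = (n - 1) * p / (n - p) * f.ppf(level, p, n - p)
    return c_chi2, c_f

# e.g. n = 20, p = 1: the F threshold exceeds 3.84, widening the region
print(el_thresholds(20))
```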

7.
In this paper, we investigate the consistency of Expectation–Maximization (EM) algorithm-based information criteria for model selection with missing data. The criteria correspond to a penalization of the conditional expectation of the complete-data log-likelihood given the observed data, taken with respect to the conditional density of the missing data. We present asymptotic properties related to maximum likelihood estimation in the presence of incomplete data and provide sufficient conditions for the consistency of model selection by minimizing the information criteria. Their finite-sample performance is illustrated through simulation and real data studies.
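The classic instance of this setting treats mixture component labels as the missing data: EM maximizes the likelihood and an information criterion penalizes it. Below is a numpy-only sketch for a one-dimensional Gaussian mixture with BIC, a simplification of our own (the paper's criteria penalize the conditional expectation of the complete-data log-likelihood rather than the observed-data log-likelihood used here).

```python
import numpy as np
from scipy.stats import norm

def em_gmm_loglik(x, K, iters=200, seed=0):
    """Plain EM for a 1-D K-component Gaussian mixture; returns the
    maximized observed-data log-likelihood."""
    rng = np.random.default_rng(seed)
    w, mu = np.full(K, 1.0 / K), rng.choice(x, K, replace=False)
    sd = np.full(K, x.std())
    for _ in range(iters):
        dens = w * norm.pdf(x[:, None], mu, sd)       # (n, K)
        r = dens / dens.sum(axis=1, keepdims=True)    # E-step
        nk = r.sum(axis=0)                            # M-step below
        w, mu = nk / len(x), (r * x[:, None]).sum(0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk)
    dens = w * norm.pdf(x[:, None], mu, sd)
    return np.log(dens.sum(axis=1)).sum()

def pick_K(x, K_max=4):
    """Select K by BIC; a K-component mixture has 3K - 1 free parameters."""
    return min(range(1, K_max + 1),
               key=lambda K: -2 * em_gmm_loglik(x, K) + (3 * K - 1) * np.log(len(x)))
```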

8.
In this paper, we apply the empirical likelihood method to a heteroscedastic partially linear errors-in-variables model. For the cases of known and unknown error variances, two different empirical log-likelihood ratios for the parameter of interest are constructed. If the error variances are known, the empirical log-likelihood ratio is proved to have an asymptotic chi-square distribution under the assumption that the errors form a stationary α-mixing sequence. Furthermore, if the error variances are unknown, we show that the proposed statistic has an asymptotically standard chi-square distribution when the errors are independent. Simulations are carried out to assess the performance of the proposed method.

9.
The asymptotic behavior of a log-likelihood ratio statistic for testing a change in a three-parameter Weibull distribution is studied. It is shown that if the shape parameter satisfies α > 2, the law of the iterated logarithm for maximum likelihood estimators is still valid and the log-likelihood test statistic is asymptotically distributed (after an appropriate normalization) according to a Gumbel distribution.

10.
In this article, partially nonlinear models in which the response variable is measured with error and the explanatory variables are measured exactly are considered. Without specifying any error structure equation, a semiparametric dimension-reduction technique is employed. Two estimators of the unknown parameter in the nonlinear function are obtained and their asymptotic normality is proved. In addition, an empirical likelihood method for the parameter vector is provided. It is shown that the estimated empirical log-likelihood ratio has an asymptotic chi-square distribution. A simulation study indicates that, compared with the normal approximation method, the empirical likelihood method performs better in terms of coverage probabilities and average lengths of the confidence intervals.

11.
Modified formulas for the Wald and Lagrange multiplier statistics are introduced and considered together with the likelihood ratio statistic for testing a typical null hypothesis $H_0$ stated in terms of equality constraints. It is demonstrated, subject to known standard regularity conditions, that each of these statistics, as well as the known Wald statistic, has the asymptotic chi-square distribution with degrees of freedom equal to the number of equality constraints specified by $H_0$, whether the information matrix is singular or nonsingular. The results of this paper include a generalization of the results of Silvey (1959) concerning the equivalence of the Wald, Lagrange multiplier and likelihood ratio tests to the case of singular information matrices.

12.
In discriminant analysis, the dimension of the hyperplane spanned by the population mean vectors is called the dimensionality. The procedures commonly used to estimate this dimension involve testing a sequence of dimensionality hypotheses, as well as model-fitting approaches based on (consistent) Akaike's method, (modified) Mallows' method and Schwarz's method. The marginal log-likelihood (MLL) method is developed, and the asymptotic distribution of the dimensionality estimated by this method is derived for normal populations. Furthermore, a modified marginal log-likelihood (MMLL) method is also considered. The MLL method is not consistent for large samples, and two modified criteria are proposed which attain asymptotic consistency. Some comments are made on the robustness of the method to departures from normality. The operating characteristics of the various methods proposed are examined and compared.

13.
Consider the problem of classifying an incoming message as one of two known p-dimensional signals or as pure noise. Let the noise covariance matrix (assumed to be the same in all three cases) be unknown. We consider the problem of estimating the "realized signal-to-noise ratio matrix", an index of discriminatory power, under various loss functions, and optimum estimators are obtained under these loss functions. Finally, an attempt is made to provide a lower confidence bound for the realized signal-to-noise ratio matrix. In the process, the probability distribution of the smaller eigenvalue of a 2 × 2 confluent hypergeometric random matrix is obtained.

14.
A criterion for choosing an estimator in a family of semi-parametric estimators from incomplete data is proposed: the expected observed log-likelihood (ELL). Adapted versions of this criterion for censored data and in the presence of explanatory variables are exhibited. We show that likelihood cross-validation (LCV) is an estimator of ELL, and we exhibit three bootstrap estimators. A simulation study considering both families of kernel and penalized likelihood estimators of the hazard function (indexed by a smoothing parameter) demonstrates good results for LCV and for a bootstrap estimator called ELLbboot. We apply the ELLbboot criterion to compare the kernel and penalized likelihood estimators of the risk of developing dementia in women, using data from a large cohort study.
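LCV itself is short to state in code. Here is a sketch for a Gaussian kernel density estimate: the criterion is the leave-one-out log-likelihood, maximized over the smoothing parameter (the paper applies this to hazard estimators, and the ELLbboot bootstrap is not reproduced here).

```python
import numpy as np
from scipy.stats import norm

def lcv_score(x, h):
    """Leave-one-out log-likelihood of a Gaussian kernel density
    estimate with bandwidth h."""
    n = len(x)
    K = norm.pdf((x[:, None] - x[None, :]) / h) / h   # all pairwise kernels
    np.fill_diagonal(K, 0.0)                          # drop the self term
    return np.log(K.sum(axis=1) / (n - 1)).sum()

# choose the bandwidth maximizing LCV over a grid
x = np.random.default_rng(1).standard_normal(200)
grid = np.linspace(0.05, 1.0, 40)
h_best = grid[np.argmax([lcv_score(x, h) for h in grid])]
```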

15.
When there are many explanatory variables in a regression model, some of them may be intercorrelated. This is the problem of multicollinearity, which degrades the precision and accuracy of the coefficient estimates and makes the search for the best model tedious. To handle such situations, model selection criteria are applied to choose the model that best fits the data. The current study evaluates four unmodified and four modified versions of generalized information criteria: Akaike's Information Criterion (AIC), Schwarz's Bayesian Information Criterion (BIC), the Hannan–Quinn Information Criterion, and the AIC corrected for small samples. A simulation study using SAS software was carried out to compare the unmodified and modified versions of the generalized information criteria and to identify the best of the four modified criteria for selecting the best model when the collinearity assumption is violated. In the simulation, samples of size 50 and 100 for three explanatory variables X1, X2, and X3 are drawn from a normal distribution, and two degrees of collinearity between X1 and X2 are examined: ρ = 0.6 and ρ = 0.8. The outcomes are displayed in tables along with visual representations. The results reveal that the modified versions of the generalized information criteria are more sensitive than the unmodified versions in identifying models marred by high multicollinearity.
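The core of such a simulation is drawing correlated predictors and scoring candidate models. A sketch with the standard (unmodified) BIC follows, since the study's modified penalties are not given in the abstract; all names and the response equation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100, 0.8
# (X1, X2) correlated via a Cholesky factor; X3 independent
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
X = np.column_stack([rng.standard_normal((n, 2)) @ L.T,
                     rng.standard_normal(n)])
y = 1 + X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(n)   # X2 spurious

def ols_bic(cols):
    """BIC of the OLS fit on the given predictor columns."""
    Z = np.column_stack([np.ones(n), X[:, list(cols)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    k = Z.shape[1] + 1                     # coefficients + error variance
    return n * np.log((resid ** 2).mean()) + k * np.log(n)

best = min([(0,), (0, 1), (0, 2), (0, 1, 2)], key=ols_bic)
```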

16.
A multivariate modified histogram density estimate depending on a reference density g and a partition P has been proved to have good consistency properties according to several information-theoretic criteria. Given an i.i.d. sample, we show how to select both g and P automatically so that the expected $L_1$ error of the corresponding selected estimate is within a given constant multiple of the best possible error, plus an additive term which tends to zero under mild assumptions. Our method is inspired by the combinatorial tools developed by Devroye and Lugosi [Devroye, L. and Lugosi, G., 2001, Combinatorial Methods in Density Estimation (New York, NY: Springer-Verlag)] and it includes a wide range of reference density and partition models. Results of simulations are also presented.

17.
A cumulative sum control chart for the multivariate Poisson distribution (MP-CUSUM) is proposed. The MP-CUSUM chart is constructed from log-likelihood ratios with in-control parameters $\Theta_0$ and shifts to be detected quickly, $\Theta_1$. Average run length (ARL) values are obtained using a Markov chain-based method. Numerical experiments show that the MP-CUSUM chart is effective in detecting parameter shifts in terms of ARL. An MP-CUSUM chart with smaller $\Theta_1$ is more sensitive to smaller shifts, but less sensitive to greater shifts, than one with greater $\Theta_1$. A comparison shows that the proposed MP-CUSUM chart outperforms an existing MP chart.
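The chart's recursion is easiest to see in one dimension. A sketch of a univariate Poisson log-likelihood-ratio CUSUM follows; the multivariate chart replaces the Poisson pmf with the multivariate Poisson likelihood, and the threshold h would be tuned to a target in-control ARL.

```python
from scipy.stats import poisson

def poisson_cusum(x, lam0, lam1, h):
    """One-sided log-likelihood-ratio CUSUM for a Poisson mean shift
    lam0 -> lam1.  Returns the index of the first alarm, or None."""
    s = 0.0
    for t, xt in enumerate(x):
        llr = poisson.logpmf(xt, lam1) - poisson.logpmf(xt, lam0)
        s = max(0.0, s + llr)              # reflect at zero
        if s > h:
            return t
    return None
```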

18.
This paper derives the Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC) and Hannan and Quinn's information criterion for approximate factor models, assuming a large number of cross-sectional observations, and studies the consistency properties of these information criteria. It also reports extensive simulation results comparing the performance of the existing and new procedures for selecting the number of factors. The simulation results show the difficulty of determining which criterion performs best. In practice, it is advisable to consider several criteria at the same time, especially Hannan and Quinn's information criterion, Bai and Ng's ICp2 and BIC3, and Onatski's and Ahn and Horenstein's eigenvalue-based criteria. The model-selection criteria considered in this paper are also applied to Stock and Watson's two macroeconomic data sets. The results differ considerably depending on the model-selection criterion in use, but there is evidence suggesting five factors for the first data set and five to seven factors for the second.
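Of the criteria named, Bai and Ng's ICp2 is compact enough to sketch. V(k) below is the mean squared residual after removing the first k principal components and the penalty is k(N+T)/(NT) log min(N,T), following Bai and Ng (2002); the panel is assumed standardized.

```python
import numpy as np

def bai_ng_icp2(X, k_max=8):
    """Estimate the number of factors in a T x N panel X by
    minimizing Bai and Ng's ICp2 over k = 1..k_max."""
    T, N = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    ics = []
    for k in range(1, k_max + 1):
        fit = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k common component
        V_k = ((X - fit) ** 2).mean()
        ics.append(np.log(V_k) + k * (N + T) / (N * T) * np.log(min(N, T)))
    return int(np.argmin(ics)) + 1
```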

19.
Recently, van der Linde (Comput. Stat. Data Anal. 53:517–533, 2008) proposed a variational algorithm for approximate Bayesian inference in functional principal components analysis (FPCA), where the functions are observed with Gaussian noise. Generalized FPCA under different noise models with sparse longitudinal data was developed by Hall et al. (J. R. Stat. Soc. B 70:703–723, 2008), but no Bayesian approach is available yet. It is demonstrated that an adapted version of the variational algorithm can be applied to obtain a Bayesian FPCA for canonical parameter functions, in particular log-intensity functions given Poisson count data or logit-probability functions given binary observations. To this end, a second-order Taylor expansion of the log-likelihood, that is, a working Gaussian distribution and hence another step of approximation, is used. Although the approach is conceptually straightforward, difficulties can arise in practical applications depending on the accuracy of the approximation and the information in the data. A modified algorithm is introduced generally for one-parameter exponential families and exemplified for binary and count data. Conditions for its successful application are discussed and illustrated using simulated data sets. An application with real data is also presented.

20.
Morteza Amini, Statistics, 2013, 47(5): 393–405
In a sequence of bivariate random variables $\{(X_i, Y_i), i \ge 1\}$ from a continuous distribution with a real parameter θ, general comparison results are established between the amount of Fisher information about θ contained in the sequence of the first n records and their concomitants and the information in an i.i.d. sample of size n from the parent distribution. Relationships between reliability properties and the proposed criteria are obtained for situations in which the univariate counterpart of the underlying bivariate family belongs to a location, scale or shape family. It is also shown that in some classes of bivariate families, the information property in question is equivalent to that of the univariate counterpart. The proposed procedure is illustrated through several examples.
