Similar Articles

20 similar articles found (search time: 31 ms).
1.
The present study investigates the performance of five discrimination methods for data consisting of a mixture of continuous and binary variables. The methods are Fisher's linear discrimination, logistic discrimination, quadratic discrimination, a kernel model and an independence model. Six-dimensional data, consisting of three binary and three continuous variables, are simulated according to a location model. The results show an almost identical performance for Fisher's linear discrimination and logistic discrimination. Only in situations with independently distributed variables does the independence model have a reasonable discriminatory ability for the dimensionality considered. If the log likelihood ratio is non-linear with respect to its continuous and binary parts, the quadratic discrimination method is substantially better than linear and logistic discrimination, followed by the kernel method. A very good performance is obtained when, in every situation, the better of linear and quadratic discrimination is used.
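As a hedged illustration of such a comparison, the sketch below simulates mixed binary/continuous two-class data and compares holdout error rates of linear discriminant analysis, quadratic discriminant analysis and logistic discrimination with scikit-learn. The simulation design (independent features, arbitrary effect sizes) is an assumption for illustration, not the paper's location model, and the kernel and independence models are omitted.

```python
# Hedged sketch: comparing three discrimination rules on simulated
# mixed binary/continuous data. Effect sizes and independence of the
# features are illustrative assumptions, not the paper's design.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                                  # two classes
X_cont = rng.normal(loc=y[:, None] * 0.8, size=(n, 3))     # 3 continuous vars
X_bin = (rng.random((n, 3)) < 0.3 + 0.3 * y[:, None]).astype(float)  # 3 binary
X = np.hstack([X_cont, X_bin])

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("logistic", LogisticRegression(max_iter=1000))]:
    err = 1 - clf.fit(X[:400], y[:400]).score(X[400:], y[400:])
    print(f"{name}: holdout error {err:.3f}")
```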

2.
The authors study the empirical likelihood method for linear regression models. They show that when missing responses are imputed using least squares predictors, the empirical log‐likelihood ratio is asymptotically a weighted sum of chi‐square variables with unknown weights. They obtain an adjusted empirical log‐likelihood ratio which is asymptotically standard chi‐square and hence can be used to construct confidence regions. They also obtain a bootstrap empirical log‐likelihood ratio and use its distribution to approximate that of the empirical log‐likelihood ratio. A simulation study indicates that the proposed methods are comparable in terms of coverage probabilities and average lengths of confidence intervals, and perform better than a normal approximation based method.
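A minimal sketch of the imputation step described above, assuming a simple linear model with responses missing completely at random: missing responses are replaced by least-squares predictors from the complete cases, and a percentile-bootstrap interval for the slope is computed. The adjusted empirical log-likelihood ratio itself is not implemented here.

```python
# Hedged sketch: least-squares imputation of missing responses plus a
# percentile-bootstrap CI for the slope. The model and missingness
# mechanism are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
miss = rng.random(n) < 0.3                 # ~30% of responses missing
X = np.column_stack([np.ones(n), x])

beta_cc = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)[0]  # complete cases
y_imp = np.where(miss, X @ beta_cc, y)                        # imputed responses

def slope(xv, yv):
    Xv = np.column_stack([np.ones(len(xv)), xv])
    return np.linalg.lstsq(Xv, yv, rcond=None)[0][1]

boot = [slope(x[idx], y_imp[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope {slope(x, y_imp):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```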

3.
Inference for a scalar parameter in the presence of nuisance parameters requires high-dimensional integration of the joint density of the pivotal quantities. Recent developments in asymptotic methods provide accurate approximations for significance levels, and thus confidence intervals, for a scalar component parameter. In this paper, a simple, efficient and accurate numerical procedure is first developed for the location model and is then extended to the location-scale model and the linear regression model. This numerical procedure requires only a fine tabulation of the parameter and the observed log likelihood function (full, marginal or conditional) as input; its output is the corresponding significance function. Numerical results show that this approximation is not only simple but also very accurate. It outperforms the usual approximations such as the signed likelihood ratio statistic, the maximum likelihood estimate and the score statistic.
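The following sketch shows only the first-order version of the idea, assuming a logistic location model chosen purely for illustration: the observed log likelihood is tabulated on a fine grid, and the significance function is obtained as Phi(r(theta)) from the signed likelihood root. The paper's higher-order refinements are not reproduced.

```python
# Hedged sketch: significance function from a tabulated log-likelihood,
# via the (first-order) signed likelihood root. Logistic location model
# and grid limits are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.logistic(loc=1.0, size=30)                  # location-model sample
theta = np.linspace(-1, 3, 2001)                    # fine grid of the parameter
loglik = np.array([stats.logistic.logpdf(x, loc=t).sum() for t in theta])

mle = theta[np.argmax(loglik)]
# signed likelihood root r(theta) and significance function Phi(r(theta))
r = np.sign(mle - theta) * np.sqrt(2 * (loglik.max() - loglik))
signif = stats.norm.cdf(r)

# e.g. a 95% interval: thetas whose significance lies in (0.025, 0.975)
inside = theta[(signif > 0.025) & (signif < 0.975)]
print(f"MLE {mle:.3f}; first-order 95% CI ({inside.min():.3f}, {inside.max():.3f})")
```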

4.
The asymptotic distributions of two tests for sphericity, the locally most powerful invariant test and the likelihood ratio test, are derived under the general alternatives Σ ≠ σ²I. The powers of these two tests are then compared when the data are from a trivariate normal population. The bootstrap method is also used to obtain the powers, and the powers obtained by this method agree with those from the asymptotic distributions.
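A hedged sketch of the likelihood ratio statistic for sphericity, with its null distribution obtained by parametric bootstrap for a trivariate normal sample; the sample size and covariance used in the simulation are illustrative assumptions.

```python
# Hedged sketch: LRT for sphericity H0: Sigma = sigma^2 * I, calibrated
# by parametric bootstrap. Data-generating choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
p, n = 3, 50
X = rng.multivariate_normal(np.zeros(p), np.diag([1.0, 1.0, 2.0]), size=n)

def lrt_sphericity(X):
    S = np.cov(X, rowvar=False, bias=True)          # MLE of the covariance
    sign, logdet = np.linalg.slogdet(S)
    return X.shape[0] * (p * np.log(np.trace(S) / p) - logdet)

obs = lrt_sphericity(X)
sigma2 = np.trace(np.cov(X, rowvar=False, bias=True)) / p   # null MLE of sigma^2
boot = np.array([lrt_sphericity(rng.normal(scale=np.sqrt(sigma2), size=(n, p)))
                 for _ in range(2000)])
print(f"LRT {obs:.2f}, bootstrap p-value {np.mean(boot >= obs):.3f}")
```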

5.
Inverse sampling is widely applied in studies with dichotomous outcomes, especially when the subjects arrive sequentially or the response of interest is difficult to obtain. In this paper, we investigate the rate ratio test problem under inverse sampling based on the gradient statistic, using both the asymptotic method and the parametric bootstrap technique. The gradient statistic has many advantages: for example, it is simple to calculate and competitive with the Wald-type, score and likelihood ratio tests in terms of local power. Numerical studies are carried out to evaluate the performance of our gradient test and the existing Wald-type, score and likelihood ratio tests. The simulation results suggest that the gradient test based on the parametric bootstrap method has excellent type I error control and large power even in small-sample designs. Two real examples, from a heart disease study and a drug comparison study, are used to illustrate our methods.
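To illustrate the parametric bootstrap calibration idea on a simpler stand-in problem, the sketch below computes a likelihood ratio test for a Poisson rate ratio under ordinary (not inverse) sampling and calibrates it by simulating from the restricted fit; the counts and sample sizes are invented, and the gradient statistic itself is not implemented.

```python
# Hedged sketch: parametric bootstrap calibration of a rate ratio LRT.
# H0: lam1 = rho0 * lam2 with x ~ Poi(n1*lam1), y ~ Poi(n2*lam2).
# This is a simplified stand-in, not the inverse-sampling gradient test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def lrt_stat(x, y, n1, n2, rho0):
    l2_tilde = (x + y) / (n1 * rho0 + n2)               # restricted MLE
    ll = lambda l1, l2: (stats.poisson.logpmf(x, n1 * l1)
                         + stats.poisson.logpmf(y, n2 * l2))
    return 2 * (ll(x / n1, y / n2) - ll(rho0 * l2_tilde, l2_tilde))

n1 = n2 = 20
x, y = 57, 42                                           # made-up event counts
rho0 = 1.0
obs = lrt_stat(x, y, n1, n2, rho0)
l2_tilde = (x + y) / (n1 * rho0 + n2)
boot = np.array([lrt_stat(rng.poisson(n1 * rho0 * l2_tilde),
                          rng.poisson(n2 * l2_tilde), n1, n2, rho0)
                 for _ in range(5000)])
print(f"asymptotic p {stats.chi2.sf(obs, 1):.3f}, "
      f"bootstrap p {np.mean(boot >= obs):.3f}")
```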

6.
This paper characterizes the asymptotic behaviour of the likelihood ratio test statistic (LRTS) for testing homogeneity (i.e. no mixture) against gamma mixture alternatives. Under the null hypothesis, the LRTS is shown to be asymptotically equivalent to the square of Davies's Gaussian process test statistic and diverges at a log n rate to infinity in probability. Based on the asymptotic analysis, we propose and demonstrate a computationally efficient method to simulate the null distributions of the LRTS for small to moderate sample sizes.
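A rough sketch of simulating the null distribution of the LRTS, assuming a crude multi-start numerical optimisation for the two-component gamma mixture fit; this is illustrative only and far less efficient than the method the paper proposes.

```python
# Hedged sketch: brute-force simulation of the null distribution of the
# homogeneity-vs-gamma-mixture LRTS. The multi-start optimisation is a
# crude illustrative device, not the paper's efficient procedure.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
x = rng.gamma(shape=2.0, scale=1.5, size=100)

def nll_mix(p, x):
    w, a1, s1, a2, s2 = p
    dens = (w * stats.gamma.pdf(x, a1, scale=s1)
            + (1 - w) * stats.gamma.pdf(x, a2, scale=s2))
    return -np.log(np.maximum(dens, 1e-300)).sum()

def lrts(x, rng):
    a0, _, s0 = stats.gamma.fit(x, floc=0)              # null: one gamma
    ll0 = stats.gamma.logpdf(x, a0, scale=s0).sum()
    bounds = [(0.01, 0.99)] + [(0.05, 50.0)] * 4
    best = np.inf
    for _ in range(5):                                  # crude multi-start
        p0 = [rng.uniform(0.2, 0.8),
              a0 * rng.uniform(0.5, 2.0), s0,
              a0 * rng.uniform(0.5, 2.0), s0]
        fit = optimize.minimize(nll_mix, p0, args=(x,),
                                bounds=bounds, method="L-BFGS-B")
        best = min(best, fit.fun)
    return max(0.0, 2 * (-best - ll0))

obs = lrts(x, rng)
a0, _, s0 = stats.gamma.fit(x, floc=0)
null = [lrts(rng.gamma(a0, s0, size=len(x)), rng) for _ in range(200)]
print(f"LRTS {obs:.2f}, simulated p-value {np.mean(np.array(null) >= obs):.3f}")
```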

7.
Effective implementation of likelihood inference in models for high‐dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi‐squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
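The scalar-parameter case gives a compact illustration of the rescaling: under a composite (here, independence) likelihood, the statistic W behaves like (J/H) times a chi-squared variable with one degree of freedom, where H is the sensitivity and J the variability, so W·H/J restores the chi-squared limit. The sketch below assumes bivariate normal pairs with known unit variances and a common mean, and estimates J from score contributions; it is a toy version, not the paper's general construction.

```python
# Hedged sketch: rescaling a composite (independence) likelihood ratio
# statistic in the scalar case. Known unit variances are an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, rho, mu0 = 200, 0.6, 0.0
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.3, 0.3], cov, size=n)    # true common mean 0.3

mu_hat = X.mean()                                       # composite MLE of mu
cl = lambda m: stats.norm.logpdf(X, loc=m).sum()        # independence log-lik
W = 2 * (cl(mu_hat) - cl(mu0))                          # composite LRT

scores = (X - mu_hat).sum(axis=1)                       # per-pair score at mu_hat
H = 2.0 * n                                             # sensitivity (sigma = 1)
J = (scores ** 2).sum()                                 # variability estimate
W_adj = (H / J) * W                                     # restores chi2_1 limit
print(f"naive p {stats.chi2.sf(W, 1):.4f}, adjusted p {stats.chi2.sf(W_adj, 1):.4f}")
```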

8.
Several methods for testing the difference between two group means of k independent populations are compared. Simulation shows that the likelihood ratio test with the Bartlett correction factor and the t test with appropriate degrees of freedom perform better, particularly when the sample size is small. However, the latter is very good for all configurations.
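As a hedged, simplified illustration of why a degrees-of-freedom adjustment matters, the sketch below compares the empirical size of the pooled t test and the Welch (Satterthwaite) t test under unequal variances in the two-sample case; the Bartlett-corrected likelihood ratio test is not implemented.

```python
# Hedged sketch: type I error of pooled vs Welch t tests under unequal
# variances. Sample sizes and variances are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n1, n2, reps = 10, 30, 20000
rej_pooled = rej_welch = 0
for _ in range(reps):
    a = rng.normal(0, 3.0, n1)          # equal means, unequal variances
    b = rng.normal(0, 1.0, n2)
    rej_pooled += stats.ttest_ind(a, b, equal_var=True).pvalue < 0.05
    rej_welch += stats.ttest_ind(a, b, equal_var=False).pvalue < 0.05
print(f"pooled t size {rej_pooled / reps:.3f}, Welch t size {rej_welch / reps:.3f}")
```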

9.
We compare the commonly used two-step methods and joint likelihood method for joint models of longitudinal and survival data via extensive simulations. The longitudinal models include LME, GLMM, and NLME models, and the survival models include Cox models and AFT models. We find that the full likelihood method outperforms the two-step methods for various joint models, but it can be computationally challenging when the dimension of the random effects in the longitudinal model is not small. We thus propose an approximate joint likelihood method which is computationally efficient. We find that the proposed approximation method performs well in the joint model context, and it performs better for more “continuous” longitudinal data. Finally, a real AIDS data example shows that patients with higher initial viral load or lower initial CD4 are more likely to drop out earlier during an anti-HIV treatment.
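A minimal sketch of a naive two-step method, assuming invented column names and simulated data: per-subject least-squares slopes of the longitudinal trajectory are plugged into a Cox model as covariates (step 2 uses the lifelines package). The joint and approximate joint likelihood methods are beyond a short sketch.

```python
# Hedged sketch: naive two-step joint modelling. Step 1 fits per-subject
# OLS slopes; step 2 uses them as Cox covariates. Data layout and
# effect sizes are illustrative assumptions; requires `lifelines`.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
subjects = []
for i in range(200):
    b = rng.normal(0, 0.5)                        # subject-specific slope shift
    t_obs = np.arange(5.0)
    biomarker = 1.0 + (0.3 + b) * t_obs + rng.normal(0, 0.2, 5)
    slope = np.polyfit(t_obs, biomarker, 1)[0]    # step 1: per-subject slope
    T = rng.exponential(1.0 / np.exp(0.8 * b))    # survival depends on b
    subjects.append({"slope": slope, "T": min(T, 5.0), "E": int(T <= 5.0)})

df = pd.DataFrame(subjects)
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")  # step 2: Cox fit
cph.print_summary()
```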

10.
As new diagnostic tests are developed and marketed, it is very important to be able to compare the accuracies of two given continuous-scale diagnostic tests. An effective way to evaluate the difference between the diagnostic accuracies of two tests is to compare partial areas under their receiver operating characteristic curves (partial AUCs). In this paper, we review existing parametric methods. Then, we propose a new semiparametric method and a new nonparametric method to investigate the difference between two partial AUCs. For the difference between two partial AUCs under each method, we derive a normal approximation, define an empirical log-likelihood ratio, and show that the empirical log-likelihood ratio follows a scaled chi-square distribution. We construct five confidence intervals for the difference based on the normal approximation, bootstrap, and empirical likelihood methods. Finally, extensive simulation studies are conducted to compare the finite-sample performances of these intervals, and a real example is used as an application of our recommended intervals. The simulation results indicate that the proposed hybrid bootstrap and empirical likelihood intervals outperform the other existing intervals in most cases.
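A hedged sketch of one of the simpler interval constructions: a percentile bootstrap for the difference of two partial AUCs. Note that scikit-learn's roc_auc_score with max_fpr returns the McClish-standardized partial AUC, which may differ from the definitions used in the paper; the data are simulated purely for illustration.

```python
# Hedged sketch: percentile-bootstrap CI for a difference of two
# (standardized) partial AUCs. Simulated scores; FPR cut-off of 0.2
# is an illustrative assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 300
y = rng.integers(0, 2, n)
score1 = y * 1.2 + rng.normal(size=n)             # two competing tests
score2 = y * 0.8 + rng.normal(size=n)

def diff_pauc(idx):
    return (roc_auc_score(y[idx], score1[idx], max_fpr=0.2)
            - roc_auc_score(y[idx], score2[idx], max_fpr=0.2))

boot = []
while len(boot) < 2000:
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) == 2:               # need both classes present
        boot.append(diff_pauc(idx))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"pAUC difference {diff_pauc(np.arange(n)):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```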

11.
A frequently encountered statistical problem is to determine whether the variability among k populations is heterogeneous. If the populations are measured on different scales, comparing variances may not be appropriate. In this case, coefficients of variation (CVs) can be compared instead, because the CV is unitless. In this paper, a non-parametric test is introduced for whether the CVs of k populations differ. Under the assumption that the populations are independent and normally distributed, the Miller test, the Feltz and Miller test, a saddlepoint-based test, the log likelihood ratio test and the proposed simulated Bartlett-corrected log likelihood ratio test are derived. Simulation results show the extreme accuracy of the simulated Bartlett-corrected log likelihood ratio test when the model is correctly specified. If the model is mis-specified and the sample size is small, the proposed test still gives good results. However, with a mis-specified model and a large sample size, the non-parametric test is recommended.
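The sketch below illustrates a simulation-based test of equal CVs under normality, assuming a pooled-CV null model and using the range of sample CVs as the statistic; it is an illustrative stand-in, not the Bartlett-corrected likelihood ratio test or the non-parametric test of the paper.

```python
# Hedged sketch: simulation-based test for equal coefficients of
# variation across k normal populations. The statistic (range of sample
# CVs) and the pooled-CV null model are illustrative choices.
import numpy as np

rng = np.random.default_rng(10)
groups = [rng.normal(10, 2, 25), rng.normal(50, 12, 30), rng.normal(20, 4, 20)]

cv = lambda g: g.std(ddof=1) / g.mean()
stat = lambda gs: np.ptp([cv(g) for g in gs])           # range of sample CVs

obs = stat(groups)
cv0 = np.mean([cv(g) for g in groups])                  # pooled CV under H0
sims = []
for _ in range(5000):
    sim = [rng.normal(g.mean(), cv0 * g.mean(), len(g)) for g in groups]
    sims.append(stat(sim))
print(f"CV range {obs:.3f}, simulated p-value {np.mean(np.array(sims) >= obs):.3f}")
```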

12.
The exact confidence region for log relative potency resulting from likelihood score methods (Williams (1988) An exact confidence interval for the relative potency estimated from a multivariate bioassay, Biometrics, 44:861-868) will very likely consist of two disjoint confidence intervals. The two methods proposed by Williams, which aim to select just one (the same) confidence interval from the confidence region, are nearly, but not completely, consistent. The likelihood score interval and the likelihood ratio interval are asymptotically equivalent. Williams's very strong claim concerning the confidence coefficient in the second selection method remains theoretically unproved; yet simulations show that it holds for a wide range of practical experimental situations.
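The sketch below shows, in a Fieller-type toy problem with known unit variances, how inverting a likelihood ratio test for a ratio of normal means over a grid can yield a confidence region made of two disjoint intervals; the estimates are invented and the bioassay structure of Williams's problem is not modelled.

```python
# Hedged sketch: grid inversion of the LRT for rho = mu1/mu2 with
# X1 ~ N(mu1, 1), X2 ~ N(mu2, 1) (known variances, invented estimates).
# The restricted MLE of mu2 under mu1 = rho*mu2 has a closed form.
import numpy as np
from scipy import stats

x1, x2 = 2.5, 0.5                         # estimates of mu1, mu2 (se = 1 each)
rho = np.linspace(-30, 30, 60001)
m = (rho * x1 + x2) / (rho ** 2 + 1)      # restricted MLE of mu2
W = (x1 - rho * m) ** 2 + (x2 - m) ** 2   # -2 log likelihood ratio

region = rho[W <= stats.chi2.ppf(0.95, 1)]
breaks = np.where(np.diff(region) > 2 * (rho[1] - rho[0]))[0]
for piece in np.split(region, breaks + 1):
    print(f"interval within grid: [{piece[0]:.2f}, {piece[-1]:.2f}]")
```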

13.
We suggest locally parametric methods for estimating curves, such as boundaries of density supports or fault lines in response surfaces, in a variety of spatial problems. The methods are based on spatial approximations to the local likelihood that the curve passes through a given point in the plane, as a function of that point. The local likelihood might be a regular likelihood computed locally, with kernel weights (e.g. in the case of support boundary estimation) or a local version of a likelihood ratio statistic (e.g. in fault line estimation). In either case, the local likelihood surface represents a function which is relatively large near the target curve, and relatively small elsewhere. Therefore, the curve may be estimated as a ridge line of the surface; we require only a numerical algorithm for tracking the projection of a ridge into the plane. This approach offers several potential advantages over alternative methods. First, the local (log-)likelihood surface can be graphed, and the degree of 'ridginess' assessed visually, to determine how the level of local smoothing should be varied in different spatial locations in order to emphasize the ridge and hence the curve adequately. Secondly, the local likelihood surface does not need to be computed in anything like its entirety; once we have a reasonable approximation to a point on the curve we may track it by numerically 'walking along' the ridge line. Thirdly, the method is appropriate without change for many different types of spatial explanatory variables—gridded, stochastic or otherwise. Three examples are explored in detail; fault lines in response surfaces and in intensity or density surfaces, and boundaries of supports of probability densities.
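A minimal sketch of the ridge-tracking step, assuming the local likelihood surface has already been tabulated on a grid (here a toy surface with a known curved ridge): starting from the first column's maximum, the tracker walks column by column, searching only within a window around the previous ridge point.

```python
# Hedged sketch: walking along the ridge of a tabulated surface. The
# surface below is a toy stand-in for a local likelihood surface; the
# window width w is an illustrative tuning choice.
import numpy as np

xg = np.linspace(0, 1, 200)
yg = np.linspace(0, 1, 200)
XX, YY = np.meshgrid(xg, yg, indexing="ij")
true_curve = 0.3 + 0.4 * xg ** 2
surface = np.exp(-((YY - (0.3 + 0.4 * XX ** 2)) ** 2) / 0.01)  # ridge on curve

ridge = np.empty(len(xg), dtype=int)
ridge[0] = surface[0].argmax()                    # initialise at first column
w = 5                                             # search window (grid cells)
for i in range(1, len(xg)):
    lo = max(ridge[i - 1] - w, 0)
    hi = min(ridge[i - 1] + w + 1, len(yg))
    ridge[i] = lo + surface[i, lo:hi].argmax()    # walk along the ridge

err = np.abs(yg[ridge] - true_curve).max()
print(f"max tracking error {err:.4f} (grid spacing {yg[1] - yg[0]:.4f})")
```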

14.
The authors consider the empirical likelihood method for the regression model of mean quality‐adjusted lifetime with right censoring. They show that an empirical log‐likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi‐squared random variables. They adjust this empirical log‐likelihood ratio so that the limiting distribution is a standard chi‐square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.

15.
Changepoint Analysis as a Method for Isotonic Inference
Concavity and sigmoidicity hypotheses are developed as a natural extension of the simple ordered hypothesis for normal means. These hypotheses give reasonable shape constraints for obtaining a smooth response curve in non-parametric input-output analysis. The slope change and inflection point models are introduced correspondingly as the corners of the polyhedral cones defined by those isotonic hypotheses. A maximal contrast type test is then derived systematically as the likelihood ratio test for each of those changepoint hypotheses. The test is also justified for the original isotonic hypothesis by a complete class lemma. The component variables of the resulting test statistic have a second- or third-order Markov property which, together with an appropriate non-linear transformation, leads to an exact and very efficient algorithm for the probability calculation. Some considerations on the power of the test are given, showing this to be a very promising approach to isotonic inference.
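The sketch below shows the maximal-contrast pattern in the simplest mean-change setting with known unit variance: the statistic is the maximum of standardized cumulative-sum contrasts, and its null distribution is obtained by Monte Carlo. The slope-change and inflection-point contrasts of the paper follow the same template with different contrast vectors.

```python
# Hedged sketch: maximal-contrast changepoint statistic for a shift in
# mean (known variance 1), calibrated by Monte Carlo. A simplified
# analogue of the paper's slope-change construction.
import numpy as np

rng = np.random.default_rng(11)

def max_contrast(y):
    n = len(y)
    k = np.arange(1, n)
    cs = np.cumsum(y - y.mean())[:-1]                  # contrasts at each split
    return np.max(np.abs(cs) / np.sqrt(k * (n - k) / n))  # standardized

y = np.concatenate([rng.normal(0, 1, 30), rng.normal(1, 1, 30)])
obs = max_contrast(y)
null = np.array([max_contrast(rng.normal(0, 1, len(y))) for _ in range(5000)])
print(f"statistic {obs:.2f}, Monte Carlo p-value {np.mean(null >= obs):.4f}")
```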

16.
This article considers Robins's marginal and nested structural models in the cross‐sectional setting and develops likelihood and regression estimators. First, a nonparametric likelihood method is proposed by retaining a finite subset of all inherent and modelling constraints on the joint distributions of potential outcomes and covariates under a correctly specified propensity score model. A profile likelihood is derived by maximizing the nonparametric likelihood over these joint distributions subject to the retained constraints. The maximum likelihood estimator is intrinsically efficient based on the retained constraints and weakly locally efficient. Second, two regression estimators, named hat and tilde, are derived as first‐order approximations to the likelihood estimator under the propensity score model. The tilde regression estimator is intrinsically and weakly locally efficient and doubly robust. The methods are illustrated by data analysis for an observational study on right heart catheterization. The Canadian Journal of Statistics 38: 609–632; 2010 © 2010 Statistical Society of Canada
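For orientation, here is a hedged sketch of a generic doubly robust (AIPW) estimator of a mean potential outcome under a fitted propensity score model; it conveys the general idea behind double robustness, not the paper's likelihood-based hat and tilde estimators.

```python
# Hedged sketch: AIPW (doubly robust) estimate of E[Y(1)] with a
# logistic propensity model and a linear outcome model. Data-generating
# choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(12)
n = 2000
x = rng.normal(size=(n, 2))
ps = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))     # true propensity
a = rng.random(n) < ps                                # treatment indicator
y = 1.0 + x @ np.array([1.0, -1.0]) + a * 2.0 + rng.normal(size=n)

pi = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]  # fitted propensity
m1 = LinearRegression().fit(x[a], y[a]).predict(x)          # E[Y | A=1, X]

aipw = np.mean(a * y / pi - (a - pi) / pi * m1)             # E[Y(1)] estimate
print(f"AIPW estimate of E[Y(1)]: {aipw:.3f} (truth 3.0)")
```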

17.
Shibin Zhang & Xuming He, Statistics (2016), 50(3): 667-688
Probability transform-based inference, for example, characteristic function-based inference, is a good alternative to likelihood methods when the probability density function is unavailable or intractable. However, a set of grids needs to be determined to provide an effective estimator based on probability transforms. This paper is concerned with parametric inference based on adaptive selection of grids. By employing a closeness measure to evaluate the asymptotic variance of the transform-based estimator, we propose a statistical inference procedure, accompanied with adaptive grid selection. The selection algorithm aims for a small set of grids, and yet the resulting estimator can be highly efficient. Generally, the asymptotic variance is very close to that of the maximum likelihood estimator.
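A hedged sketch of characteristic function-based estimation with a fixed grid, assuming normal data: parameters are chosen to match the empirical characteristic function at a handful of grid points by least squares. The paper's contribution, adaptive selection of that grid, is not implemented.

```python
# Hedged sketch: minimum-distance estimation via the empirical
# characteristic function at a fixed (non-adaptive) grid. Normal model
# and grid points are illustrative assumptions.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(13)
x = rng.normal(loc=2.0, scale=1.5, size=500)
t = np.array([0.2, 0.5, 1.0, 1.5])                    # grid of CF arguments

ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)        # empirical CF at grid

def objective(p):
    mu, sigma = p
    cf = np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)  # normal CF
    d = cf - ecf
    return np.sum(d.real ** 2 + d.imag ** 2)

res = optimize.minimize(objective, x0=[0.0, 1.0],
                        bounds=[(None, None), (1e-6, None)])
print(f"CF-based estimates: mu {res.x[0]:.3f}, sigma {res.x[1]:.3f}")
```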

18.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the problem of the usual nonparametric maximum likelihood estimator that cannot produce a positive probability at the location of null effects in the process of estimating nonparametrically a mixing distribution. The profile likelihood is further used to help produce a range of null proportion values, corresponding to which the density estimates are all consistent. With a proper choice of a threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method has an apparently convergent trend in all cases studied and performs favourably when compared with existing methods in the literature.
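For context, the sketch below computes a classical tail-based estimator of the null proportion (Storey's estimator) from simulated p-values; it only illustrates the quantity being estimated and is not the paper's nonparametric-mixture profile-likelihood method.

```python
# Hedged sketch: Storey-type estimate of the null proportion pi0 from
# two-sided p-values of a normal mixture of test statistics. The mixture
# parameters and the tuning constant lambda are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
m, pi0_true = 5000, 0.8
z = np.where(rng.random(m) < pi0_true,
             rng.normal(0, 1, m),          # null effects
             rng.normal(2.5, 1, m))        # non-null effects
pvals = 2 * stats.norm.sf(np.abs(z))

lam = 0.5
pi0_hat = np.mean(pvals > lam) / (1 - lam)  # tail-based estimator
print(f"estimated null proportion {pi0_hat:.3f} (truth {pi0_true})")
```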

19.
Sample entropy based tests, methods of sieves and Grenander estimation type procedures are known to be very efficient tools for assessing the normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss the difficulties in constructing density based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to satisfy the known concept related to bivariate histogram density estimation. Monte Carlo results show that the new density based EL ratio tests for bivariate normality behave very well for finite sample sizes. To exemplify the applicability of the proposed approach, we demonstrate it on a real data example.
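The one-dimensional ancestor of these tests is easy to sketch: Vasicek's sample-entropy statistic for normality, calibrated here by Monte Carlo. The spacing parameter m and the sample size are illustrative choices; the paper's bivariate construction is substantially more involved.

```python
# Hedged sketch: Vasicek's sample-entropy test of (univariate)
# normality with a Monte Carlo critical value. m = 3 is an illustrative
# spacing choice.
import numpy as np

rng = np.random.default_rng(15)

def vasicek_stat(x, m=3):
    n = len(x)
    xs = np.sort(x)
    upper = xs[np.minimum(np.arange(n) + m, n - 1)]   # boundary-adjusted
    lower = xs[np.maximum(np.arange(n) - m, 0)]
    H = np.mean(np.log(n / (2 * m) * (upper - lower)))  # entropy estimate
    return np.exp(H) / x.std(ddof=1)                    # large under normality

x = rng.exponential(size=50)                # clearly non-normal sample
obs = vasicek_stat(x)
null = np.array([vasicek_stat(rng.normal(size=len(x))) for _ in range(5000)])
crit = np.quantile(null, 0.05)              # reject for small values
print(f"statistic {obs:.3f}, 5% critical value {crit:.3f}, "
      f"reject normality: {obs < crit}")
```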

20.
Regression analyses are commonly performed with doubly limited continuous dependent variables, for instance when modeling the behavior of rates, proportions and income concentration indices. Several models are available in the literature for use with such variables, one of them being the unit gamma regression model. In all such models, parameter estimation is typically performed by the maximum likelihood method, and testing inferences on the model's parameters are usually based on the likelihood ratio test. Such a test can, however, deliver quite imprecise inferences when the sample size is small. In this paper, we propose two modified likelihood ratio test statistics for use with unit gamma regressions that deliver much more accurate inferences when the number of data points is small. Numerical (i.e. simulation) evidence is presented for both fixed dispersion and varying dispersion models, and also for tests that involve nonnested models. We also present and discuss two empirical applications.
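A hedged sketch of a simulation-based (bootstrap Bartlett-type) correction of the likelihood ratio test, using an ordinary gamma GLM with log link as a stand-in for the unit gamma regression (which statsmodels does not provide); the mean correction divides the statistic by its simulated null mean.

```python
# Hedged sketch: bootstrap Bartlett-type correction of the LRT in a
# gamma GLM with log link (a crude stand-in for unit gamma regression).
# Requires statsmodels; the shape estimate below is deliberately crude.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(16)
n = 30
x = rng.normal(size=n)
mu = np.exp(1.0 + 0.0 * x)                 # H0 true: no covariate effect
shape = 2.0
y = rng.gamma(shape, mu / shape)

def lr(y, x):
    fam = sm.families.Gamma(link=sm.families.links.Log())
    full = sm.GLM(y, sm.add_constant(x), family=fam).fit()
    null = sm.GLM(y, np.ones((len(y), 1)), family=fam).fit()
    return 2 * (full.llf - null.llf), null

obs, null_fit = lr(y, x)
k_hat = 1.0 / null_fit.scale               # crude shape estimate from dispersion
mu_hat = null_fit.fittedvalues
sims = np.array([lr(rng.gamma(k_hat, mu_hat / k_hat), x)[0] for _ in range(500)])
corrected = obs / sims.mean()              # mean correction, df = 1
print(f"raw p {stats.chi2.sf(obs, 1):.3f}, "
      f"corrected p {stats.chi2.sf(corrected, 1):.3f}")
```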
