Similar Literature (20 results)
1.
Capture-recapture methodology sensu lato has grown since 1965 at a rate of 6.6% per year. Many generalizations have appeared, together with a progressive standardization in statistical approaches. This process of growth, generalization, and standardization has fairly close parallels in the history of tools used in human health studies. Three lines of development are considered in this review: the consolidation of recent results, the prospect of broad generalizations, and problems associated with the transfer of knowledge. A broader use of contingency table techniques and of likelihood ideas to increase robustness illustrates what may be expected from consolidation. In terms of generalizations, it seems we have entered an era of diversification, with great expectations for random effects models, individual covariate models and competing event models, all of which will make it possible to approach more successfully biological questions dealing with individual variability. The transfer of knowledge has obviously been slower than in human health studies. One of the responsibilities of statisticians is to try to accelerate it.

2.
In applications, multivariate failure time data appear when each study subject may potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times may be clustered. Three types of marginal accelerated failure time models, dealing with multiple events data, recurrent events data and clustered events data, are considered. We propose a unified empirical likelihood inferential procedure for the three types of models based on the rank estimation method. The resulting log-empirical likelihood ratios are shown to possess chi-squared limiting distributions. These properties can be used to perform tests and construct confidence regions without the need to solve the rank estimating equations or to estimate the limiting variance-covariance matrices. The related computation is easy to implement. The proposed method is illustrated by extensive simulation studies and a real example.
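
The empirical likelihood machinery above is easiest to see in its simplest setting. Below is a minimal sketch of Owen-style empirical likelihood for a univariate mean; the paper's actual setting (rank estimating equations for marginal AFT models) replaces the mean constraint with the rank estimating function, but the chi-squared calibration works the same way. Data and names here are illustrative, not the paper's.

```python
# Minimal sketch: -2 log empirical likelihood ratio for a mean (Owen-style),
# calibrated against chi-squared with 1 degree of freedom.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0."""
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:          # mu0 outside the convex hull
        return np.inf
    # the Lagrange multiplier must keep every weight positive: 1 + lam*z_i > 0
    lo = (-1.0 / z.max()) + 1e-10
    hi = (-1.0 / z.min()) - 1e-10
    score = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(score, lo, hi)               # root of the profiled score
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)
stat = el_log_ratio(x, mu0=1.0)
print(f"-2 log ELR = {stat:.3f}, p-value = {chi2.sf(stat, df=1):.3f}")
```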

3.
The profile likelihood function is often criticized for giving strange or unintuitive results. In the cases discussed here these are due to the use of density functions that have singularities. These singularities are naturally inherited by the profile likelihood function. It is therefore apparently important to be reminded that likelihood functions are proportional to probability functions, and so cannot have singularities. When this issue is addressed, then the profile likelihood poses no problems of this sort. This is of particular importance since the profile likelihood is a commonly used method for dealing with separate estimation of parameters.
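
A standard worked example makes the point concrete. With a single observation y from a normal model, the density-based "likelihood" is unbounded, while the probability of the observable event (y recorded to some finite precision ε) is not. The sketch below states both, with φ and Φ denoting the standard normal density and distribution function.

```latex
% Density-based likelihood: singular as sigma -> 0 along mu = y
L_{\mathrm{dens}}(\mu,\sigma)
  = \frac{1}{\sigma}\,\phi\!\left(\frac{y-\mu}{\sigma}\right)
  \longrightarrow \infty
  \qquad (\mu = y,\ \sigma \to 0),

% Probability-based likelihood for y recorded to precision eps: always bounded
L_{\mathrm{prob}}(\mu,\sigma)
  = \Phi\!\left(\frac{y+\varepsilon-\mu}{\sigma}\right)
  - \Phi\!\left(\frac{y-\varepsilon-\mu}{\sigma}\right)
  \;\le\; 1 .
```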

4.
A log-linear modelling approach is proposed for dealing with polytomous, unordered exposure variables in case-control epidemiological studies with matched pairs. Hypotheses concerning epidemiological parameters are shown to be expressible in terms of log-linear models for the expected frequencies of the case-by-control square concordance table representation of the matched data; relevant maximum likelihood estimates and goodness-of-fit statistics are presented. Possible extensions to account for ordered categorical risk factors and multiple controls are illustrated, and comparisons with previous work are discussed. Finally, the possibility of implementing the proposed method with GLIM is illustrated within the context of a data set already analyzed by other authors.
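
For readers who want to reproduce the GLIM-style computation in a modern environment, here is a minimal sketch of the standard equivalence: a log-linear model for a square concordance table fitted by Poisson regression. The 3x3 table and the choice to fit only the independence model are illustrative; the article's epidemiological parameterizations would add further terms.

```python
# Minimal sketch: log-linear (Poisson GLM) fit to a case-by-control
# concordance table; the table values below are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# 3x3 concordance table: rows = case exposure level, cols = control exposure level
counts = np.array([[20, 5, 3], [9, 30, 6], [4, 8, 25]])
case, control = np.meshgrid(range(3), range(3), indexing="ij")
df = pd.DataFrame({"n": counts.ravel(),
                   "case": case.ravel(), "control": control.ravel()})

# independence model: log mu = row effects + column effects; lack of fit
# signals a case-control association to be modelled further
X = pd.get_dummies(df[["case", "control"]].astype(str), drop_first=True).astype(float)
fit = sm.GLM(df["n"], sm.add_constant(X), family=sm.families.Poisson()).fit()
print("deviance (goodness of fit):", round(fit.deviance, 2))
```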

5.
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
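
Since the Gaussian process prior is the central modelling ingredient, a minimal sketch of drawing sample paths from such a prior may help fix ideas. The squared-exponential covariance and its hyperparameters below are illustrative, not the paper's calibrated values.

```python
# Minimal sketch: sample functions from a Gaussian process prior with a
# squared-exponential covariance, via a Cholesky factor of the kernel matrix.
import numpy as np

def se_kernel(x1, x2, variance=1.0, length_scale=0.5):
    """k(x, x') = v * exp(-(x - x')^2 / (2 l^2))."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
K = se_kernel(x, x) + 1e-8 * np.eye(x.size)      # jitter for numerical stability
L = np.linalg.cholesky(K)
samples = L @ rng.standard_normal((x.size, 3))   # three prior draws
print(samples.shape)  # (200, 3): each column is one function drawn from the prior
```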

6.
The present article discusses alternative regression models and estimation methods for dealing with multivariate fractional response variables. Both conditional mean models, estimable by quasi-maximum likelihood, and fully parametric models (Dirichlet and Dirichlet-multinomial), estimable by maximum likelihood, are considered. A new parameterization is proposed for the parametric models, which accommodates the most common specifications for the conditional mean (e.g., multinomial logit, nested logit, random parameters logit, dogit). The text also discusses at some length the specification analysis of fractional regression models, proposing several tests that can be performed through artificial regressions. Finally, an extensive Monte Carlo study evaluates the finite sample properties of most of the estimators and tests considered.
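
A minimal sketch of the quasi-maximum likelihood idea for a multinomial logit conditional mean follows. The simulation design, dimensions and the Dirichlet data-generating mechanism are illustrative only; the estimator is consistent under correct specification of the conditional mean regardless of how the shares are actually distributed.

```python
# Minimal sketch: quasi-ML for fractional shares y_i (summing to one over J
# categories) with E[y_ij | x_i] = softmax_j(x_i' b_j), base category J.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n, J = 500, 3
X = np.c_[np.ones(n), rng.standard_normal(n)]
B_true = np.array([[0.5, -0.3],      # intercepts for categories 1..J-1
                   [1.0, -0.8]])     # slopes for categories 1..J-1

def means(b_flat):
    B = b_flat.reshape(X.shape[1], J - 1)
    eta = np.c_[X @ B, np.zeros(n)]                   # base category gets eta = 0
    e = np.exp(eta - eta.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

# simulate fractional outcomes from a Dirichlet centred on the true means
G = means(B_true.ravel())
y = np.vstack([rng.dirichlet(20.0 * g) for g in G])

neg_qll = lambda b: -np.sum(y * np.log(means(b)))     # Bernoulli-type quasi-log-lik
fit = minimize(neg_qll, np.zeros(X.shape[1] * (J - 1)), method="BFGS")
print(np.round(fit.x.reshape(X.shape[1], J - 1), 2))  # close to B_true
```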

7.
Paired binary data arise naturally when paired body parts are investigated in clinical trials. One of the widely used models for dealing with this kind of data is the equal correlation coefficients model. Before using this model, it is necessary to test whether the correlation coefficients in each group are actually equal. In this paper, three test statistics (likelihood ratio test, Wald-type test, and score test) are derived for this purpose. The simulation results show that the score test statistic maintains the type I error rate and has satisfactory power, and it is therefore recommended among the three methods. The likelihood ratio test is overly conservative in most cases, and the Wald-type statistic is not robust in terms of empirical type I error. Three real examples, including a multi-centre Phase II double-blind placebo randomized controlled trial, are given to illustrate the three proposed test statistics.

8.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary reference value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
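
The importance-sampling identity behind MCML is easy to demonstrate on a toy conjugate model where everything is available in closed form. The sketch below illustrates the general idea only, not the paper's asymptotic analysis; the model, reference value and simulation sizes are all illustrative.

```python
# Minimal MCML sketch for a toy latent variable model:
#   z_i ~ N(theta, 1),  y_i | z_i ~ N(z_i, 1).
# The log-likelihood ratio l(theta) - l(psi) is approximated by Monte Carlo
# averages of complete-data importance ratios, with the latent z_i drawn from
# their conditional law given y_i at the reference value psi (closed form
# here).  Marginally y_i ~ N(theta, 2), so the exact MLE is the sample mean,
# which gives a built-in correctness check.
import numpy as np

rng = np.random.default_rng(2)
theta_true, n, M = 1.5, 300, 5000
y = rng.normal(theta_true, np.sqrt(2.0), size=n)

psi = 0.0                                      # arbitrary reference parameter
# z_i | y_i, psi ~ N((psi + y_i)/2, 1/2)
z = rng.normal((psi + y) / 2.0, np.sqrt(0.5), size=(M, n))

def mcml_loglik_ratio(theta):
    # complete-data ratio f_theta(y, z) / f_psi(y, z); the y|z factor cancels
    log_ratio = (theta - psi) * z - (theta ** 2 - psi ** 2) / 2.0
    # average over the M simulations, then sum over independent observations
    return np.log(np.exp(log_ratio).mean(axis=0)).sum()

grid = np.linspace(0.5, 2.5, 201)
theta_hat = grid[np.argmax([mcml_loglik_ratio(t) for t in grid])]
print(f"MCML estimate: {theta_hat:.3f}, exact MLE (sample mean): {y.mean():.3f}")
```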

9.
The introduction of software to calculate maximum likelihood estimates for mixed linear models has made likelihood estimation a practical alternative to methods based on sums of squares. Likelihood based tests and confidence intervals, however, may be misleading in problems with small sample sizes. This paper discusses an adjusted version of the directed log-likelihood statistic for mixed models that is highly accurate for testing one parameter hypotheses. Indroduced by Skovgaard (1996, Journal of the Bernoulli Society,2,145-165), we show in mixed models that the statistic has a simple conpact from that may be obtained from standard software. Simulation studies indicate that this statistic is more accurate than many of the specialized procedure that have been advocated.  相似文献   

10.
Sparsity-inducing penalties are useful tools for variable selection and are also effective for regression problems where the data are functions. We consider the problem of selecting not only variables but also decision boundaries in multiclass logistic regression models for functional data, using sparse regularization. The parameters of the functional logistic regression model are estimated in the framework of the penalized likelihood method with the sparse group lasso-type penalty, and tuning parameters for the model are then selected using a model selection criterion. The effectiveness of the proposed method is investigated through simulation studies and the analysis of a gene expression data set.
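
A minimal sketch of the sparse group lasso ingredient is given below, for a plain binary logistic regression with grouped coefficients rather than the paper's multiclass functional model; group structure, step size and penalty levels are illustrative. The proximal operator applies elementwise soft-thresholding followed by groupwise shrinkage.

```python
# Minimal sketch: proximal gradient descent for logistic regression with a
# sparse group lasso penalty lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2.
import numpy as np

def prox_sparse_group_lasso(beta, groups, lam1, lam2, step):
    """Prox of step * (lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2)."""
    # elementwise soft-thresholding (lasso part) ...
    b = np.sign(beta) * np.maximum(np.abs(beta) - step * lam1, 0.0)
    # ... then groupwise shrinkage (group lasso part)
    for g in groups:
        norm = np.linalg.norm(b[g])
        b[g] = 0.0 if norm <= step * lam2 else (1.0 - step * lam2 / norm) * b[g]
    return b

def fit(X, y, groups, lam1=0.01, lam2=0.05, step=0.01, iters=2000):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # logistic mean
        grad = X.T @ (p - y) / len(y)             # gradient of the neg. log-lik
        beta = prox_sparse_group_lasso(beta - step * grad, groups, lam1, lam2, step)
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 12))
beta_true = np.r_[1.5, -2.0, 1.0, np.zeros(9)]    # only group 0 is active
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9), np.arange(9, 12)]
print(np.round(fit(X, y, groups), 2))             # inactive groups shrink to 0
```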

11.
Variable selection is an effective methodology for dealing with models with numerous covariates. We consider the methods of variable selection for semiparametric Cox proportional hazards model under the progressive Type-II censoring scheme. The Cox proportional hazards model is used to model the influence coefficients of the environmental covariates. By applying Breslow’s “least information” idea, we obtain a profile likelihood function to estimate the coefficients. Lasso-type penalized profile likelihood estimation as well as stepwise variable selection method are explored as means to find the important covariates. Numerical simulations are conducted and Veteran’s Administration Lung Cancer data are exploited to evaluate the performance of the proposed method.  相似文献   

12.
Random effects models play a critical role in modelling longitudinal data. However, there have been few studies of kernel-based maximum likelihood methods for semiparametric random effects models. In this paper, based on kernel and likelihood methods, we propose a pooled global maximum likelihood method for partial linear random effects models. The pooled global maximum likelihood method employs local approximations of the nonparametric function at a group of grid points simultaneously, instead of at one point. Gaussian quadrature is used to approximate the integration of the likelihood with respect to the random effects. The asymptotic properties of the proposed estimators are rigorously studied. Simulation studies are conducted to demonstrate the performance of the proposed approach. We also apply the proposed method to analyse correlated medical costs in the Medical Expenditure Panel Survey data set.
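
The Gaussian quadrature step can be illustrated in isolation. Below is a minimal sketch of Gauss-Hermite quadrature for the marginal likelihood of one cluster in a random-intercept logistic model; the model and the number of quadrature nodes are illustrative.

```python
# Minimal sketch: integrate a cluster likelihood over a Gaussian random
# intercept b ~ N(0, sigma_b^2) with Gauss-Hermite quadrature.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(30)   # physicists' Hermite

def cluster_marginal_lik(y, eta, sigma_b):
    """P(y_cluster) = int prod_j p(y_j | eta_j + b) N(b; 0, sigma_b^2) db."""
    total = 0.0
    for x, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * x                 # change of variables
        p = 1.0 / (1.0 + np.exp(-(eta + b)))
        total += w * np.prod(p ** y * (1.0 - p) ** (1 - y))
    return total / np.sqrt(np.pi)                      # quadrature normalization

y = np.array([1, 0, 1, 1])             # binary responses within one cluster
eta = np.array([0.2, -0.4, 0.1, 0.5])  # fixed-effect linear predictors
print(cluster_marginal_lik(y, eta, sigma_b=1.0))
```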

13.
In this paper, a unified maximum marginal likelihood estimation procedure is proposed for the analysis of right censored data using general partially linear varying-coefficient transformation models (GPLVCTM), which are flexible enough to include many survival models as special cases. Unknown functional coefficients in the models are approximated by cubic B-spline polynomials. We estimate the B-spline coefficients and regression parameters by maximizing the marginal likelihood function. One advantage of this procedure is that it is free of both the baseline and censoring distributions. Through simulation studies and a real data application (the Veterans' Administration Lung Cancer Study Clinical Trial data), we illustrate that the proposed estimation procedure is accurate, stable and practical.
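
A minimal sketch of the B-spline approximation device: each unknown functional coefficient is represented as a linear combination of cubic B-spline basis functions, so estimating the function reduces to estimating a coefficient vector. Knot placement and basis size below are illustrative.

```python
# Minimal sketch: build a cubic B-spline design matrix; any candidate
# coefficient function is then B @ gamma, and gamma is what gets estimated.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                     # cubic
interior = np.linspace(0.2, 0.8, 4)       # interior knots (illustrative)
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]   # clamped knot vector
n_basis = len(t) - k - 1

def design_matrix(x):
    """len(x) x n_basis matrix of B-spline basis functions evaluated at x."""
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0                        # pick out the j-th basis function
        B[:, j] = BSpline(t, c, k, extrapolate=False)(x)
    return np.nan_to_num(B)

x = np.linspace(0.0, 1.0, 100)
B = design_matrix(x)
gamma = np.ones(n_basis)
print(B.shape, (B @ gamma)[:3])           # partition of unity: values ~ 1
```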

14.
It is well known that there exist multiple roots of the likelihood equations for finite normal mixture models. Selecting a consistent root for finite normal mixture models has long been a challenging problem. Simply using the root with the largest likelihood will not work because of spurious roots. In addition, the likelihood of normal mixture models with unequal variances is unbounded, and thus its maximum likelihood estimate (MLE) is not well defined. In this paper, we propose a simple root selection method for univariate normal mixture models by incorporating the idea of a goodness-of-fit test. Our new method inherits both the consistency properties of distance estimators and the efficiency of the MLE. The new method is simple to use and its computation can be easily done using existing R packages for mixture models. In addition, the proposed root selection method is very general and can also be applied to other univariate mixture models. We demonstrate the effectiveness of the proposed method and compare it with some other existing methods through simulation studies and a real data application.
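
The root-selection idea can be sketched with off-the-shelf tools: run EM from several starting values, regard each converged fit as a root of the likelihood equations, and pick the one whose fitted CDF is closest to the empirical CDF. The Kolmogorov-Smirnov distance used below is a simplified stand-in for the paper's goodness-of-fit criterion, and the data are simulated for illustration.

```python
# Minimal sketch: select among multiple EM roots of a two-component normal
# mixture by goodness of fit (smallest KS distance to the empirical CDF).
import numpy as np
from scipy.stats import norm, kstest
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
x = np.r_[rng.normal(0, 1, 300), rng.normal(4, 0.5, 100)].reshape(-1, 1)

def mixture_cdf(gm):
    w = gm.weights_
    mu = gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())    # 'spherical': one variance per component
    return lambda q: np.sum([wi * norm.cdf(q, mi, si)
                             for wi, mi, si in zip(w, mu, sd)], axis=0)

# each random start gives one converged root
roots = [GaussianMixture(2, covariance_type="spherical", random_state=s,
                         n_init=1, max_iter=500).fit(x) for s in range(10)]
ks = [kstest(x.ravel(), mixture_cdf(gm)).statistic for gm in roots]
best = roots[int(np.argmin(ks))]
print("selected root means:", np.round(best.means_.ravel(), 2))
```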

15.
Ante-dependence models can be used to model the covariance structure in problems involving repeated measures through time. They are conditional regression models which generalize Gabriel's constant-order ante-dependence model. Likelihood-based procedures are presented, together with simple expressions for likelihood ratio test statistics in terms of sums of squares from appropriate analyses of covariance. The estimation of the orders is approached as a model selection problem, and penalized likelihood criteria are suggested. Extensions of all the procedures discussed here to situations with a monotone pattern of missing data are presented.

16.
The regression model with an ordinal outcome has been widely used in many fields because of its effectiveness. Moreover, predictors measured with error and multicollinearity are long-standing problems that often occur in regression analysis. However, there are few studies dealing with measurement error models with a general ordinal response, and fewer still when multicollinearity is also present. The purpose of this article is to estimate the parameters of ordinal probit models with measurement error and multicollinearity. First, we propose to use regression calibration and refined regression calibration to estimate parameters in ordinal probit models with measurement error. Second, we develop new methods to obtain estimators of the parameters in the presence of multicollinearity and measurement error in the ordinal probit model. Furthermore, we extend all the methods to quadratic ordinal probit models and discuss the corresponding situation for ordinal logistic models. These estimators are consistent and asymptotically normally distributed under general conditions. They are easy to compute, perform well, and are robust against the normality assumption for the predictor variables in our simulation studies. The proposed methods are applied to some real datasets.
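
A minimal sketch of the regression calibration step with replicate measurements follows; for brevity it uses a binary probit rather than the full ordinal probit of the article, and the data-generating values are illustrative. The essential move is substituting an estimate of E[X | W] for the error-prone covariate before fitting.

```python
# Minimal sketch: regression calibration with two replicates W = X + U,
# then a probit fit on the calibrated covariate (naive fit shown for contrast).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(0.0, 1.0, n)                        # true covariate (unobserved)
w = x[:, None] + rng.normal(0.0, 0.7, (n, 2))      # two error-prone replicates
y = (0.5 + 1.0 * x + rng.normal(0.0, 1.0, n) > 0).astype(int)

# calibration: best linear predictor E[X | W-bar] estimated from the data
wbar = w.mean(axis=1)
sigma_u2 = np.mean(np.var(w, axis=1, ddof=1))      # measurement error variance
sigma_x2 = np.var(wbar, ddof=1) - sigma_u2 / w.shape[1]
shrink = sigma_x2 / (sigma_x2 + sigma_u2 / w.shape[1])
x_hat = wbar.mean() + shrink * (wbar - wbar.mean())

naive = sm.Probit(y, sm.add_constant(wbar)).fit(disp=0)
calib = sm.Probit(y, sm.add_constant(x_hat)).fit(disp=0)
print("naive slope:", round(naive.params[1], 2),
      " calibrated slope:", round(calib.params[1], 2))  # calibrated is less attenuated
```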

17.
Finite mixture methods are applied to bird band-recovery studies to allow for heterogeneity of survival. Birds are assumed to belong to one of finitely many groups, each of which has its own survival rate (or set of survival rates varying by time and/or age). The group to which a specific animal belongs is not known, so its survival probability is a random variable from a finite mixture. Heterogeneity is thus modelled as a latent effect. This gives a wide selection of likelihood-based models, which may be compared using likelihood ratio tests. These models are discussed with reference to real and simulated data, and compared with previous models.

18.
Maximum likelihood is a widely used estimation method in statistics. This method is model-dependent and as such is criticized as being non-robust. In this article, we consider using a weighted likelihood method to make robust inferences for linear mixed models, where weights are determined at both the subject level and the observation level. This approach is appropriate for problems where maximum likelihood is the basic fitting technique, but a subset of data points is discrepant with the model. It allows us to reduce the impact of outliers without complicating the basic linear mixed model with normally distributed random effects and errors. The weighted likelihood estimators are shown to be robust and asymptotically normal. Our simulation study demonstrates that the weighted estimates are much better than the unweighted ones when a subset of data points is far away from the rest. Its application to the analysis of deglutition apnea duration in normal swallows shows that the differences between the weighted and unweighted estimates are due to a large number of outliers in the data set.
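
The downweighting idea can be sketched in the simpler setting of ordinary linear regression: iteratively reweighted least squares with Huber-type observation-level weights, so discrepant points contribute less. This is an illustrative analogue, not the article's mixed-model estimator (which also carries subject-level weights).

```python
# Minimal sketch: observation-level downweighting via Huber weights in IRLS.
import numpy as np

def huber_weights(resid, scale, c=1.345):
    a = np.abs(resid / scale)
    return np.where(a <= c, 1.0, c / a)     # full weight inside, shrunk outside

def weighted_fit(X, y, iters=20):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]              # unweighted start
    for _ in range(iters):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        w = huber_weights(r, scale)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)            # weighted normal eqns
    return beta

rng = np.random.default_rng(6)
X = np.c_[np.ones(200), rng.standard_normal(200)]
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 1, 200)
y[:10] += 15.0                                                # contaminate a subset
print("unweighted:", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 2))
print("weighted:  ", np.round(weighted_fit(X, y), 2))
```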

19.
Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards model and the proportional odds model as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameter and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances of the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.
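
The key computational detail, how left truncation modifies the risk sets, can be sketched for the proportional hazards special case: subject j is at risk at an event time t only if entry_j < t <= exit_j. Below is a hand-rolled partial likelihood with this adjustment; the simulation design is illustrative.

```python
# Minimal sketch: Cox partial likelihood with truncation-adjusted risk sets,
# optimized numerically with scipy.
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_lik(beta, start, stop, event, X):
    eta = X @ beta
    nll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = (start < stop[i]) & (stop >= stop[i])   # truncation-adjusted risk set
        nll -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return nll

rng = np.random.default_rng(7)
n = 400
X = rng.standard_normal((n, 2))
beta_true = np.array([0.7, -0.5])
latent = rng.exponential(np.exp(-X @ beta_true))   # PH model: hazard exp(x'beta)
start = rng.uniform(0.0, 0.5, n)                   # entry (truncation) times
keep = latent > start                              # shorter lives are never sampled
X, start, latent = X[keep], start[keep], latent[keep]
cens = start + rng.exponential(2.0, keep.sum())    # right censoring after entry
stop = np.minimum(latent, cens)
event = (latent <= cens).astype(int)

fit = minimize(neg_log_partial_lik, np.zeros(2), args=(start, stop, event, X))
print("estimates:", np.round(fit.x, 2), " truth:", beta_true)
```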

20.
The EM algorithm is often used for finding the maximum likelihood estimates in generalized linear models with incomplete data. In this article, the author presents a robust method in the framework of maximum likelihood estimation for fitting generalized linear models when covariates are missing nonignorably. His robust approach is useful for downweighting any influential observations when estimating the model parameters. To avoid computational problems involving irreducibly high-dimensional integrals, he adopts a Metropolis-Hastings algorithm based on a Markov chain sampling method. He carries out simulations to investigate the behaviour of the robust estimates in the presence of outliers and missing covariates; furthermore, he compares these estimates to the classical maximum likelihood estimates. Finally, he illustrates his approach using data on the occurrence of delirium in patients operated on for abdominal aortic aneurysm.
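
A minimal sketch of the random-walk Metropolis-Hastings step for one missing covariate follows: instead of integrating the covariate out, it is sampled from its conditional distribution given the observed response, known only up to a constant. The logistic response model and the normal covariate model below are illustrative assumptions, not the author's exact setup.

```python
# Minimal sketch: random-walk Metropolis-Hastings for one subject's missing
# covariate x, targeting p(x | y) proportional to p(y | x) p(x).
import numpy as np

rng = np.random.default_rng(8)

def log_target(x, y, beta0, beta1, mu_x, sd_x):
    """log p(x | y) = log p(y | x) + log p(x) + const."""
    eta = beta0 + beta1 * x
    loglik = y * eta - np.log1p(np.exp(eta))      # Bernoulli log-likelihood
    logprior = -0.5 * ((x - mu_x) / sd_x) ** 2    # normal covariate model
    return loglik + logprior

def mh_sample(y, n_iter=5000, step=1.0, **pars):
    x = pars["mu_x"]                              # start at the covariate mean
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + step * rng.standard_normal()   # random-walk proposal
        if np.log(rng.uniform()) < log_target(prop, y, **pars) - log_target(x, y, **pars):
            x = prop                              # accept
        draws[t] = x
    return draws

draws = mh_sample(y=1, beta0=-1.0, beta1=2.0, mu_x=0.0, sd_x=1.0)
print("posterior mean of the missing covariate:", round(draws[2000:].mean(), 2))
```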
