Similar Articles
20 similar articles found.
1.
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which the primary exposure variables are observed with a probability that depends on the observed value of the outcome variable. When the outcome of interest is a failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data, including work on related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
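The event-enrichment idea behind an ODS design can be illustrated with a minimal sketch (a toy simulation, not any reviewed paper's design; all names and sampling fractions are hypothetical). A small simple random subsample is kept, and a supplemental sample is drawn only from subjects whose event was observed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: exposure x, failure time t, independent censoring time c.
n = 10000
x = rng.normal(size=n)
t = rng.exponential(scale=np.exp(-0.5 * x))   # failure time depends on exposure
c = rng.exponential(scale=1.0)                # independent censoring
event = (t <= c).astype(int)                  # 1 = failure observed before censoring

# ODS: a small simple random subsample, plus a supplemental sample drawn
# only from subjects whose event occurred (the informative region).
p_srs, p_supp = 0.05, 0.5                     # illustrative sampling fractions
sampled = (rng.random(n) < p_srs) | ((event == 1) & (rng.random(n) < p_supp))

# The ODS sample is enriched in events relative to the full cohort.
print(event[sampled].mean() > event.mean())
```

Because the supplemental draw conditions on the event indicator, the analysis must account for the biased selection; the sketch shows only the sampling step.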

2.
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo. Lifetime Data Analysis (2020) 26(1):85–108
We propose a two-stage outcome-dependent sampling design and inference procedure for studies that concern interval-censored failure time outcomes. This design enhances the...

3.
When data are subject to outcome-dependent nonresponse, pseudo-likelihood (PL) yields consistent regression coefficients without specifying the missing-data mechanism. However, it is onerous to derive parameter estimators, including their standard errors, from the regression coefficients under PL. The present study applies an imputation method to compute the asymptotic standard errors of the parameter estimators. The proposed method is simpler than the delta method, and in simulation and application studies it produced standard errors similar in magnitude to those obtained by bootstrapping.

4.
Two-phase sampling is a cost-effective method of data collection that uses outcome-dependent sampling for the second-phase sample. To make efficient use of auxiliary information and to improve domain estimation, mass imputation can be used in two-phase sampling. Rao and Sitter (1995) introduced mass imputation for two-phase sampling and its variance estimation under simple random sampling in both phases. In this paper, we extend the Rao–Sitter method to general sampling designs. The proposed method is further extended to mass imputation for categorical data. A limited simulation study examines the performance of the proposed methods.
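The basic mass-imputation step can be sketched as follows (a toy instance under simple random sampling at both phases, in the spirit of Rao–Sitter; the data and working model are hypothetical, not the paper's example). A working regression fitted on the second-phase sample is used to impute the study variable for every first-phase unit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase 1: auxiliary variable x observed for the full first-phase sample.
n = 5000
x = rng.normal(loc=2.0, size=n)
y = 3.0 + 1.5 * x + rng.normal(scale=0.5, size=n)   # y measured only in phase 2

# Phase 2: a small second-phase subsample where y is actually observed.
phase2 = rng.random(n) < 0.10

# Fit a working model on phase-2 data, then mass-impute y for every
# phase-1 unit and estimate the mean from the imputed values.
beta = np.polyfit(x[phase2], y[phase2], deg=1)
y_imp = np.polyval(beta, x)

est_imputed = y_imp.mean()      # mass-imputation estimator of E[y]
est_phase2 = y[phase2].mean()   # naive phase-2-only estimator

print(round(est_imputed, 1))
```

The imputed estimator exploits the auxiliary x observed for all first-phase units, which is what makes it useful for domain estimation; variance estimation (the harder part of the paper) is omitted here.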

5.
The problems that arise when using the likelihood ratio test for the identification of a mixture distribution are well known: non-identifiability of the parameters and a null hypothesis corresponding to a boundary point of the parameter space. In their approach to the problem of testing homogeneity against a mixture with two components, Ghosh and Sen took into account these specific problems. Under general assumptions, they obtained the asymptotic distribution of the likelihood ratio test statistic. However, their result requires a separation condition which is not completely satisfactory. We show that it is possible to remove this condition with assumptions which involve the second derivatives of the density only.

6.
The established general results on convergence properties of the EM algorithm require the sequence of EM parameter estimates to fall in the interior of the parameter space over which the likelihood is being maximized. This paper presents convergence properties of the EM sequence of likelihood values and parameter estimates in constrained parameter spaces for which the sequence of EM parameter estimates may converge to the boundary of the constrained parameter space contained in the interior of the unconstrained parameter space. Examples of the behavior of the EM algorithm applied to such parameter spaces are presented.
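The boundary behaviour described here can be reproduced with a toy constrained EM (a sketch under assumed data and constraints, not an example from the paper): the mixing weight of a two-component normal mixture is restricted to [0, 0.3] while the data favour 0.5, so the constrained M-step pushes the EM sequence onto the boundary while the likelihood values remain monotone:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from a 0.5/0.5 mixture of N(0,1) and N(3,1); only the mixing
# weight pi is estimated, under the hypothetical constraint pi <= 0.3.
n = 2000
z = rng.random(n) < 0.5
x = np.where(z, rng.normal(3.0, 1.0, n), rng.normal(0.0, 1.0, n))

def phi(v, mu):
    """Standard-variance normal density centred at mu."""
    return np.exp(-0.5 * (v - mu) ** 2) / np.sqrt(2.0 * np.pi)

pi, loglik = 0.1, []
for _ in range(100):
    # E-step: responsibilities of the N(3,1) component.
    num = pi * phi(x, 3.0)
    den = num + (1.0 - pi) * phi(x, 0.0)
    r = num / den
    loglik.append(np.log(den).sum())
    # Constrained M-step: the unconstrained maximizer is r.mean(),
    # so the constrained maximizer over [0, 0.3] is its clipped value.
    pi = min(r.mean(), 0.3)

print(pi)   # settles on the boundary value 0.3
```

Because the M-step maximizes the expected complete-data log-likelihood over the constrained set, each iteration still increases the observed likelihood, exactly the setting the paper analyzes.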

7.
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum-likelihood estimates and that the likelihood-ratio test based on the local asymptotic approximation has higher power than the likelihood-ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.

8.
The problem of clustering individuals is considered within the context of a mixture of distributions. A modification of the usual approach to population mixtures is employed. As usual, a parametric family of distributions is considered, with a set of parameter values associated with each population. In addition, an identification parameter is associated with each observation, indicating from which population the observation arose. The resulting likelihood function is interpreted in terms of the conditional probability density of a sample from a mixture of populations, given the identification parameter of each observation. Clustering algorithms are obtained by applying a method of iterated maximum likelihood to this likelihood function.

9.
This article develops empirical likelihood for threshold autoregressive models. We propose general estimating equations based on moment constraints. Under suitable conditions, we show that the empirical likelihood estimators of the parameters are asymptotically normally distributed and that the proposed log empirical likelihood ratio statistic asymptotically follows a standard chi-squared distribution.
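The moment-constraint machinery can be illustrated in its simplest instance, empirical likelihood for a scalar mean with estimating function m(x, μ) = x − μ (a generic sketch, not the threshold-autoregressive setting of the paper; the data and tolerances are hypothetical). The Lagrange multiplier is found by a damped Newton iteration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.0, scale=1.0, size=200)

def neg2_log_elr(mu):
    """-2 log empirical likelihood ratio for the constraint E[X - mu] = 0."""
    g = x - mu
    lam = 0.0
    for _ in range(100):
        w = 1.0 + lam * g
        # Newton step on the concave dual; halve it until all EL weights
        # 1/(n(1 + lam*g_i)) stay positive.
        step = np.sum(g / w) / np.sum((g / w) ** 2)
        while np.any(1.0 + (lam + step) * g <= 0):
            step /= 2.0
        lam += step
    return 2.0 * np.sum(np.log(1.0 + lam * g))

s_true = neg2_log_elr(1.0)   # at the true mean: approximately chi-squared(1)
s_far = neg2_log_elr(1.5)    # away from the true mean: much larger
print(s_true < s_far)
```

The chi-squared calibration of the statistic at the true parameter is what the article's Wilks-type result delivers for its more general estimating equations.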

10.
Three forms of a general null hypothesis H0 on the factorial parameters of a general asymmetrical factorial paired comparison experiment are considered. A class of partially balanced designs corresponding to each form of H0 is constructed, and the A-, D- and E-optimal designs in each class, minimizing respectively the trace, determinant and largest eigenvalue of a defined covariance matrix of the related maximum likelihood estimators, are determined. Moreover, the optimal design in each class maximizes the noncentrality parameter λ2 of the asymptotic noncentral chi-square distribution of the likelihood ratio statistic −2 log λ for testing H0 under defined local alternatives. These results apply directly to symmetrical factorial paired comparison experiments as special cases. Examples are given to illustrate applications of the developed results.

11.
A common problem in longitudinal data analysis is that subjects' follow-up is irregular and often related to the past outcome or to other factors associated with the outcome measure that are not included in the regression model. Analyses unadjusted for outcome-dependent follow-up yield biased estimates. We propose a longitudinal data analysis that provides consistent estimates in regression models subject to outcome-dependent follow-up. We focus on semiparametric marginal log-link regression with an arbitrary unspecified baseline function. Based on estimating equations, the proposed class of estimators is root-n consistent and asymptotically normal. We present simulation studies assessing the performance of the estimators in finite samples, and we illustrate our approach using data from a health services research study.

12.
The non-Gaussian maximum likelihood estimator is frequently used in GARCH models with the intention of capturing heavy-tailed returns. However, unless the parametric likelihood family contains the true likelihood, the estimator is inconsistent due to density misspecification. To correct this bias, we identify an unknown scale parameter ηf that is critical for identification and consistency, and we propose a three-step quasi-maximum likelihood procedure with non-Gaussian likelihood functions. This novel approach is consistent and asymptotically normal under weak moment conditions. Moreover, it achieves better efficiency than the Gaussian alternative, particularly when the innovation error has heavy tails. We also summarize and compare the values of the scale parameter and the asymptotic efficiency of estimators based on different choices of likelihood function with increasing heaviness of the innovation tails. Numerical studies confirm the advantages of the proposed approach.

13.
This paper studies four methods for estimating the Box-Cox parameter used to transform data to normality. Three of these are based on optimizing test statistics for standard normality tests (the Shapiro-Wilk, skewness, and kurtosis tests); the fourth uses the maximum likelihood estimator of the Box-Cox parameter. The four methods are compared and evaluated in a simulation study, where their performance under different skewness and kurtosis conditions is analyzed. The estimator based on optimizing the Shapiro-Wilk statistic generally gives rise to the best transformations, while the maximum likelihood estimator performs almost as well. Estimators based on optimizing skewness and kurtosis do not perform well in general.
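The two best-performing estimators above can be sketched side by side (a minimal illustration on simulated lognormal data, not the paper's simulation design; the grid and sample size are arbitrary choices). For lognormal data the normalizing Box-Cox parameter is λ = 0, i.e. the log transform:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Lognormal data: the log transform (Box-Cox lambda = 0) normalizes it.
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)

def bc(v, lam):
    """Box-Cox transform with parameter lam (log transform at lam = 0)."""
    return np.log(v) if abs(lam) < 1e-12 else (v ** lam - 1.0) / lam

# Estimator 1: choose lambda maximizing the Shapiro-Wilk W statistic on a grid.
grid = np.linspace(-2.0, 2.0, 81)
w = [stats.shapiro(bc(x, lam)).statistic for lam in grid]
lam_sw = grid[int(np.argmax(w))]

# Estimator 2: scipy's maximum likelihood estimate of lambda.
_, lam_mle = stats.boxcox(x)

print(abs(lam_sw) < 0.5 and abs(lam_mle) < 0.5)
```

Both estimates land near the true value 0 here; the paper's contribution is the systematic comparison across skewness and kurtosis conditions, which this sketch does not attempt.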

14.
Effective implementation of likelihood inference in models for high-dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi-squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.

15.
In a model of equioverlapping samples, maximum likelihood estimation of a Poisson parameter is examined and compared with two linear unbiased estimators in terms of mean squared error. Since the likelihood estimator is not explicitly available in general, a simulation study was performed and the results are illustrated.

16.
Consider the problem of estimating the common location parameter of two exponential populations using record data when the scale parameters are unknown. We derive the maximum likelihood estimator (MLE), the modified maximum likelihood estimator (MMLE) and the uniformly minimum variance unbiased estimator (UMVUE) of the common location parameter. Further, we derive a general result for inadmissibility of an equivariant estimator under the scaled-squared error loss function. Using this result, we conclude that the MLE and the UMVUE are inadmissible and better estimators are provided. A simulation study is conducted for comparing the performances of various competing estimators.

17.
Four general classes of partially balanced designs for 2^n factorials, corresponding to four different forms of a general null hypothesis H0 on factorial effects, are presented. For the typical design in each class, the simplified form of the non-centrality parameter λ2 of the asymptotic chi-square distribution of the likelihood ratio statistic for testing the corresponding form of H0 is derived under defined local alternatives. Optimal designs di maximizing λ2 in the i-th class and minimizing the trace, determinant and largest eigenvalue of a defined covariance matrix, i = 1,…,4, are determined.

18.
In this paper, we are interested in the weighted distributions of a bivariate three-parameter logarithmic series distribution studied by Kocherlakota and Kocherlakota (1990). The weighted versions of the model are derived with weight W(x,y) = x[r] y[s]. Explicit expressions for the probability mass function and probability generating function are derived in the case r = s = 1. The marginal and conditional distributions are derived in the general case. The maximum likelihood estimation of the parameters, in both the two-parameter and three-parameter cases, is studied. A procedure for computer generation of bivariate data from a discrete distribution is described. This enables us to present two examples, in order to illustrate the methods developed, for finding the maximum likelihood estimates.

19.
The Fisher–Bingham family is a potentially useful class of spherical distributions, but its practical application has been hindered by various problems, including the identification of the form of the distribution given the parameter values, and the lack of an effective technique for calculating the normalising constant, a requirement for maximum likelihood estimation. It is explained how these difficulties can be resolved, and then parameter estimation and hypothesis testing are discussed. A practical example is given.

20.
We address the problem of parameter estimation in multivariate distributions under ignorable non-monotone missing data. The factoring likelihood method for monotone missing data, introduced by Rubin (1974), is applied to the more general case of non-monotone missing data. The proposed method is asymptotically equivalent to the Fisher scoring method applied to the observed likelihood, but avoids the burden of computing the first and second partial derivatives of the observed likelihood. Instead, the maximum likelihood estimates and their information matrices for each partition of the data set are computed separately and combined naturally using the generalized least squares method. A numerical example is presented to illustrate the method.
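The final combination step can be illustrated with a minimal sketch (hypothetical data, not the paper's numerical example): for a common scalar parameter, pooling partition-wise MLEs by generalized least squares reduces to weighting each estimate by its information (inverse variance):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two hypothetical data partitions, each yielding an MLE of the same mean
# together with its estimated information (inverse variance).
x1 = rng.normal(loc=10.0, scale=2.0, size=50)    # partition 1
x2 = rng.normal(loc=10.0, scale=2.0, size=200)   # partition 2

est = np.array([x1.mean(), x2.mean()])
info = np.array([x1.size / x1.var(ddof=1), x2.size / x2.var(ddof=1)])

# GLS for a common scalar parameter = inverse-variance (information) weighting.
theta = np.sum(info * est) / np.sum(info)
var_theta = 1.0 / np.sum(info)

print(round(theta, 1))
```

The combined estimate is more precise than either partition alone (its variance is the reciprocal of the summed information), which is the intuition behind combining the partition-wise information matrices in the general multivariate case.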
