Similar Literature
20 similar documents found (search time: 46 ms)
1.
This paper proposes the use of the integrated likelihood for inference on the mean effect in small-sample meta-analysis for continuous outcomes. The method eliminates the nuisance parameters given by variance components through integration with respect to a suitable weight function, with no need to estimate them. The integrated likelihood approach takes proper account of the estimation uncertainty of the within-study variances, thus providing confidence intervals with empirical coverage closer to nominal levels than standard likelihood methods. The improvement is remarkable when either (i) the number of studies is small to moderate or (ii) the small sample size of the studies does not allow the within-study variances to be treated as known, as is common in applications. Moreover, the use of the integrated likelihood avoids numerical pitfalls related to the estimation of variance components which can affect alternative likelihood approaches. The proposed methodology is illustrated via simulation and applied to a meta-analysis study in nutritional science.
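The core idea, eliminating a nuisance variance by integrating the likelihood against a weight function rather than plugging in an estimate, can be illustrated on a single normal sample. The sketch below is not the paper's meta-analysis procedure; it is a minimal toy example, assuming i.i.d. normal data and the weight π(σ²) ∝ 1/σ², that compares the integrated likelihood for the mean with the profile likelihood.

```python
# Toy illustration of eliminating a nuisance variance by integration
# (not the paper's meta-analysis method; a one-sample normal example).
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=8)   # small sample, variance unknown
n, ybar = len(y), y.mean()

def log_lik(mu, sigma2):
    return np.sum(stats.norm.logpdf(y, loc=mu, scale=np.sqrt(sigma2)))

def integrated_lik(mu):
    # Integrate the likelihood over sigma^2 with weight 1/sigma^2 (an assumption).
    f = lambda s2: np.exp(log_lik(mu, s2)) / s2
    val, _ = integrate.quad(f, 1e-6, 200.0)
    return val

def profile_lik(mu):
    # Plug in the conditional MLE of sigma^2 for the given mu.
    s2_hat = np.mean((y - mu) ** 2)
    return np.exp(log_lik(mu, s2_hat))

grid = np.linspace(ybar - 4, ybar + 4, 201)
int_l = np.array([integrated_lik(m) for m in grid])
prof_l = np.array([profile_lik(m) for m in grid])

# Comparing the normalised curves shows the integrated likelihood is wider,
# reflecting the uncertainty in the unknown variance.
print(grid[int_l.argmax()], grid[prof_l.argmax()])
```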

2.
Asymptotic variance plays an important role in inference based on interval estimates of attributable risk. This paper compares the asymptotic variances of the attributable risk estimate obtained using the delta method and the Fisher information matrix for a 2×2 case–control study, a setting chosen for its practical relevance. The expressions for these two asymptotic variance estimates are shown to be equivalent. Because the asymptotic variance usually underestimates the standard error, the bootstrap standard error has also been utilized in constructing interval estimates of attributable risk and compared with those based on the asymptotic estimates. A simulation study shows that the bootstrap interval estimate performs well in terms of coverage probability and confidence length. An exact test procedure for testing independence between the risk factor and the disease outcome using attributable risk is proposed and is justified for use with real-life examples in small-sample situations where inference using the asymptotic variance may not be valid.
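As a concrete illustration of a bootstrap interval of the kind described above, the sketch below resamples a hypothetical 2×2 case–control table and builds a percentile interval for the attributable risk. The table counts, the use of the common case–control estimator AF = p(E|case)·(OR − 1)/OR, and the percentile method are all assumptions for illustration, not details taken from the paper.

```python
# Percentile bootstrap CI for attributable risk from a 2x2 case-control table.
# Hypothetical counts; the AR formula is the common case-control estimator
# AF = p(E|case) * (OR - 1) / OR, used here only for illustration.
import numpy as np

rng = np.random.default_rng(1)

# exposed / unexposed counts among cases and controls (hypothetical)
a, b = 40, 60      # cases: exposed, unexposed
c, d = 25, 75      # controls: exposed, unexposed

def attributable_risk(a, b, c, d):
    odds_ratio = (a * d) / (b * c)
    p_exp_case = a / (a + b)
    return p_exp_case * (odds_ratio - 1.0) / odds_ratio

B = 2000
boot = np.empty(B)
for i in range(B):
    # resample cases and controls separately, respecting the case-control design
    a_b = rng.binomial(a + b, a / (a + b))
    c_b = rng.binomial(c + d, c / (c + d))
    b_b, d_b = (a + b) - a_b, (c + d) - c_b
    if min(a_b, b_b, c_b, d_b) == 0:   # guard against degenerate resamples
        boot[i] = np.nan
        continue
    boot[i] = attributable_risk(a_b, b_b, c_b, d_b)

lo, hi = np.nanpercentile(boot, [2.5, 97.5])
print(f"AR = {attributable_risk(a, b, c, d):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```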

3.
Simple heterogeneity variance estimation for meta-analysis
Summary. A simple method of estimating the heterogeneity variance in a random-effects model for meta-analysis is proposed. The proposed estimator is simple and easy to calculate and has improved bias compared with the most common estimator used in random-effects meta-analysis, particularly when the heterogeneity variance is moderate to large. In addition, it always yields a non-negative estimate of the heterogeneity variance, unlike some existing estimators. We find that random-effects inference about the overall effect based on this heterogeneity variance estimator is more reliable than inference using the common estimator, in terms of coverage probability for an interval estimate.
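For context, the "most common estimator" in random-effects meta-analysis is typically the DerSimonian–Laird moment estimator. The sketch below implements that standard estimator (not the paper's proposed one) and the resulting random-effects interval for the overall effect, on made-up study data.

```python
# DerSimonian-Laird heterogeneity variance and random-effects pooling.
# This is the *standard* comparator estimator, not the paper's proposed one;
# the study effects and within-study variances below are made up.
import numpy as np
from scipy import stats

y = np.array([0.30, 0.10, 0.55, 0.25, 0.80, -0.05])   # study effect estimates
v = np.array([0.04, 0.09, 0.06, 0.05, 0.12, 0.07])    # within-study variances

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
k = len(y)

# DerSimonian-Laird estimate, truncated at zero so it is non-negative
tau2_dl = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled effect, and a 95% interval
w_star = 1.0 / (v + tau2_dl)
mu_hat = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
z = stats.norm.ppf(0.975)
print(f"tau^2 = {tau2_dl:.4f}, mu = {mu_hat:.3f}, "
      f"95% CI = ({mu_hat - z * se:.3f}, {mu_hat + z * se:.3f})")
```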

4.
In this paper, a small-sample asymptotic method is proposed for higher-order inference in the stress–strength reliability model, R=P(Y<X), where X and Y are independently distributed as Burr-type X distributions. In a departure from the current literature, we allow the scale parameters of the two distributions to differ, and the likelihood-based third-order inference procedure is applied to obtain inference for R. The difficulty in implementing the method lies in obtaining the constrained maximum likelihood estimates (MLEs). A penalized likelihood method is proposed to handle the numerical complications of maximizing the constrained likelihood. The proposed procedures are illustrated using a sample of carbon fibre strength data. Our results from simulation studies comparing the coverage probabilities of the proposed small-sample asymptotic method with some existing large-sample asymptotic methods show that the proposed method is very accurate even when the sample sizes are small.
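To make the quantity R = P(Y < X) concrete, the sketch below draws from two Burr-type X (generalized Rayleigh) distributions by inversion and estimates R by Monte Carlo. The parameterization F(x) = (1 − e^{−(λx)²})^α and the chosen parameter values are assumptions for illustration; the paper's small-sample likelihood machinery is not reproduced here.

```python
# Monte Carlo estimate of R = P(Y < X) for two Burr-type X variables,
# sampled by inverting F(x) = (1 - exp(-(lam*x)^2))^alpha.
# The parameterization and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def rburr_x(n, alpha, lam):
    u = rng.uniform(size=n)
    # invert F: x = sqrt(-log(1 - u^(1/alpha))) / lam
    return np.sqrt(-np.log(1.0 - u ** (1.0 / alpha))) / lam

n = 200_000
x = rburr_x(n, alpha=2.0, lam=1.0)   # strength
y = rburr_x(n, alpha=1.5, lam=1.5)   # stress (scale allowed to differ)

r_hat = np.mean(y < x)
se = np.sqrt(r_hat * (1 - r_hat) / n)
print(f"R-hat = {r_hat:.4f} (Monte Carlo s.e. {se:.4f})")
```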

5.

The generalized case-cohort design is widely used in large cohort studies to reduce cost and improve efficiency. Taking prior information about the parameters into account in the modeling process can further improve inference efficiency. In this paper, we consider fitting the proportional hazards model with constraints for generalized case-cohort studies. We establish a working likelihood function for the estimation of the model parameters. The asymptotic properties of the proposed estimator are derived via the Karush-Kuhn-Tucker conditions, and its finite-sample properties are assessed by simulation studies. A modified minorization-maximization algorithm is developed for the numerical calculation of the constrained estimator. An application to a Wilms tumor study demonstrates the utility of the proposed method in practice.

6.
Interval-censored data arise naturally in many studies. For their regression analysis, many approaches have been proposed under various models, and for most of them inference is carried out based on asymptotic normality. In particular, Zhang et al. (2005) discussed such a procedure under the linear transformation model. It is well known that the symmetry implied by the normal approximation may not always be appropriate, and the method can underestimate the variance of the estimated parameters. This paper proposes an empirical likelihood-based procedure for the problem. A simulation study and the analysis of a real data set are conducted to assess the performance of the procedure.

7.
With a parametric model, a measure of departure for an interest parameter is often easily constructed but frequently depends in distribution on nuisance parameters; the elimination of such nuisance parameter effects is a central problem of statistical inference. Fraser & Wong (1993) proposed a nuisance-averaging, or approximate Studentization, method for eliminating the nuisance parameter effects. They showed that, for many standard problems where an exact answer is available, the averaging method reproduces the exact answer. They also showed that, if the exact answer is unavailable, as, say, in the gamma-mean problem, the averaging method provides a simple approximation which is very close to that obtained from third-order asymptotic theory. The general asymptotic accuracy of the method, however, has not been examined. In this paper, we show in a general asymptotic context that the averaging method is asymptotically a second-order procedure for eliminating the effects of nuisance parameters.

8.
This article investigates the asymptotic properties of quasi-maximum likelihood (QML) estimators for random-effects panel data transformation models in which both the response and (some of) the covariates are subject to transformations for inducing normality, flexible functional form, homoskedasticity, and a simple model structure. We develop a QML-type procedure for model estimation and inference. We prove the consistency and asymptotic normality of the QML estimators, and propose a simple bootstrap procedure that leads to a robust estimate of the variance-covariance (VC) matrix. Monte Carlo results reveal that the QML estimators perform well in finite samples, and that the gains from using the robust VC matrix estimate for inference can be enormous.

9.
叶光 (Ye Guang), 《统计研究》 (Statistical Research), 2011, 28(3): 99-106
For the fully modified ordinary least squares (FMOLS) estimation method, a bootstrap inference procedure for the cointegration parameters is presented, and it is proved that under the null hypothesis the bootstrap statistic has the same asymptotic distribution as the test statistic. A study of test power shows that, although the restricted bootstrap performs well in terms of actual test size, the distribution of the bootstrap statistic is indeterminate when the null hypothesis does not hold, so its empirical distribution cannot serve as a valid estimate of the exact distribution of the test statistic. In applications the unrestricted bootstrap is recommended, because its bootstrap statistic has the same asymptotic distribution as the test statistic under the null hypothesis regardless of whether the observed data satisfy the null hypothesis. Finally, the finite-sample performance of bootstrap inference and asymptotic inference is compared via Monte Carlo simulation.

10.
The case-cohort design is widely used as a means of reducing cost in large cohort studies, especially when the disease rate is low and covariate measurements may be expensive, and has been discussed by many authors. In this paper, we discuss regression analysis of case-cohort studies that produce interval-censored failure time data with dependent censoring, a situation for which there does not seem to be an established approach. For inference, a sieve inverse probability weighting estimation procedure is developed with the use of Bernstein polynomials to approximate the unknown baseline cumulative hazard functions. The proposed estimators are shown to be consistent and the asymptotic normality of the resulting regression parameter estimators is established. A simulation study is conducted to assess the finite-sample properties of the proposed approach and indicates that it works well in practical situations. The proposed method is applied to an HIV/AIDS case-cohort study that motivated this investigation.

11.
Recently, exact inference under hybrid censoring schemes has attracted extensive attention in the field of reliability analysis. However, most authors neglect the possibility of a competing risks model. This paper mainly discusses exact likelihood inference for the analysis of generalized type-I hybrid censored data under an exponential competing risks failure model. Based on the maximum likelihood estimates of the unknown parameters, we establish the exact conditional distribution of the parameters via the conditional moment generating function, and then obtain moment properties as well as exact confidence intervals (CIs) for the parameters. Furthermore, approximate CIs are constructed from the asymptotic distribution and by the bootstrap method. We also compare their performance with the exact method through Monte Carlo simulations. Finally, a real data set is analysed to illustrate the validity of all the methods developed here.
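The sketch below illustrates maximum likelihood estimation for an exponential competing-risks model under simple type-I censoring (a simplification of the generalized hybrid scheme discussed above) together with parametric-bootstrap CIs. The rates, the censoring time, and the use of plain type-I censoring instead of hybrid censoring are assumptions for illustration.

```python
# Exponential competing risks under type-I censoring: MLEs and a
# parametric-bootstrap CI (a simplification of generalized hybrid censoring).
import numpy as np

rng = np.random.default_rng(3)
lam_true = (0.5, 0.3)     # cause-specific rates (illustrative)
tau, n = 2.0, 80          # censoring time and sample size

def simulate(lam1, lam2, n):
    t1 = rng.exponential(1.0 / lam1, n)          # latent time, cause 1
    t2 = rng.exponential(1.0 / lam2, n)          # latent time, cause 2
    t = np.minimum(np.minimum(t1, t2), tau)      # observed time
    cause = np.where(t >= tau, 0, np.where(t1 <= t2, 1, 2))  # 0 = censored
    return t, cause

def mle(t, cause):
    total_time = t.sum()                         # total time on test
    return np.array([(cause == 1).sum() / total_time,
                     (cause == 2).sum() / total_time])

t, cause = simulate(*lam_true, n)
lam_hat = mle(t, cause)

B = 1000
boot = np.array([mle(*simulate(lam_hat[0], lam_hat[1], n)) for _ in range(B)])
ci = np.percentile(boot, [2.5, 97.5], axis=0)
print("lambda-hat:", np.round(lam_hat, 3))
print("parametric-bootstrap 95% CIs:", np.round(ci.T, 3))
```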

12.
In this paper, we investigate the estimation problem concerning a progressively type-II censored sample from the two-parameter bathtub-shaped lifetime distribution. We use the maximum likelihood method to obtain the point estimators of the parameters. We also provide a method for constructing an exact confidence interval and an exact joint confidence region for the parameters. Two numerical examples are presented to illustrate the method of inference developed here. Finally, Monte Carlo simulation studies are used to assess the performance of our proposed method.
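As a simple numerical companion, the sketch below fits a two-parameter bathtub-shaped (Chen-type) lifetime distribution with survival function S(t) = exp{λ(1 − e^{t^β})} to a complete (uncensored) sample by numerical maximum likelihood. The parameterization, the use of complete rather than progressively type-II censored data, and the choice of optimizer are all assumptions for illustration; they are not taken from the paper.

```python
# Numerical MLE for a two-parameter bathtub-shaped (Chen-type) distribution
# with S(t) = exp(lam * (1 - exp(t**beta))), fitted to a complete sample.
# (The paper treats progressively type-II censored data; this sketch does not.)
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def rchen(n, lam, beta):
    u = rng.uniform(size=n)
    # invert S(t): t = [log(1 - log(1 - u)/lam)]**(1/beta)
    return (np.log(1.0 - np.log(1.0 - u) / lam)) ** (1.0 / beta)

def neg_loglik(theta, t):
    lam, beta = np.exp(theta)          # optimize on the log scale for positivity
    logf = (np.log(lam) + np.log(beta) + (beta - 1) * np.log(t)
            + t ** beta + lam * (1.0 - np.exp(t ** beta)))
    return -np.sum(logf)

t = rchen(200, lam=0.5, beta=1.5)
res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(t,), method="Nelder-Mead")
lam_hat, beta_hat = np.exp(res.x)
print(f"lambda-hat = {lam_hat:.3f}, beta-hat = {beta_hat:.3f}")
```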

13.
We investigate empirical likelihood for the additive hazards model with current status data. An empirical log-likelihood ratio for a vector or subvector of regression parameters is defined and its limiting distribution is shown to be a standard chi-squared distribution. The proposed inference procedure enables us to make empirical likelihood-based inference for the regression parameters. The finite-sample performance of the proposed method is assessed in simulation studies and compared with that of a normal approximation method; the results show that the empirical likelihood method provides more accurate inference than the normal approximation method. A real data example is used for illustration.

14.
Doubly censored failure time data occur in many areas, including demographic studies, epidemiology, medical studies and tumorigenicity experiments, and correspondingly some inference procedures have been developed in the literature (Biometrika, 91, 2004, 277; Comput. Statist. Data Anal., 57, 2013, 41; J. Comput. Graph. Statist., 13, 2004, 123). In this paper, we discuss regression analysis of such data under a class of flexible semiparametric transformation models, which includes some commonly used models for doubly censored data as special cases. For inference, non-parametric maximum likelihood estimation is developed and, in particular, we present a novel expectation-maximization algorithm with the use of subject-specific independent Poisson variables. In addition, the asymptotic properties of the proposed estimators are established, and an extensive simulation study suggests that the proposed methodology works well for practical situations. The method is applied to an AIDS study.

15.

The objective of this paper is to propose an efficient estimation procedure for a marginal mean regression model for longitudinal count data and to develop a hypothesis test for detecting the presence of overdispersion. We extend the matrix expansion idea of quadratic inference functions to the negative binomial regression framework, which entails accommodating both the within-subject correlation and the overdispersion issue. Theoretical and numerical results show that the proposed procedure yields an asymptotically more efficient estimator than one ignoring either the within-subject correlation or the overdispersion. When overdispersion is absent from the data, however, the proposed method may lose estimation efficiency in practice, whereas the Poisson-based regression model fits the data sufficiently well. Therefore, we construct a hypothesis test that recommends an appropriate model for the analysis of correlated count data. Extensive simulation studies indicate that the proposed test identifies the appropriate model consistently. The proposed procedure is also applied to a transportation safety study, where it recommends the proposed negative binomial regression model.
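The model-choice question at the end of the abstract, Poisson versus negative binomial, can be illustrated with an ordinary (non-longitudinal) overdispersion check. The sketch below fits both models in statsmodels and compares them with a likelihood-ratio statistic evaluated on the parameter boundary, so the reference distribution is a 50:50 mixture of a point mass at zero and χ²₁. This is a generic check, not the quadratic-inference-function test proposed in the paper, and the simulated data and covariate are assumptions.

```python
# Generic overdispersion check: Poisson vs negative binomial fit,
# likelihood-ratio test on the boundary (0.5 * chi2_1 reference).
# This is NOT the paper's QIF-based test; data are simulated for illustration.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.4 * x)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # overdispersed counts

X = sm.add_constant(x)
pois = sm.Poisson(y, X).fit(disp=0)
negb = sm.NegativeBinomial(y, X).fit(disp=0)

lr = 2.0 * (negb.llf - pois.llf)
# p-value from the boundary-corrected mixture 0.5*chi2_0 + 0.5*chi2_1
p_value = 0.5 * stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, boundary-corrected p = {p_value:.4f}")
```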

16.

This article discusses regression analysis of right-censored failure time data where there may exist a cured subgroup and where covariate effects may vary with time, a phenomenon that often occurs in many medical studies. To address the problem, we discuss a class of varying-coefficient transformation models along with a logistic model for the cured subgroup. For inference, a sieve maximum likelihood approach is developed with the use of spline functions, and the asymptotic properties of the proposed estimators are established. The proposed method can be easily implemented, and the conducted simulation study suggests that it works well in practical situations. An illustrative example is provided.


17.
For the problem of testing whether the nonparametric component of a partially linear model is a polynomial function, one should first determine whether it belongs to the class of polynomial functions. By re-smoothing the fitted residuals of the partially linear model, a test statistic is constructed based on the trend in these residuals to test whether the component belongs to the polynomial class. An exact algorithm for computing the test P-value and a three-moment χ² approximation are given. Simulated and real examples fully demonstrate the effectiveness of the method.

18.
Under the case-cohort design introduced by Prentice (Biometrika 73:1–11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time-varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.

19.
Quasi-stationary distributions have many applications in diverse research fields. We develop a bootstrap-based maximum likelihood (BML) method for statistical inference on quasi-stationary distributions. To efficiently implement a bootstrap procedure that can handle the dependence among observations and speed up the computation, a novel block bootstrap algorithm is proposed to accommodate parallel bootstrapping. In particular, we select a suitable block length for use with the parallel bootstrap. The estimation error is investigated to establish its convergence. The proposed BML estimator is shown to be asymptotically unbiased. Some numerical studies are given to examine the performance of the new algorithm. Its advantages are evidenced through a comparison with some competitors, and some examples are analysed for illustration.
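The moving-block bootstrap that underlies this kind of procedure can be written in a few lines. The sketch below resamples overlapping blocks of an AR(1) series to obtain a standard error for the sample mean; the block length, the AR(1) data, and the statistic are illustrative assumptions, and nothing here reproduces the paper's quasi-stationary-distribution machinery or its parallel implementation.

```python
# Moving-block bootstrap standard error for the mean of a dependent series.
# Block length, AR(1) data and the statistic are illustrative choices only.
import numpy as np

rng = np.random.default_rng(6)

# Simulate an AR(1) series (dependent observations)
n, phi = 400, 0.6
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def moving_block_bootstrap(x, stat, block_len, B, rng):
    n = len(x)
    # all overlapping blocks of length block_len
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(blocks), size=n_blocks)
        resample = np.concatenate(blocks[idx])[:n]   # glue blocks, trim to n
        out[b] = stat(resample)
    return out

boot_means = moving_block_bootstrap(x, np.mean, block_len=15, B=2000, rng=rng)
print(f"mean = {x.mean():.3f}, block-bootstrap s.e. = {boot_means.std(ddof=1):.3f}")
```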

20.
The generalized Birnbaum–Saunders distribution belongs to a class of lifetime models that includes both lighter- and heavier-tailed distributions. This model adapts well to lifetime data, even when outliers exist, and has other good theoretical properties and application prospects. However, statistical inference tools may not exist in closed form for this model. Hence, simulation and numerical studies are needed, which require a random number generator. Three different ways to generate observations from this model are considered here. These generators are compared by utilizing a goodness-of-fit procedure as well as their effectiveness in recovering the true parameter values in Monte Carlo simulations. This goodness-of-fit procedure may also be used as an estimation method, and the quality of that estimation method is studied here. Finally, using a real data set, the generalized and classical Birnbaum–Saunders models are compared by means of this estimation method.
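For the classical (not generalized) Birnbaum–Saunders distribution, the standard normal-kernel generator and a Kolmogorov–Smirnov goodness-of-fit check look as follows. The shape and scale values are illustrative assumptions, and scipy's `fatiguelife` distribution is used as the reference Birnbaum–Saunders implementation; the paper's three generators and its generalized model are not reproduced here.

```python
# Classical Birnbaum-Saunders generator (normal kernel) plus a KS
# goodness-of-fit check against scipy's fatiguelife (BS) distribution.
# Parameters are illustrative; this is the classical, not generalized, model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, beta = 0.5, 2.0     # shape and scale (illustrative)
n = 5000

z = rng.standard_normal(n)
# T = beta * (alpha*z/2 + sqrt((alpha*z/2)^2 + 1))^2
t = beta * (alpha * z / 2.0 + np.sqrt((alpha * z / 2.0) ** 2 + 1.0)) ** 2

# scipy parameterizes the BS distribution as fatiguelife(c=alpha, scale=beta)
ks = stats.kstest(t, stats.fatiguelife(alpha, loc=0.0, scale=beta).cdf)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.4f}")
```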
