Similar Literature
20 similar documents found.
1.
The authors consider semiparametric efficient estimation of parameters in the conditional mean model for a simple incomplete data structure in which the outcome of interest is observed only for a random subset of subjects, while covariates and surrogate (auxiliary) outcomes are observed for all. They use optimal estimating function theory to derive the semiparametric efficient score in closed form. They show that when the covariates and auxiliary outcomes are discrete, a Horvitz-Thompson-type estimator with empirically estimated weights is semiparametric efficient. Simulation studies validate the finite-sample behaviour of the semiparametric efficient estimator and its asymptotic variance, and demonstrate the efficiency of the estimator in realistic settings.
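As a hedged illustration of the Horvitz-Thompson-type estimator with empirically estimated weights described above, the sketch below fits a linear conditional mean model E[Y | X] = X'beta by inverse-probability-weighted least squares, with the selection probability estimated empirically within cells defined by the discrete covariates and the surrogate outcome. The linear mean specification and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ht_conditional_mean(y, X, surrogate, observed):
    """y: outcomes (only meaningful where observed is True);
    X: design matrix of discrete covariates (fully observed);
    surrogate: discrete auxiliary outcome (fully observed);
    observed: boolean indicator that y was observed."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    observed = np.asarray(observed, bool)
    # Cell labels defined by the joint value of covariates and surrogate.
    cells = np.array([f"{tuple(x)}|{s}" for x, s in zip(X.tolist(), surrogate)])
    # Empirically estimated selection probability within each cell.
    pi_hat = np.empty(len(y))
    for c in np.unique(cells):
        idx = cells == c
        pi_hat[idx] = observed[idx].mean()
    # Horvitz-Thompson weighted least squares using only observed outcomes;
    # any cell containing an observed subject has pi_hat > 0.
    w = 1.0 / pi_hat[observed]
    Xo, yo = X[observed], y[observed]
    beta = np.linalg.solve(Xo.T @ (Xo * w[:, None]), Xo.T @ (yo * w))
    return beta
```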

2.
This study investigated the bias of factor loadings obtained from incomplete questionnaire data with imputed scores. Three models were used to generate the discrete ordered rating scale data typical of questionnaires, also known as Likert data: the multidimensional polytomous latent trait model, a normal ogive item response theory model, and the discretized normal model. Incomplete data due to nonresponse were simulated using either missing completely at random or not missing at random mechanisms. Subsequently, four imputation methods were applied to impute item scores in each incomplete data matrix. Based on a completely crossed six-factor design, it was concluded that bias was generally small for all data simulation methods, all imputation methods, and all nonresponse mechanisms. The two-way-plus-error imputation method had the smallest bias in the factor loadings, and bias under the discretized normal model was greater than under the other two models.
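A minimal sketch of what a two-way-plus-error imputation of Likert item scores might look like: the imputed score is the person mean plus the item mean minus the overall mean, with a random residual added and the result rounded to the nearest category. The exact definition used in the study (in particular how the error term is drawn) is an assumption here, not taken from the paper.

```python
import numpy as np

def two_way_plus_error(data, categories=(1, 2, 3, 4, 5), seed=None):
    """data: persons x items matrix of Likert scores with np.nan for missing."""
    rng = np.random.default_rng(seed)
    X = np.array(data, dtype=float)
    person_mean = np.nanmean(X, axis=1, keepdims=True)   # row (person) means
    item_mean = np.nanmean(X, axis=0, keepdims=True)     # column (item) means
    overall_mean = np.nanmean(X)
    fitted = person_mean + item_mean - overall_mean      # two-way prediction
    resid_sd = np.nanstd(X - fitted)                     # spread of observed residuals
    miss = np.isnan(X)
    draws = fitted[miss] + rng.normal(0.0, resid_sd, size=miss.sum())
    # Round to the nearest admissible rating-scale category.
    X[miss] = np.clip(np.rint(draws), categories[0], categories[-1])
    return X
```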

3.
Dependent and often incomplete outcomes are commonly found in longitudinal biomedical studies. We develop a likelihood function that models the autoregressive process of outcomes while incorporating the limit-of-detection problem and the probability of drop-out. The proposed approach reflects the characteristics of longitudinal data in biomedical research and allows us to carry out powerful tests for a difference between study populations in terms of growth rate and drop-out rate. The formal notation of the likelihood function is developed so that the proposed method can be adapted easily to different scenarios in terms of the number of groups to compare and a variety of growth trend patterns. Useful inferential properties of the proposed method are established, taking advantage of many well-developed theorems regarding the likelihood approach. A broad Monte Carlo study confirms the asymptotic results and illustrates the good power properties of the proposed method. We apply the proposed method to three data sets obtained from mouse tumor experiments.

4.
A general framework is proposed for modelling clustered mixed outcomes. A mixture of generalized linear models is used to describe the joint distribution of a set of underlying variables, and an arbitrary function relates the underlying variables to the observed outcomes. The model accommodates multilevel data structures, general covariate effects, and distinct link functions and error distributions for each underlying variable. Within the proposed framework, novel models are developed for clustered multiple binary, unordered categorical, and joint discrete and continuous outcomes. A Markov chain Monte Carlo sampling algorithm is described for estimating the posterior distributions of the parameters and latent variables. Because of the flexibility of the modelling framework and estimation procedure, extensions to ordered categorical outcomes and more complex data structures are straightforward. The methods are illustrated using data from a reproductive toxicity study.

5.
In an attempt to identify similarities between methods for estimating a mean function with different types of response or observation processes, we explore a general theoretical framework for nonparametric estimation of the mean function of a response process subject to incomplete observations. Special cases of the response process include quantitative responses and discrete state processes such as survival processes, counting processes and alternating binary processes. The incomplete data are assumed to arise from a general response-independent observation process, which includes right-censoring, interval censoring, periodic observation, and mixtures of these as special cases. We explore two criteria for defining nonparametric estimators, one based on the sample mean of the available data and the other inspired by the construction of the Kaplan-Meier (or product-limit) estimator [J. Am. Statist. Assoc. 53 (1958) 457] for right-censored survival data. We show that under regularity conditions the estimated mean functions resulting from both criteria are consistent and converge weakly to Gaussian processes, and we provide consistent estimators of their covariance functions. We then evaluate these general criteria for specific response and observation processes, showing how they lead to familiar estimators in some cases and to new estimators in others. We illustrate the latter with data from a recently completed AIDS clinical trial.
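Since the second criterion is built in the spirit of the Kaplan-Meier (product-limit) estimator, here is a minimal sketch of that standard estimator for right-censored survival data. It is background only, not the paper's general estimator; the function and argument names are illustrative.

```python
import numpy as np

def kaplan_meier(time, event):
    """time: observed times; event: 1 if failure, 0 if right-censored.
    Returns a list of (failure time, estimated survival) pairs."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    t_unique = np.unique(time[event == 1])      # distinct failure times
    surv, s = [], 1.0
    for t in t_unique:
        n_at_risk = np.sum(time >= t)           # still under observation just before t
        d = np.sum((time == t) & (event == 1))  # failures at t
        s *= 1.0 - d / n_at_risk                # product-limit update
        surv.append((t, s))
    return surv
```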

6.
The case-cohort study design is widely used to reduce cost when collecting expensive covariates in large cohort studies with survival or competing risks outcomes. A case-cohort dataset consists of two parts: (a) a random sample of the cohort and (b) all cases, i.e. failures from the specific cause of interest. Clinicians often assess covariate effects on competing risks outcomes. The proportional subdistribution hazards model directly evaluates the effect of a covariate on the cumulative incidence function under the assumption of non-covariate-dependent censoring for the full cohort. However, this assumption is often violated in biomedical studies. In this article, we propose a proportional subdistribution hazards model for stratified case-cohort data with a covariate-adjusted censoring weight. We further propose an efficient estimator when extra information from the other causes is available in the case-cohort study. The proposed estimators are shown to be consistent and asymptotically normal. Simulation studies show that (a) the proposed estimator is unbiased when the censoring distribution depends on covariates and (b) the proposed efficient estimator gains estimation efficiency by using extra information from the other causes. We analyze a bone marrow transplant dataset and a coronary heart disease dataset using the proposed method.

7.
Stock return volatility has the typical character of a continuous function, and analysing it within the framework of continuous dynamic functions can uncover deeper information that existing discrete methods cannot reveal. From the perspective of continuous dynamic functions, this paper studies the pattern classes and time-segment characteristics of return volatility for the constituent stocks of the SSE 50 Index. First, driven solely by the information in the actual discrete observations, the intrinsic return-volatility functions implicit in the data are reconstructed. Next, functional principal components are used to orthogonally decompose the dominant trends of the return-volatility functions, and, building on this principal component dimension reduction without loss of core information, an adaptive-weight clustering analysis is introduced to objectively partition the volatility patterns of the return functions into classes. Finally, functional analysis of variance is used to test the significance and robustness of the volatility differences between the classes of return functions, and, based on a periodic segmentation of the volatility functions, the way the volatility of each class shifts across time segments is displayed graphically and examined visually. The study finds that the dominant trend of return volatility of Shanghai Composite stocks can be decomposed into four sub-patterns, that the 50 stocks fall into five distinct volatility-pattern classes, and that the differences among the five classes appear mainly in the initial stage of the study period. The paper broadens the research perspective on classifying stock-return volatility patterns and analysing the sources of their differences, and can provide empirical support for regulators' policy-making and for portfolio allocation in securities markets.
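A minimal sketch of the discretized functional principal component step described above, assuming the reconstructed return-volatility curves are already evaluated on a common time grid. The basis reconstruction, adaptive-weight clustering and functional ANOVA steps are not reproduced, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def functional_pca(curves, n_components=4):
    """curves: array of shape (n_stocks, n_timepoints) of return-volatility curves."""
    X = np.asarray(curves, float)
    mean_curve = X.mean(axis=0)
    Xc = X - mean_curve                          # centred curves
    cov = Xc.T @ Xc / (X.shape[0] - 1)           # discretized covariance surface
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_components]
    components = eigvec[:, order]                # discretized eigenfunctions
    scores = Xc @ components                     # FPC scores, usable for clustering
    return mean_curve, components, scores
```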

8.
The 2 × 2 crossover trial uses subjects as their own controls to reduce intersubject variability in the treatment comparison, and typically requires fewer subjects than a parallel design. The generalized estimating equations (GEE) methodology has been commonly used to analyze incomplete discrete outcomes from crossover trials. We propose a unified approach to power and sample size determination for the Wald Z-test and t-test from GEE analysis of paired binary, ordinal and count outcomes in crossover trials. The proposed method allows misspecification of the variance and correlation of the outcomes, missing outcomes, and adjustment for the period effect. We demonstrate that misspecification of the working variance and correlation functions leads to no or minimal efficiency loss in GEE analysis of paired outcomes. In general, GEE requires the assumption of missing completely at random. For bivariate binary outcomes, we show by simulation that the GEE estimate is asymptotically unbiased or only minimally biased, and that the proposed sample size method is suitable under missing at random (MAR) if the working correlation is correctly specified. The performance of the proposed method is illustrated with several numerical examples. Adaptation of the method to other paired outcomes is discussed.
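For orientation, the sketch below shows only the generic sample-size formula behind a two-sided Wald Z-test, n = (z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2, where sigma^2 is the per-subject (robust) variance of the GEE treatment-effect estimate and delta the effect to detect. The paper's own procedure also handles missing outcomes, the period effect and the t-test; those refinements are not reproduced, and the numbers in the usage note are purely illustrative.

```python
from scipy.stats import norm

def wald_z_sample_size(delta, sigma2, alpha=0.05, power=0.80):
    """Generic Wald Z-test sample size for detecting effect delta."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_b = norm.ppf(power)           # quantile for the target power
    return (z_a + z_b) ** 2 * sigma2 / delta ** 2

# Example: detecting an effect of 0.5 with per-subject variance 4.0
# wald_z_sample_size(0.5, 4.0)  ->  about 126 subjects
```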

9.
When more than two treatments are under comparison, the incomplete block crossover design (IBCD) can be used to reduce the number of patients needed relative to a parallel groups design and to shorten the duration of a crossover trial. We develop an asymptotic procedure for simultaneously testing the equality of two treatments versus a control treatment (or placebo) in frequency data under the IBCD with two periods. We derive a sample size calculation procedure for the desired power of detecting given treatment effects at a nominal level, and suggest a simple ad hoc adjustment to improve the accuracy of the sample size determination when the resulting minimum required number of patients is not large. We employ Monte Carlo simulation to evaluate the finite-sample performance of the proposed test, the accuracy of the sample size calculation procedure, and that of the procedure with the ad hoc adjustment. We use data taken from a crossover trial comparing the number of exacerbations under salbutamol or salmeterol versus a placebo in asthma patients to illustrate the sample size calculation procedure.

10.
In this paper, we propose an estimation method for incomplete sample data. We decompose the likelihood according to the missing-data patterns and combine the estimators based on each component likelihood, weighting by the Fisher information ratio. This approach provides a simple way of estimating parameters, especially for non-monotone missing data. Numerical examples are presented to illustrate the method.
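A hedged sketch of the combination step for a scalar parameter: estimates obtained separately from each missing-data pattern are averaged with weights proportional to their estimated Fisher information. The scalar case, the independence assumption behind the variance, and the names below are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def combine_by_information(estimates, informations):
    """estimates[j], informations[j]: estimate and Fisher information from pattern j."""
    theta = np.asarray(estimates, float)
    info = np.asarray(informations, float)
    weights = info / info.sum()            # Fisher information ratios
    combined = np.sum(weights * theta)
    combined_var = 1.0 / info.sum()        # variance if the patterns are independent
    return combined, combined_var
```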

11.
In a case–cohort design, a random sample from the study cohort, referred to as the subcohort, and all cases outside the subcohort are selected for collecting extra covariate data. The union of the selected subcohort and all cases is referred to as the case–cohort set. Such a design is generally employed when collecting information on an extra covariate for the whole study cohort is expensive. An advantage of the case–cohort design over the more traditional case–control and nested case–control designs is that it provides a set of controls that can be used for multiple end-points, in which case there is information on some covariates and event follow-up for the whole study cohort. Here, we propose a Bayesian approach that analyses such a case–cohort design as a cohort design with incomplete data on the extra covariate. We construct likelihood expressions when multiple end-points are of interest simultaneously and propose a Bayesian data augmentation method to estimate the model parameters. A simulation study is carried out to illustrate the method, and the results are compared with the complete cohort analysis.

12.
We review the Fisher scoring and EM algorithms for incomplete multivariate data from an estimating function point of view, and examine the corresponding quasi-score functions under second-moment assumptions. A bias-corrected REML-type estimator for the covariance matrix is derived, and the Fisher, Godambe and empirical sandwich information matrices are compared. We investigate the two algorithms numerically and compare them with a hybrid algorithm in which Fisher scoring is used for the mean vector and the EM algorithm for the covariance matrix.

13.
In competing risks analysis, most inference procedures have been developed for continuous failure time data. However, failure times are sometimes observed only as discrete values. We propose nonparametric inferences for the cumulative incidence function for purely discrete data with competing risks. When covariate information is available, we propose semiparametric inferences for direct regression modelling of the cumulative incidence function for grouped discrete failure time data with competing risks. Simulation studies show that the procedures perform well. The proposed methods are illustrated with a study of contraceptive use in Indonesia.
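As standard background for the quantity being estimated, the sketch below implements the usual nonparametric cumulative incidence estimator for discrete failure times with competing risks and right censoring, F_k(t) = sum over s <= t of S(s-) d_k(s) / n(s), with S the overall survivor function. It is not the paper's exact proposal, and the names are illustrative.

```python
import numpy as np

def cumulative_incidence(time, cause, k):
    """time: discrete observed times; cause: 0 = censored, 1, 2, ... = failure cause;
    k: cause of interest. Returns a list of (time, estimated CIF) pairs."""
    time, cause = np.asarray(time), np.asarray(cause)
    grid = np.unique(time[cause > 0])           # distinct failure times, ascending
    surv_prev, cif, out = 1.0, 0.0, []
    for t in grid:
        n_at_risk = np.sum(time >= t)
        d_all = np.sum((time == t) & (cause > 0))
        d_k = np.sum((time == t) & (cause == k))
        cif += surv_prev * d_k / n_at_risk      # jump of the cause-k incidence
        surv_prev *= 1.0 - d_all / n_at_risk    # update overall survival S(t)
        out.append((t, cif))
    return out
```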

14.
In this paper, we introduce a partially linear single-index additive hazards model for current status data. Both the unknown link function of the single-index term and the cumulative baseline hazard function are approximated by B-splines, under a monotonicity constraint on the latter. The sieve method is applied to estimate the nonparametric and parametric components simultaneously. We show that, when the nonparametric link function is an exact B-spline, the resulting estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound, and that the rate of convergence of the estimator of the cumulative baseline hazard function is optimal. Simulation studies are presented to examine the finite-sample performance of the proposed estimation method. For illustration, we apply the method to a clinical dataset with a current status outcome.

15.
To help design vaccines for acquired immune deficiency syndrome that protect broadly against many genetic variants of the human immunodeficiency virus, the mutation rates at 118 positions in HIV amino-acid sequences of subtype C were compared with those of subtype B. The false discovery rate (FDR) multiple-comparisons procedure can be used to determine statistical significance. When the test statistics have discrete distributions, the FDR procedure can be made more powerful by a simple modification. The paper develops a modified FDR procedure for discrete data and applies it to the human immunodeficiency virus data. The new procedure detects 15 positions with significantly different mutation rates, compared with 11 detected by the original FDR method. Simulations delineate conditions under which the modified FDR procedure confers large gains in power over the original technique. In general, FDR adjustment methods can be improved for discrete data by incorporating the proposed modification.
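For readers unfamiliar with the baseline being improved upon, here is a minimal sketch of the original Benjamini-Hochberg FDR step-up procedure. The paper's modification for discrete test statistics, which exploits the attainable p-value distributions to gain power, is not reproduced here.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean array marking the p-values rejected at FDR level q."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m      # q * i / m for ranks i = 1..m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()            # largest i with p_(i) <= q*i/m
        reject[order[: k + 1]] = True
    return reject
```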

16.
This article considers linear social interaction models under incomplete information that allow for missing outcome data due to sample selection. For model estimation, assuming that each individual forms his/her belief about the other members’ outcomes based on rational expectations, we propose a two-step series nonlinear least squares estimator. Both the consistency and asymptotic normality of the estimator are established. As an empirical illustration, we apply the proposed model and method to National Longitudinal Study of Adolescent Health (Add Health) data to examine the impacts of friendship interactions on adolescents’ academic achievements. We provide empirical evidence that the interaction effects are important determinants of grade point average and that controlling for sample selection bias has certain impacts on the estimation results. Supplementary materials for this article are available online.

17.
We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A “newbie” algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a smoothing spline ANOVA penalized likelihood model, a support vector machine, or any model that will admit reproducing kernel Hilbert space components, for nonparametric regression, supervised learning, or semisupervised learning. Future work and open questions are discussed. The papers are:

18.
Multiple imputation has emerged as a widely used model-based approach for dealing with incomplete data in many application areas. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing data settings that include a mix of continuous and discrete variables, correct specification of the imputation model can be a daunting task owing to the lack of flexible models for the joint distribution of variables of different nature. This complication, along with access to software packages that carry out multiple imputation under the assumption of joint multivariate normality, appears to encourage applied researchers to pragmatically treat discrete variables as continuous for imputation purposes and subsequently round the imputed values to the nearest observed category. In this article, I introduce a distance-based rounding approach for ordinal variables in the presence of continuous ones. The first step of the proposed rounding process is to create indicator variables that correspond to the ordinal levels, followed by jointly imputing all variables under the assumption of multivariate normality. The imputed values are then converted to the ordinal scale based on their Euclidean distances to a set of indicators, with the minimal distance corresponding to the closest match. I compare the performance of this technique to crude rounding via commonly accepted accuracy and precision measures using simulated data sets.
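A hedged sketch of the distance-based rounding step described above: an ordinal variable is represented by level indicators before joint multivariate-normal imputation, and each imputed (continuous) indicator vector is mapped back to the ordinal level whose canonical indicator pattern is closest in Euclidean distance. The imputation step itself is assumed to have been done elsewhere, and the names are illustrative.

```python
import numpy as np

def distance_based_round(imputed_indicators, n_levels):
    """imputed_indicators: array (n, n_levels) of continuous imputed values for
    the level-indicator columns of one ordinal variable."""
    Z = np.asarray(imputed_indicators, float)
    templates = np.eye(n_levels)                 # canonical indicator pattern per level
    # Euclidean distance from each imputed row to each level's template
    d = np.linalg.norm(Z[:, None, :] - templates[None, :, :], axis=2)
    return d.argmin(axis=1) + 1                  # ordinal levels coded 1..n_levels
```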

19.
We consider local linear generalized method of moments (GMM) estimation of functional coefficient models with a mix of discrete and continuous data and in the presence of endogenous regressors. We establish the asymptotic normality of the estimator and derive the optimal instrumental variable that minimizes the asymptotic variance-covariance matrix among the class of all local linear GMM estimators. Data-dependent bandwidth sequences are also allowed for. We propose a nonparametric test for the constancy of the functional coefficients, study its asymptotic properties under the null hypothesis as well as under sequences of local alternatives and global alternatives, and propose a bootstrap version of the test. Simulations are conducted to evaluate both the estimator and the test. An application to the 1985 Australian Longitudinal Survey data indicates a clear rejection of the null hypothesis of a constant rate of return to education, and suggests that the returns to education obtained in earlier studies tend to be overestimated at all levels of work experience.

20.
We propose a flexible functional approach for jointly modelling generalized longitudinal data and survival time using principal components. In the proposed model the longitudinal observations can be continuous or categorical, for example Gaussian, binomial or Poisson outcomes. We thereby generalize traditional joint models, which treat categorical data, such as CD4 counts, as continuous after transformation. The proposed model is data-adaptive: it does not require pre-specified functional forms for the longitudinal trajectories and automatically detects characteristic patterns. The longitudinal trajectories, observed with measurement error or random error, are represented by flexible basis functions through a possibly nonlinear link function, combined with the dimension reduction obtained from functional principal component (FPC) analysis. The relationship between the longitudinal process and the event history is assessed using a Cox regression model. Although the proposed model inherits the flexibility of nonparametric methods, the estimation procedure based on the EM algorithm is still parametric in computation, and thus simple and easy to implement. The computation is simplified by dimension reduction for the random coefficients, i.e. the FPC scores. An iterative selection procedure based on the Akaike information criterion (AIC) is proposed to choose tuning parameters, such as the knots of the spline basis and the number of FPCs, so that an appropriate degree of smoothness and fluctuation can be captured. The effectiveness of the proposed approach is illustrated through a simulation study, followed by an application to longitudinal CD4 counts and survival data collected in a recent clinical trial comparing the efficacy and safety of two antiretroviral drugs.
