1.
Recent research on appropriate composite endpoints for preclinical Alzheimer's disease has devoted considerable effort to finding “optimized” weights in the construction of a weighted composite score. In this paper, several proposed methods are reviewed. Our results indicate no evidence that these methods will increase the power of the test statistics, and some of these weights will introduce biases into the study. Our recommendation is to focus on identifying more sensitive items from clinical practice and on appropriate statistical analyses of a large Alzheimer's data set. Once a set of items has been selected, there is no evidence that adding weights will generate more sensitive composite endpoints.
2.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood‐based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group than in the control group, suggesting that the treatment paroxetine is effective. Copyright © 2014 John Wiley & Sons, Ltd.
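As a rough, simplified illustration of the responder/non-responder idea (not the authors' longitudinal mixture model), the Python sketch below fits a two-component Gaussian mixture to end-of-trial change scores within each arm and then compares responder proportions between arms. The column names and simulated values are hypothetical.

```python
# Simplified sketch: two-component mixture per arm on change scores,
# then a two-proportion z-test on the estimated responder proportions.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "arm": np.repeat(["placebo", "drug"], n),
    # simulated change scores: a responder subgroup exists in both arms
    "change": np.concatenate([
        np.where(rng.random(n) < 0.3, rng.normal(-8, 3, n), rng.normal(-1, 3, n)),
        np.where(rng.random(n) < 0.5, rng.normal(-8, 3, n), rng.normal(-1, 3, n)),
    ]),
})

counts, nobs = [], []
for arm, sub in df.groupby("arm"):
    gm = GaussianMixture(n_components=2, random_state=0).fit(sub[["change"]])
    responder_comp = np.argmin(gm.means_.ravel())   # larger improvement = responder
    labels = gm.predict(sub[["change"]])
    counts.append(int(np.sum(labels == responder_comp)))
    nobs.append(len(sub))

# two-proportion z-test for the difference in responder proportions
stat, pval = proportions_ztest(counts, nobs)
print(dict(zip(sorted(df["arm"].unique()), [c / m for c, m in zip(counts, nobs)])), pval)
```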
3.
Hoai-Thu Thai, France Mentré, Nicholas H.G. Holford, Christine Veyrat-Follet, Emmanuelle Comets. Pharmaceutical Statistics, 2013, 12(3): 129-140
A version of the nonparametric bootstrap that resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating the uncertainty of parameters in mixed‐effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap that resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi‐level and longitudinal data. However, few studies have been performed to compare these different approaches. In this study, we used simulation to evaluate the bootstrap methods proposed for linear mixed‐effects models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals and the bootstraps combining case and residuals performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and a poorer coverage rate for variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real‐life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
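A minimal sketch of the case (subject-level) bootstrap for a linear mixed-effects model is shown below, assuming a long-format data frame with hypothetical columns subject, time, and y: each replicate resamples whole subjects with replacement, refits the model with statsmodels, and a percentile interval summarizes the uncertainty of the fixed slope. It illustrates the general idea only, not the authors' simulation setup.

```python
# Case bootstrap for a linear mixed-effects model (random intercept and slope).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def case_bootstrap_ci(df, n_boot=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    subjects = df["subject"].unique()
    slopes = []
    for _ in range(n_boot):
        sampled = rng.choice(subjects, size=len(subjects), replace=True)
        # relabel resampled subjects so duplicates are treated as distinct clusters
        boot = pd.concat(
            [df[df["subject"] == s].assign(subject=i) for i, s in enumerate(sampled)],
            ignore_index=True,
        )
        fit = smf.mixedlm("y ~ time", boot, groups=boot["subject"],
                          re_formula="~time").fit(reml=True)
        slopes.append(fit.fe_params["time"])
    lo, hi = np.percentile(slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi   # percentile bootstrap CI for the fixed slope
```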
4.
Craig H. Mallinckrodt, John G. Watkin, Geert Molenberghs, Raymond J. Carroll. Pharmaceutical Statistics, 2004, 3(3): 161-169
Missing data, and the bias they can cause, are an almost ever‐present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood‐based, mixed‐effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood‐based, mixed‐effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
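For concreteness, the sketch below contrasts the two strategies described above on a hypothetical long-format trial data frame (columns subject, visit, treatment, y): LOCF imputation followed by an endpoint analysis, versus a likelihood-based mixed-model analysis of the observed data, which is valid under MAR. A random intercept via statsmodels' MixedLM stands in for a full MMRM with unstructured covariance, so this is an illustration rather than a production analysis.

```python
# LOCF endpoint analysis vs. a likelihood-based mixed-model analysis under MAR.
import pandas as pd
import statsmodels.formula.api as smf

def locf_endpoint_analysis(df):
    # carry each subject's last observed value forward and analyse the endpoint
    last = (df.dropna(subset=["y"])
              .sort_values("visit")
              .groupby("subject", as_index=False)
              .last())
    return smf.ols("y ~ treatment", data=last).fit()

def mixed_model_analysis(df):
    # analyse all observed data; missing visits are simply absent rows
    observed = df.dropna(subset=["y"])
    model = smf.mixedlm("y ~ treatment * C(visit)", observed,
                        groups=observed["subject"])
    return model.fit(reml=True)
```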
5.
Luoxi Shi, Dorothy K. Hatsukami, Joseph S. Koopmeiners, Chap T. Le, Neal L. Benowitz, Eric C. Donny, Xianghua Luo. Pharmaceutical Statistics, 2021, 20(6): 1249-1264
A simple approach for analyzing longitudinally measured biomarkers is to calculate summary measures such as the area under the curve (AUC) for each individual and then compare the mean AUC between treatment groups using methods such as the t test. This two-step approach is difficult to implement when there are missing data, since the AUC cannot be directly calculated for individuals with missing measurements. Simple methods for dealing with missing data include the complete case analysis and imputation. A recent study showed that the estimated mean AUC difference between treatment groups based on the linear mixed model (LMM), rather than on individually calculated AUCs by simple imputation, has negligible bias under the missing at random assumption and only small bias when data are missing not at random. However, this model assumes the outcome to be normally distributed, which is often violated in biomarker data. In this paper, we propose to use an LMM on log-transformed biomarkers, based on which statistical inference for the ratio, rather than the difference, of AUC between treatment groups is provided. The proposed method can not only handle potential baseline imbalance in a randomized trial but also circumvent the estimation of the nuisance variance parameters in the log-normal model. The proposed model is applied to a recently completed large randomized trial studying the effect of nicotine reduction on biomarkers of exposure in smokers.
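The sketch below is a rough illustration of the two routes contrasted above, under simplifying assumptions: the two-step route computes each subject's trapezoidal AUC (only possible for complete cases here) and t-tests the group means, while the model-based route fits a linear mixed model to log-transformed biomarker values and back-transforms a time-averaged treatment contrast into a ratio-type summary. The columns subject, time, treatment, and biomarker are hypothetical, and this is not the authors' exact estimator.

```python
# Two-step AUC analysis vs. a mixed model on log-transformed biomarkers.
import numpy as np
import pandas as pd
from scipy import stats
from scipy.integrate import trapezoid
import statsmodels.formula.api as smf

def two_step_auc_ttest(df):
    # keep only subjects with no missing biomarker values (complete cases)
    complete = df.groupby("subject").filter(lambda g: g["biomarker"].notna().all())
    auc = (complete.sort_values("time")
                   .groupby(["subject", "treatment"])
                   .apply(lambda g: trapezoid(g["biomarker"], g["time"]))
                   .rename("auc")
                   .reset_index())
    return stats.ttest_ind(auc.loc[auc["treatment"] == 1, "auc"],
                           auc.loc[auc["treatment"] == 0, "auc"])

def lmm_auc_ratio(df):
    observed = df.dropna(subset=["biomarker"]).copy()
    observed["logy"] = np.log(observed["biomarker"])
    fit = smf.mixedlm("logy ~ treatment * C(time)", observed,
                      groups=observed["subject"]).fit(reml=True)
    # time-averaged treatment effect on the log scale, back-transformed to a ratio
    inter = [k for k in fit.fe_params.index if k.startswith("treatment:")]
    n_times = observed["time"].nunique()
    avg_log_effect = fit.fe_params["treatment"] + fit.fe_params[inter].sum() / n_times
    return float(np.exp(avg_log_effect))
```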
6.
Anuradha Roy. Journal of Applied Statistics, 2008, 35(3): 307-320
The number of parameters in a linear mixed effects (LME) model mushrooms in the case of multivariate repeated measures data, and computing them becomes a real problem as the number of response variables or the number of time points increases. The problem becomes more intricate when additional random effects are included, and a full multivariate analysis is not possible in a small-sample setting. We propose a method to estimate these many parameters in pieces from "baby" models, each fitted to a subset of response variables at a time, and then to combine these pieces to obtain the parameter estimates for the "mother" model with all variables taken together. Applying this method, one can calculate the fixed effects, the best linear unbiased predictions (BLUPs) for the random effects in the model, and also the BLUPs at each time of observation for each response variable, to monitor the effectiveness of the treatment for each subject. The proposed method is illustrated with an example of multiple response variables measured over multiple time points arising from a clinical trial in osteoporosis.
7.
In this article, we extend the widely used Bland-Altman graphical technique for comparing two measurements in clinical studies to include an analytical approach based on a linear mixed model. The proposed statistical inferences can be conducted easily with commercially available statistical software such as SAS. The linear mixed model approach is illustrated using a real example from a clinical nursing study of oxygen saturation measurements, in which functional oxygen saturation was compared with fractional oxyhemoglobin.
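A brief sketch of the idea follows: the classical Bland-Altman limits of agreement on paired differences, plus a mixed-model version with a subject random effect that accommodates repeated measurements per patient. The columns subject, method_a, and method_b are hypothetical, and this Python illustration is not the authors' SAS implementation.

```python
# Bland-Altman agreement: classical limits vs. a mixed-model estimate of bias.
import pandas as pd
import statsmodels.formula.api as smf

def bland_altman_limits(df):
    diff = df["method_a"] - df["method_b"]
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def bland_altman_mixed(df):
    # intercept-only mixed model on the differences: the fixed intercept is the
    # mean bias, with a subject random effect for repeated measurements
    data = df.assign(difference=df["method_a"] - df["method_b"])
    fit = smf.mixedlm("difference ~ 1", data, groups=data["subject"]).fit(reml=True)
    bias = fit.fe_params["Intercept"]
    se = fit.bse_fe["Intercept"]
    return bias, (bias - 1.96 * se, bias + 1.96 * se)
```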
8.
Sanjoy K. Sinha. Revue canadienne de statistique, 2006, 34(2): 261-278
The author develops a robust quasi‐likelihood method, which appears to be useful for down‐weighting any influential data points when estimating the model parameters. He illustrates the computational issues of the method in an example. He uses simulations to study the behaviour of the robust estimates when data are contaminated with outliers, and he compares these estimates to those obtained by the ordinary quasi‐likelihood method.
9.
Divan Aristo Burger, Robert Schall, Rianne Jacobs, Ding-Geng Chen. Pharmaceutical Statistics, 2019, 18(4): 420-432
In this paper, we investigate Bayesian generalized nonlinear mixed‐effects (NLME) regression models for zero‐inflated longitudinal count data. The methodology is motivated by and applied to colony forming unit (CFU) counts in extended bactericidal activity tuberculosis (TB) trials. Furthermore, for model comparisons, we present a generalized method for calculating the marginal likelihoods required to determine Bayes factors. A simulation study shows that the proposed zero‐inflated negative binomial regression model has good accuracy, precision, and credibility interval coverage. In contrast, conventional normal NLME regression models applied to log‐transformed count data, which handle zero counts as left-censored values, may yield credibility intervals that undercover the true bactericidal activity of anti‐TB drugs. We therefore recommend that zero‐inflated NLME regression models be fitted to CFU counts on the original scale, as an alternative to conventional normal NLME regression models on the logarithmic scale.
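As a heavily simplified, frequentist stand-in for this idea, the sketch below fits a zero-inflated negative binomial model to raw counts instead of a normal model on log-transformed counts with zeros treated as censored. The Bayesian nonlinear mixed-effects structure of the paper is not reproduced, and the column names time and cfu are hypothetical.

```python
# Zero-inflated negative binomial fit to raw CFU counts (no random effects).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

def fit_zinb(df):
    y = df["cfu"].to_numpy()
    X = sm.add_constant(df[["time"]])
    # constant-only inflation part; p=2 gives the NB2 parameterization
    model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((len(y), 1)), p=2)
    return model.fit(maxiter=200, disp=False)
```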
10.
In this article, we apply a Bayesian approach, implemented with the Markov chain Monte Carlo (MCMC) method, to linear mixed effects models with autoregressive (AR(p)) random errors under mixture priors. The mixture structure of a point mass and a continuous distribution helps to select the variables in the fixed and random effects from the posterior sample generated by MCMC. Bayesian prediction of future observations is also a major concern. To select the best model, we consider the commonly used highest posterior probability model and the median posterior probability model; the simulation study suggests that both criteria are needed to choose the best model. In terms of predictive accuracy, a real example confirms that the proposed method provides accurate results.
11.
As researchers increasingly rely on linear mixed models to characterize longitudinal data, there is a need for improved techniques for selecting among this class of models, which requires specification of both fixed and random effects via a mean model and a variance-covariance structure. The process is further complicated when fixed and/or random effects are non-nested between models. This paper explores the development of a hypothesis test to compare non-nested linear mixed models based on extensions of the work begun by Sir David Cox. We assess the robustness of this approach for comparing models containing correlated measures of body fat for predicting longitudinal cardiometabolic risk.
12.
Sunanda Bagchi Mukhopadhyay. Communications in Statistics - Theory and Methods, 2013, 42(12): 3565-3576
In the present paper, under the assumption of a mixed effects model with random block effects, the type 1 optimality of the most balanced group divisible designs of type 1 has been established within the general class of all proper and connected block designs with k < v.
13.
In this study, a generalized, multi-stage adjusted, latent class linear mixed model is proposed for modeling a heterogeneously distributed phenotype and genetic information across the whole genome in the presence of both serial and familial correlations. Genome data from the Genetic Analysis Workshop (GAW) were analyzed with the proposed model, and its results were compared with those of standard models. Moreover, the potential of the model is discussed using simulated data. In the model comparisons, the information criteria and the genomic control parameter were found to be smaller for the proposed model, and the results of a power analysis show that the proposed model is more powerful.
14.
Kuo-Chin Lin. Journal of Applied Statistics, 2016, 43(11): 2053-2064
Categorical longitudinal data arise frequently in a variety of fields and are commonly fitted by generalized linear mixed models (GLMMs) and generalized estimating equations models. The cumulative logit is a useful link function for dealing with repeated ordinal responses. To check the adequacy of GLMMs with a cumulative logit link function, two goodness-of-fit tests constructed from the unweighted sum of squared model residuals are proposed, using numerical integration and a bootstrap resampling technique. The empirical type I error rates and powers of the proposed tests are examined by simulation studies. Ordinal longitudinal studies are used to illustrate the application of the two proposed tests.
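A simplified sketch of the test idea follows: the statistic is the unweighted sum of squared (response) residuals, and its null distribution is approximated by a parametric bootstrap that re-simulates outcomes from the fitted model. For brevity the sketch uses ordinary logistic regression rather than a cumulative-logit GLMM, so it only illustrates the calibration strategy; X and y are a hypothetical design matrix and binary response.

```python
# Goodness-of-fit via the unweighted sum of squared residuals, bootstrap-calibrated.
import numpy as np
import statsmodels.api as sm

def ussr_gof_pvalue(X, y, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    Xc = sm.add_constant(X)
    fit = sm.GLM(y, Xc, family=sm.families.Binomial()).fit()
    p_hat = fit.fittedvalues
    t_obs = np.sum((y - p_hat) ** 2)            # unweighted sum of squared residuals
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        y_sim = rng.binomial(1, p_hat)          # simulate from the fitted model
        fit_b = sm.GLM(y_sim, Xc, family=sm.families.Binomial()).fit()
        t_boot[b] = np.sum((y_sim - fit_b.fittedvalues) ** 2)
    return np.mean(t_boot >= t_obs)             # bootstrap p-value
```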
15.
Markov regression models are useful tools for estimating the impact of risk factors on rates of transition between multiple disease states. Alzheimer's disease (AD) is an example of a multi-state disease process in which great interest lies in identifying risk factors for transition. In this context, non-homogeneous models are required because transition rates change as subjects age. In this report we propose a non-homogeneous Markov regression model that allows for reversible and recurrent disease states, transitions among multiple states between observations, and unequally spaced observation times. We conducted simulation studies to demonstrate the performance of estimators for covariate effects from this model and to compare performance with alternative models when the underlying non-homogeneous process was correctly specified and under model misspecification. In the simulation studies, we found that covariate effects were biased if non-homogeneity of the disease process was not accounted for; however, estimates from non-homogeneous models were robust to misspecification of the form of the non-homogeneity. We used our model to estimate risk factors for transition to mild cognitive impairment (MCI) and AD in a longitudinal study of subjects included in the National Alzheimer's Coordinating Center's Uniform Data Set. Using our model, we found that subjects with MCI affecting multiple cognitive domains were significantly less likely to revert to normal cognition.
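A toy sketch of a non-homogeneous multi-state Markov process is given below: transition intensities among Normal, MCI, and AD depend on age, with a log-linear covariate effect, and transition probabilities over an observation interval are obtained by chaining matrix exponentials over short age steps. The numerical rates and the covariate coefficient are illustrative only and are not estimates from the paper.

```python
# Non-homogeneous 3-state Markov process with age-dependent intensities.
import numpy as np
from scipy.linalg import expm

STATES = ["Normal", "MCI", "AD"]

def intensity_matrix(age, apoe4=0):
    # baseline intensities rise with age; a covariate multiplies progression rates
    q_nm = 0.02 * np.exp(0.05 * (age - 70) + 0.4 * apoe4)   # Normal -> MCI
    q_mn = 0.01                                              # MCI -> Normal (reversion)
    q_ma = 0.05 * np.exp(0.05 * (age - 70) + 0.4 * apoe4)   # MCI -> AD
    return np.array([[-q_nm, q_nm, 0.0],
                     [q_mn, -(q_mn + q_ma), q_ma],
                     [0.0, 0.0, 0.0]])                       # AD treated as absorbing

def transition_probabilities(age_start, age_end, apoe4=0, step=0.25):
    P = np.eye(len(STATES))
    for age in np.arange(age_start, age_end, step):
        P = P @ expm(intensity_matrix(age, apoe4) * step)
    return P  # P[i, j] = prob. of state j at age_end given state i at age_start
```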
16.
Mixed model selection is an important problem in the statistical literature. To assist with mixed model selection, we employ an adaptive LASSO penalty to propose a two-stage selection procedure for choosing both the random and fixed effects. In the first stage, we utilize the penalized restricted profile log-likelihood to choose the random effects; in the second stage, after the random effects are determined, we apply the penalized profile log-likelihood to select the fixed effects. In each stage, the Newton–Raphson algorithm is used for parameter estimation. We prove that the proposed procedure is consistent and possesses the oracle properties. Simulations and a real data application demonstrate the effectiveness of the proposed selection procedure.
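The sketch below illustrates only the adaptive LASSO penalty that underlies the fixed-effects stage: each coefficient's penalty is weighted by the inverse of an initial estimate, implemented here by rescaling the columns before an ordinary lasso fit. The random-effects stage and the penalized (restricted) profile likelihood of the paper are not reproduced; X and y are assumed to be a numeric design matrix and response vector.

```python
# Adaptive LASSO via column rescaling around an ordinary lasso fit.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    beta_init = LinearRegression().fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive penalty weights
    X_scaled = X / w                                 # column-wise rescaling
    fit = Lasso(alpha=alpha).fit(X_scaled, y)
    return fit.coef_ / w                             # back-transform coefficients
```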
17.
The mixed model is defined, and the exact posterior distributions for the fixed effect vector and for the error variance are obtained. The exact posterior mean and variance of a Bayesian estimator for the variances of the random effects are also derived. All computations are non-iterative and avoid numerical integration.
18.
Journal of Statistical Computation and Simulation, 2012, 82(18): 3413-3452
The purpose of this article is to obtain jackknifed ridge predictors in linear mixed models and to examine the superiority of the linear combinations of the jackknifed ridge predictors over the ridge, principal components regression, r-k class and Henderson's predictors in terms of bias, covariance matrix and mean square error criteria. Numerical analyses are presented to illustrate the findings, and a simulation study is conducted to assess the performance of the jackknifed ridge predictors.
19.
Journal of Statistical Computation and Simulation, 2012, 82(3): 419-430
A marginal pairwise-likelihood estimation approach is examined for the mixed Rasch model with binary responses and a logit link. This method, which belongs to the broad class of composite likelihood methods, provides estimators with desirable asymptotic properties such as consistency and asymptotic normality. We study the performance of the proposed methodology when the random effect distribution is misspecified. A simulation study was conducted to compare this approach with maximum marginal likelihood estimation. The results are also illustrated with an analysis of a real data set from a quality-of-life study.
20.
Journal of Statistical Computation and Simulation, 2012, 82(10): 1281-1296
For longitudinal time series data, linear mixed models that contain both random effects across individuals and first-order autoregressive errors within individuals may be appropriate. Statistical diagnostics for these models under a proposed elliptical error structure are developed in this work. It is well known that the class of elliptical distributions offers a more flexible framework for modelling, since it contains both light- and heavy-tailed distributions. Iterative procedures for the maximum-likelihood estimates of the model parameters are presented. Score tests for the presence of autocorrelation and for the homogeneity of autocorrelation coefficients among individuals are constructed. The properties of the test statistics are investigated through Monte Carlo simulations. A local influence method for the models is also given. The analysis of a real data set illustrates the value of the models and the diagnostic statistics.