Found 20 similar documents; search took 15 ms.
1.
Anuradha Roy 《Journal of applied statistics》2008,35(3):307-320
The number of parameters mushrooms in a linear mixed effects (LME) model in the case of multivariate repeated measures data. Computing these parameters becomes a real problem as the number of response variables or the number of time points increases, and the problem becomes more intricate still when further random effects are added. A multivariate analysis is not possible in a small sample setting. We propose a method to estimate these many parameters in bits and pieces from baby models, by taking a subset of response variables at a time, and finally using these bits and pieces to obtain the parameter estimates for the mother model, with all variables taken together. Applying this method one can calculate the fixed effects, the best linear unbiased predictions (BLUPs) for the random effects in the model, and also the BLUPs at each time of observation for each response variable, to monitor the effectiveness of the treatment for each subject. The proposed method is illustrated with an example of multiple response variables measured over multiple time points arising from a clinical trial in osteoporosis.
2.
Specific efficacy criteria were defined by the International Headache Society for controlled clinical trials on acute migraine. They are derived from the pain profile and the timing of rescue medication intake. We present a methodology to improve the analysis of such trials. Instead of analysing each endpoint separately, we model the joint distribution and derive success rates in any criteria as predictions. We use cumulative regression models for each response at a time and a multivariate normal copula to model the dependence between responses. Parameters are estimated using maximum likelihood. Benefits of the method include a reduction in the number of tests performed and an increase in their power. The method is well suited to dose–response trials from which predictions can be used to select doses and optimize the design of subsequent trials. More generally, our method permits a very flexible modelling of longitudinal series of ordinal data. Copyright © 2003 John Wiley & Sons, Ltd.
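As a rough illustration of the copula construction described in this abstract, the sketch below computes joint cell probabilities for two three-level ordinal endpoints with cumulative probit margins linked by a bivariate normal copula. The thresholds, the correlation, and the use of probit rather than logit margins are illustrative assumptions, not values from the paper.

```python
# Sketch of a Gaussian-copula model for two ordinal endpoints: cumulative
# probit margins whose dependence comes from a bivariate normal copula.
# Thresholds and correlation are arbitrary illustrative values.
import numpy as np
from scipy.stats import multivariate_normal, norm

cut1 = np.array([-np.inf, -0.5, 0.8, np.inf])   # thresholds for 3-level response 1
cut2 = np.array([-np.inf, 0.0, 1.2, np.inf])    # thresholds for 3-level response 2
rho = 0.4                                        # latent (copula) correlation
mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

def bvn_cdf(a, b):
    # P(Z1 <= a, Z2 <= b); a lower limit of -inf gives probability 0
    if (np.isinf(a) and a < 0) or (np.isinf(b) and b < 0):
        return 0.0
    return mvn.cdf([min(a, 8), min(b, 8)])       # cap +inf for the numerical CDF

# Joint probability of each (i, j) cell via rectangle probabilities
joint = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        joint[i, j] = (bvn_cdf(cut1[i + 1], cut2[j + 1]) - bvn_cdf(cut1[i], cut2[j + 1])
                       - bvn_cdf(cut1[i + 1], cut2[j]) + bvn_cdf(cut1[i], cut2[j]))

print("cell probabilities sum to", round(joint.sum(), 4))
print("marginal of response 1:", joint.sum(axis=1).round(3),
      "vs probit margins:", np.diff(norm.cdf(cut1)).round(3))
```

Summing cells over one margin reproduces the corresponding cumulative-probit marginal probabilities, which is the property that lets marginal parameters and the dependence parameter be estimated jointly by maximum likelihood.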
3.
Least squares estimation of regression parameters in mixed effects models with unmeasured covariates
We consider mixed effects models for longitudinal, repeated measures or clustered data. Unmeasured or omitted covariates in such models may be correlated with the included covariates, and create model violations when not taken into account. Previous research and experience with longitudinal data sets suggest a general form of model which should be considered when omitted covariates are likely, such as in observational studies. We derive the marginal model between the response variable and included covariates, and consider model fitting using the ordinary and weighted least squares methods, which require simple non-iterative computation and no assumptions on the distribution of random covariates or error terms. Asymptotic properties of the least squares estimators are also discussed. The results shed light on the structure of least squares estimators in mixed effects models, and provide large sample procedures for statistical inference and prediction based on the marginal model. We present an example of the relationship between fluid intake and output in very low birth weight infants, where the model is found to have the assumed structure.
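A toy sketch of the marginal-model phenomenon: when an omitted covariate correlated with the included covariate is dropped, ordinary least squares on the marginal model absorbs part of its effect into the included covariate's coefficient. The simulated clustered data and coefficient values are assumptions for illustration, not the paper's estimators.

```python
# Toy illustration: OLS on the marginal model with an omitted covariate u that
# is correlated with the included covariate x (simulated clustered data).
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_obs = 200, 5
b_subj = np.repeat(rng.normal(0, 1, n_subjects), n_obs)      # random intercepts
x = rng.normal(0, 1, n_subjects * n_obs)                      # included covariate
u = 0.6 * x + rng.normal(0, 1, len(x))                        # omitted, correlated with x
y = 2.0 + 1.0 * x + 0.5 * u + b_subj + rng.normal(0, 1, len(x))

X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)              # simple non-iterative fit
print("marginal slope for x:", round(beta_ols[1], 3))         # approx. 1.0 + 0.5 * 0.6 = 1.3
```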
4.
Statistical issues in the prospective monitoring of health outcomes across multiple units
Clare Marshall Nicky Best Alex Bottle Paul Aylin 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2004,167(3):541-559
Summary. Following several recent inquiries in the UK into medical malpractice and failures to deliver appropriate standards of health care, there is pressure to introduce formal monitoring of performance outcomes routinely throughout the National Health Service. Statistical process control (SPC) charts have been widely used to monitor medical outcomes in a variety of contexts and have been specifically advocated for use in clinical governance. However, previous applications of SPC charts in medical monitoring have focused on surveillance of a single process over time. We consider some of the methodological and practical aspects that surround the routine surveillance of health outcomes and, in particular, we focus on two important methodological issues that arise when attempting to extend SPC charts to monitor outcomes at more than one unit simultaneously (where a unit could be, for example, a surgeon, general practitioner or hospital): the need to acknowledge the inevitable between-unit variation in 'acceptable' performance outcomes due to the net effect of many small unmeasured sources of variation (e.g. unmeasured case mix and data errors) and the problem of multiple testing over units as well as time. We address the former by using quasi-likelihood estimates of overdispersion, and the latter by using recently developed methods based on estimation of false discovery rates. We present an application of our approach to the annual monitoring of 'all-cause' mortality data between 1995 and 2000 from 169 National Health Service hospital trusts in England and Wales.
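The sketch below illustrates the two ingredients in simplified form: a quasi-likelihood-style overdispersion factor applied to standardized unit-level residuals, followed by a Benjamini-Hochberg false-discovery-rate screen across units. The data are simulated and the specific choices (Poisson counts, this particular overdispersion estimate) are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch: flag outlying units while allowing overdispersion and
# controlling the false discovery rate (hypothetical data, not the authors' code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_units = 169
expected = rng.uniform(200, 2000, n_units)                          # expected deaths per trust
observed = rng.poisson(expected * rng.normal(1.0, 0.05, n_units))   # observed deaths

# Standardised residuals under the null that observed ~ Poisson(expected)
z = (observed - expected) / np.sqrt(expected)

# Quasi-likelihood-style overdispersion factor: inflate the variance so the
# average squared residual is about 1 for "in control" units
phi = max(1.0, float(np.mean(z ** 2)))
z_adj = z / np.sqrt(phi)

# Two-sided p-values and Benjamini-Hochberg false-discovery-rate screening
p = 2 * stats.norm.sf(np.abs(z_adj))
order = np.argsort(p)
m = len(p)
bh_threshold = 0.05 * np.arange(1, m + 1) / m
below = p[order] <= bh_threshold
flagged = np.zeros(m, dtype=bool)
if below.any():
    k = np.nonzero(below)[0].max()       # largest index meeting the BH criterion
    flagged[order[: k + 1]] = True

print(f"overdispersion factor: {phi:.2f}, units flagged: {flagged.sum()}")
```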
5.
William H. Crown 《Pharmaceutical statistics》2021,20(5):945-951
This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated with observational data compared to those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of structural relationships of patient features with outcomes within each treatment arm and differences in the distributions of these features across the arms. In large randomized trials, it is assumed that the distribution of features across arms is very similar. Importantly, randomization balances not only observed features but also unobserved ones. Applying high dimensional balancing methods such as propensity score matching to the observational data causes the distributional terms of the decomposition model to be eliminated, but unobserved features may still not be balanced in the observational data. Finally, a correction for non-random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial. However, there are significant challenges in identifying instrumental variables that are necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data.
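A minimal sketch of the decomposition idea, assuming a linear outcome model in each arm and simulated data: the mean outcome gap splits exactly into a part explained by differences in covariate distributions and a part due to differences in the arm-specific coefficients (the bootstrapping of the estimates mentioned above is omitted here).

```python
# Hedged sketch of the regression decomposition: fit separate outcome models per
# arm, then split the mean gap into a "covariate mix" part and a "coefficients"
# part. Data and coefficient values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x_t = np.column_stack([np.ones(n), rng.normal(0.2, 1.0, n)])   # treated-arm covariates
x_c = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, n)])   # control-arm covariates
y_t = x_t @ np.array([1.5, 0.8]) + rng.normal(0, 1, n)         # treated outcomes
y_c = x_c @ np.array([1.0, 0.5]) + rng.normal(0, 1, n)         # control outcomes

b_t, *_ = np.linalg.lstsq(x_t, y_t, rcond=None)                # arm-specific OLS fits
b_c, *_ = np.linalg.lstsq(x_c, y_c, rcond=None)

gap = y_t.mean() - y_c.mean()
explained = (x_t.mean(axis=0) - x_c.mean(axis=0)) @ b_c        # differing covariate mix
unexplained = x_t.mean(axis=0) @ (b_t - b_c)                   # differing coefficients
print(f"gap={gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```

Because each arm's OLS fit includes an intercept, the identity gap = explained + unexplained holds exactly in the sample; in a well-randomized trial the "explained" term is expected to be near zero.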
6.
Gillian A. Lancaster 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2009,172(4):707-727
Summary. The lack of outcome measures that are validated for use on children limits the effectiveness and generalizability of paediatric health care interventions. Statistical epidemiology is a broad concept encompassing a wide range of useful techniques for use in child health outcome assessment and development. However, the range of techniques that are available is often confusing and prohibits their adoption. In the paper an overview of methodology is provided within the paediatric context. It is demonstrated that in many cases assessment can be performed relatively straightforwardly by using standard statistical techniques, although sometimes more sophisticated techniques are required. Examples of both physiological and questionnaire-based outcomes are given. The usefulness of these techniques is highlighted for achieving specific objectives and ultimately for achieving methodological rigour in clinical outcome studies that are performed in the paediatric population.
7.
Data-analytic tools for models other than the normal linear regression model are relatively rare. Here we develop plots and diagnostic statistics for nonconstant variance for the random-effects model (REM). REMs for longitudinal data include both within- and between-subject variances. A basic assumption is that the two variance terms are constant across subjects. However, we often find that these variances are functions of covariates, and the data set has what we call explainable heterogeneity, which needs to be allowed for in the model. We characterize several types of heterogeneity of variance in REMs and develop three diagnostic tests using the score statistic: one for each of the two variance terms, and the third for a form of multivariate nonconstant variance. For each test we present an adjusted residual plot which can identify cases that are unusually influential on the outcome of the test.
8.
Non-likelihood-based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare performance of multiple imputation with generalized estimating equations and restricted pseudo-likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates with dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores. Copyright © 2005 John Wiley & Sons, Ltd.
9.
《Journal of Statistical Computation and Simulation》2012,82(1-4):203-223
A Monte Carlo study was used to compare the Type I error rates and power of two nonparametric tests against the F test for the single-factor repeated measures model. The performance of the nonparametric Friedman and Conover tests was investigated for different distributions, numbers of blocks and numbers of repeated measures. The results indicated that the type of distribution has little effect on the ability of the Friedman and Conover tests to control Type I error rates. For power, the Friedman and Conover tests tended to agree in rejecting the same false hypothesis when the design consisted of three repeated measures. However, the Conover test was more powerful than the Friedman test when the number of repeated measures was 4 or 5. Still, the F test is recommended for the single-factor repeated measures model because of its robustness to non-normality and its good power across a range of conditions.
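A small Monte Carlo in the same spirit, assuming normal data under the null and illustrative design sizes, checks the empirical Type I error of the Friedman test using scipy's implementation (the Conover test is not in scipy and is omitted from this sketch).

```python
# Quick Monte Carlo: empirical Type I error of the Friedman test for a
# single-factor repeated measures layout with no true treatment differences.
# Settings are illustrative, not the study's full design.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
n_blocks, n_treatments, n_sims, alpha = 20, 4, 2000, 0.05
rejections = 0
for _ in range(n_sims):
    data = rng.normal(size=(n_blocks, n_treatments))   # null: all treatments identical
    # friedmanchisquare takes one sample per treatment (measurements across blocks)
    stat, p = friedmanchisquare(*data.T)
    rejections += p < alpha

print(f"empirical Type I error: {rejections / n_sims:.3f} (nominal {alpha})")
```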
10.
The maximum likelihood equations for a multivariate normal model with structured mean and structured covariance matrix may not have an explicit solution. In some cases the model's error term may be decomposed as the sum of two independent error terms, each having a patterned covariance matrix, such that if one of the unobservable error terms is artificially treated as "missing data", the EM algorithm can be used to compute the maximum likelihood estimates for the original problem. Some decompositions produce likelihood equations which do not have an explicit solution at each iteration of the EM algorithm, but within-iteration explicit solutions are shown for two general classes of models including covariance component models used for analysis of longitudinal data.
11.
12.
Hierarchical related regression for combining aggregate and individual data in studies of socio-economic disease risk factors
Christopher Jackson Nicky Best Sylvia Richardson 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2008,171(1):159-178
Summary. To obtain information about the contribution of individual and area level factors to population health, it is desirable to use both data collected on areas, such as censuses, and on individuals, e.g. survey and cohort data. Recently developed models allow us to carry out simultaneous regressions on related data at the individual and aggregate levels. These can reduce 'ecological bias' that is caused by confounding, model misspecification or lack of information and increase power compared with analysing the data sets singly. We use these methods in an application investigating individual and area level sociodemographic predictors of the risk of hospital admissions for heart and circulatory disease in London. We discuss the practical issues that are encountered in this kind of data synthesis and demonstrate that this modelling framework is sufficiently flexible to incorporate a wide range of sources of data and to answer substantive questions. Our analysis shows that the variations that are observed are mainly attributable to individual level factors rather than the contextual effect of deprivation.
13.
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias in the estimated treatment effect when using standard MMRM analysis in a motivating case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling, to handle NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employ a categorical time variable when modeling the outcome and a continuous time variable when modeling the missingness-data patterns. The models can directly provide an overall estimate of the treatment effect of interest using the average of the distribution of the missingness indicator and a categorical time variable, in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.
14.
《Journal of Statistical Computation and Simulation》2012,82(1-2):77-92
Incomplete growth curve data often result from missing or mistimed observations in a repeated measures design. Virtually all methods of analysis rely on the dispersion matrix estimates. A Monte Carlo simulation was used to compare three methods of estimation of dispersion matrices for incomplete growth curve data. The three methods were: 1) maximum likelihood estimation with a smoothing algorithm, which finds the closest positive semidefinite estimate of the pairwise estimated dispersion matrix; 2) a mixed effects model using the EM (expectation-maximization) algorithm; and 3) a mixed effects model with the scoring algorithm. The simulation included 5 dispersion structures, 20 or 40 subjects with 4 or 8 observations per subject and 10 or 30% missing data. In all the simulations, the smoothing algorithm was the poorest estimator of the dispersion matrix. In most cases, there were no significant differences between the scoring and EM algorithms. The EM algorithm tended to be better than the scoring algorithm when the variances of the random effects were close to zero, especially for the simulations with 4 observations per subject and two random effects.
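A simplified sketch of the first method's "smoothing" step: compute a pairwise-complete covariance estimate from incomplete data, then project it to the closest positive semidefinite matrix by clipping negative eigenvalues. The simulated design and the eigenvalue-clipping projection are assumptions standing in for the algorithm used in the study.

```python
# Sketch of the "smoothing" idea only: a pairwise-complete dispersion estimate
# from incomplete data, projected to the nearest PSD matrix (Frobenius norm)
# by clipping negative eigenvalues. Design values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
true_cov = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)
y = rng.multivariate_normal(np.zeros(4), true_cov, size=40)
y[rng.random(y.shape) < 0.3] = np.nan                    # roughly 30% values missing

# Pairwise-complete covariance estimate (may be indefinite)
p = y.shape[1]
s = np.empty((p, p))
for i in range(p):
    for j in range(p):
        mask = ~np.isnan(y[:, i]) & ~np.isnan(y[:, j])
        yi, yj = y[mask, i], y[mask, j]
        s[i, j] = np.mean((yi - yi.mean()) * (yj - yj.mean()))

# "Smooth" to the closest positive semidefinite matrix
vals, vecs = np.linalg.eigh(s)
s_psd = vecs @ np.diag(np.clip(vals, 0, None)) @ vecs.T

print("min eigenvalue before:", round(vals.min(), 4),
      "after:", round(np.linalg.eigvalsh(s_psd).min(), 4))
```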
15.
In the present paper, simultaneous confidence interval estimates are constructed for the mortality measures RSMR_i, based on proportional mortality measures SPMR_i, in epidemiologic studies with several competing risks of death to which the individuals in the study are exposed. It is demonstrated that, under a reasonable assumption, the joint sampling distribution of the statistics X_i = RSMR_i/SPMR_i for M competing risks may be approximated by means of a multivariate normal distribution. Šidák's (1967, 1968) multivariate normal probability inequalities are applied to construct the simultaneous confidence intervals for the measures RSMR_i, i = 1, 2, ..., M. These are valid regardless of the covariance structure among the risks. As a particular case, if the risks may be assumed independent, our confidence intervals reduce to those for a single measure RSMR_i, which are narrower than those of Kupper et al. (1978). In this sense, our paper generalizes the results presently available in the literature in two directions: first, to obtain narrower confidence limits, and second, to discuss the case of competing risks of death irrespective of their covariance structure.
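A minimal numerical illustration of the Šidák adjustment behind such simultaneous intervals, using made-up point estimates and standard errors: each interval is computed at the per-comparison level 1 - (1 - alpha)^(1/M), so that the joint coverage is at least 1 - alpha.

```python
# Šidák-adjusted simultaneous confidence intervals for M mortality measures.
# The estimates and standard errors below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm

alpha, M = 0.05, 4
rsmr = np.array([1.10, 0.85, 1.30, 0.95])     # hypothetical point estimates
se = np.array([0.08, 0.06, 0.12, 0.07])       # hypothetical standard errors

z_single = norm.ppf(1 - alpha / 2)                        # per-interval critical value
z_sidak = norm.ppf(1 - (1 - (1 - alpha) ** (1 / M)) / 2)  # Šidák-adjusted critical value

for est, s in zip(rsmr, se):
    lo, hi = est - z_sidak * s, est + z_sidak * s
    print(f"RSMR {est:.2f}: simultaneous CI ({lo:.2f}, {hi:.2f})")
print(f"critical value: single {z_single:.3f} vs Šidák {z_sidak:.3f}")
```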
16.
Antonello Maruotti 《Journal of applied statistics》2009,36(7):709-722
The primary purpose of this paper is to comprehensively assess households’ burden due to health payments. Starting from the fairness approach developed by the World Health Organization, we analyse the burden of healthcare payments on Italian households by modeling catastrophic payments and impoverishment due to healthcare expenditures. For this purpose, we propose to extend the analysis of fairness in financing contribution through generalized linear mixed models, by introducing a bivariate correlated random effects model where the association between the outcomes is modeled through individual- and outcome-specific latent effects which are assumed to be correlated. We discuss model parameter estimation in a finite mixture context. By using such a model specification, the fairness of the Italian national health service is investigated.
17.
A basic assumption in distribution fitting is that a single family of distributions may deliver a useful representation of the universe of available distributions. To date, little study has been conducted to compare the relative effectiveness of these families. In this article, five families are compared by fitting them to a sample of 20 distributions, using two fitting objectives: minimization of the L2 norm and four-moment matching. Values of the L2 norm associated with the fitted families are used as input data to test for significant differences. The Pearson family and the RMM (Response Modeling Methodology) family significantly outperform all other families.
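A sketch of the first fitting objective under assumed choices (a lognormal target and a gamma candidate family, neither taken from the article): the family's parameters are chosen to minimize the squared L2 distance between the two densities on a grid.

```python
# L2-norm fitting objective: choose a candidate family's parameters to minimise
# the L2 distance between its density and a target density on a grid.
# Target and candidate family are arbitrary illustrative choices.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

x = np.linspace(0.01, 15, 500)
dx = x[1] - x[0]
target_pdf = stats.lognorm(s=0.6, scale=np.exp(1.0)).pdf(x)   # distribution to represent

def l2_distance(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf                                          # keep parameters valid
    candidate_pdf = stats.gamma(a=shape, scale=scale).pdf(x)
    return np.sum((candidate_pdf - target_pdf) ** 2) * dx      # squared L2 norm

result = minimize(l2_distance, x0=[2.0, 1.5], method="Nelder-Mead")
print("fitted gamma (shape, scale):", result.x.round(3), " L2^2 =", round(result.fun, 5))
```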
18.
Joel Schwartz 《Revue canadienne de statistique》1994,22(4):471-487
While most of epidemiology is observational, rather than experimental, the culture of epidemiology is still derived from agricultural experiments, rather than other observational fields, such as astronomy or economics. The mismatch is made greater as focus has turned to continuous risk factors, multifactorial outcomes, and outcomes with large variation unexplainable by available risk factors. The analysis of such data is often viewed as hypothesis testing with statistical control replacing randomization. However, such approaches often test restricted forms of the hypothesis being investigated, such as the hypothesis of a linear association, when there is no prior empirical or theoretical reason to believe that if an association exists, it is linear. In combination with the large nonstochastic sources of error in such observational studies, this suggests the more flexible alternative of exploring the association. Conclusions on the possible causal nature of any discovered association will rest on the coherence and consistency of multiple studies. Nonparametric smoothing in general, and generalized additive models in particular, represent an attractive approach to such problems. This is illustrated using data examining the relationship between particulate air pollution and daily mortality in Birmingham, Alabama; between particulate air pollution, ozone, and SO2 and daily hospital admissions for respiratory illness in Philadelphia; and between ozone and particulate air pollution and coughing episodes in children in six eastern U.S. cities. The results indicate that airborne particles and ozone are associated with adverse health outcomes at very low concentrations, and that there are likely no thresholds for these relationships.
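A hedged sketch of the flexible-exposure idea on simulated data: a Poisson regression for daily death counts with spline terms in particulate level and temperature, which relaxes the linear-association assumption in the spirit of a generalized additive model. Variable names, effect sizes, and the spline basis are invented for illustration, not taken from the studies cited above.

```python
# Flexible exposure-response sketch: Poisson regression for daily deaths with
# spline terms (patsy's bs()) in particulate level and temperature.
# Data are simulated; names and coefficients are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_days = 1000
df = pd.DataFrame({
    "pm10": rng.gamma(shape=3.0, scale=15.0, size=n_days),   # daily particulate level
    "temp": rng.normal(20, 8, size=n_days),                  # daily temperature
})
log_rate = 3.0 + 0.004 * df["pm10"] - 0.0005 * (df["temp"] - 20) ** 2
df["deaths"] = rng.poisson(np.exp(log_rate).to_numpy())

# Spline terms relax the assumption of a linear exposure-response association
model = smf.glm("deaths ~ bs(pm10, df=4) + bs(temp, df=4)",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
```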
19.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption not only affects the analysis part but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing since at the beginning we do not observe any difference between treatment arms, and after some unknown time point, the differences between treatment arms will start to appear. Hence, the proportional hazards assumption no longer holds, and both sample size calculation and analysis methods to be used should be reconsidered. The weighted log-rank test allows a weighting for early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation in terms of power and type I error rate of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects depending on certain characteristics of the trial.
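The sketch below implements a Fleming and Harrington weighted log-rank statistic from the standard formulas and applies it to simulated survival data with a delayed treatment effect; the FH(0,1) weights, the six-month delay, and the censoring rule are illustrative assumptions, not the article's simulation settings.

```python
# Self-contained sketch of the Fleming-Harrington weighted log-rank test
# (rho, gamma), applied to simulated data with a delayed treatment effect.
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Weights S(t-)**rho * (1 - S(t-))**gamma from the pooled Kaplan-Meier curve."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    num, var, s_prev = 0.0, 0.0, 1.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_prev ** rho * (1.0 - s_prev) ** gamma          # FH weight at t-
        num += w * (d1 - d * n1 / n)
        if n > 1:
            var += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_prev *= 1.0 - d / n                                # update pooled KM estimate
    return num / np.sqrt(var)                                # approximately standard normal

rng = np.random.default_rng(5)
n = 300
group = rng.integers(0, 2, n)
t_control = rng.exponential(12.0, n)
# Delayed effect: treated subjects only benefit after month 6
t_treated = np.where(t_control < 6, t_control, 6 + rng.exponential(20.0, n))
time = np.where(group == 1, t_treated, t_control)
event = (time < 24).astype(int)                              # administrative censoring at 24
time = np.minimum(time, 24.0)

z = fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0)
print(f"FH(0,1) weighted log-rank z-statistic: {z:.2f}")
```

With FH(0,1) weights the early event times, where the arms are still indistinguishable, contribute little, which is why this weighting retains power under a delayed effect.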
20.
Because of the recent regulatory emphasis on issues related to drug-induced cardiac repolarization that can potentially lead to sudden death, QT interval analysis has received much attention in the clinical trial literature. The analysis of QT data is complicated by the fact that the QT interval is correlated with heart rate and other prognostic factors. Several attempts have been made in the literature to derive an optimal method for correcting the QT interval for heart rate; however, the QT correction formulae obtained are not universal because of substantial variability observed across different patient populations. It is demonstrated in this paper that the widely used fixed QT correction formulae do not provide an adequate fit to QT and RR data and bias estimates of treatment effect. It is also shown that QT correction formulae derived from baseline data in clinical trials are likely to lead to Type I error rate inflation. This paper develops a QT interval analysis framework based on repeated-measures models accommodating the correlation between QT interval and heart rate and the correlation among QT measurements collected over time. The proposed method of QT analysis controls the Type I error rate and is at least as powerful as traditional QT correction methods with respect to detecting drug-related QT interval prolongation. Copyright © 2003 John Wiley & Sons, Ltd.
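A minimal repeated-measures sketch in the spirit of this proposal, assuming simulated data and invented variable names: QT is modeled as a function of treatment and the concurrent RR interval with a random intercept per subject, so heart rate is adjusted for directly rather than through a fixed correction formula.

```python
# Repeated-measures sketch: QT ~ treatment + RR with a random intercept per
# subject. Simulated data and variable names are assumptions, not the paper's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subjects, n_visits = 80, 4
subject = np.repeat(np.arange(n_subjects), n_visits)
treatment = np.repeat(rng.integers(0, 2, n_subjects), n_visits)
rr = rng.normal(0.9, 0.12, n_subjects * n_visits)             # RR interval in seconds
subj_effect = np.repeat(rng.normal(0, 8, n_subjects), n_visits)
qt = 250 + 150 * rr + 4 * treatment + subj_effect + rng.normal(0, 6, len(rr))  # QT in ms

df = pd.DataFrame({"qt": qt, "rr": rr, "treatment": treatment, "subject": subject})
# Random intercept captures within-subject correlation of repeated QT measurements;
# including rr as a covariate replaces a fixed QT-correction formula.
fit = smf.mixedlm("qt ~ treatment + rr", data=df, groups=df["subject"]).fit()
print(fit.summary())
```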