Similar Documents
Found 20 similar documents (search time: 671 ms)
1.
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time, and accounts for repeated surveys, grouped response units, and random effects variations. Estimation of model parameters is discussed within a likelihood-based approach. Predictions of random effects, of small area means across time points, and of means per group unit are derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. Results are supported by a simulation study.
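The parametric bootstrap idea for MSE estimation can be illustrated with a deliberately simplified sketch: a single area with an i.i.d. normal working model rather than the article's multivariate mixed model, and hypothetical function names and data.

```python
import random
import statistics

def bootstrap_mse(area_data, n_boot=200, seed=0):
    """Parametric-bootstrap estimate of the MSE of a small-area mean.

    Sketch only: fit a normal model to one area's data, repeatedly
    simulate new samples from the fitted model, re-estimate the mean,
    and average the squared deviations from the original estimate.
    """
    rng = random.Random(seed)
    mu_hat = statistics.mean(area_data)
    sd_hat = statistics.stdev(area_data)
    n = len(area_data)
    sq_errs = []
    for _ in range(n_boot):
        # simulate a bootstrap sample from the fitted model
        boot = [rng.gauss(mu_hat, sd_hat) for _ in range(n)]
        # re-estimate the area mean on the bootstrap sample
        mu_boot = statistics.mean(boot)
        sq_errs.append((mu_boot - mu_hat) ** 2)
    return statistics.mean(sq_errs)
```

The same scheme extends to mixed models by simulating both the random effects and the residual errors from their fitted distributions.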

2.
Abstract.  Stochastic differential equations have proved useful in describing random continuous-time processes. Biomedical experiments often imply repeated measurements on a series of experimental units, and differences between units can be represented by incorporating random effects into the model. When both system noise and random effects are considered, stochastic differential mixed-effects models ensue. This class of models enables the simultaneous representation of randomness in the dynamics of the phenomena being considered and variability between experimental units, thus providing a powerful modelling tool with immediate applications in biomedicine and pharmacokinetic/pharmacodynamic studies. In most cases the likelihood function is not available, and thus maximum likelihood estimation of the unknown parameters is not possible. Here we propose a computationally fast approximated maximum likelihood procedure for the estimation of the non-random parameters and the random effects. The method is evaluated on simulations from some well-known diffusion processes and on real data sets.

3.
We investigate mixed models for repeated measures data from cross-over studies in general, but in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; the resulting models we call the RMC model and Saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest: (i) Point estimates of treatment contrasts from all models considered are similar; (ii) Confidence intervals for treatment contrasts under the random effects model previously proposed in the literature do not have adequate coverage properties; the model therefore cannot be recommended for analysis of marginal QT prolongation; (iii) The RMC model and the Saturated model have similar precision and coverage properties; both models are suitable for assessment of marginal QT prolongation; and (iv) The Akaike Information Criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data in the following sense: the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if the model with the smallest AIC value is also the most parsimonious model.

4.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

5.
Mixed effect models, which contain both fixed effects and random effects, are frequently used in dealing with correlated data arising from repeated measurements (made on the same statistical units). In mixed effect models, the distributions of the random effects need to be specified, and they are often assumed to be normal. The analysis of correlated data from repeated measurements can also be carried out with generalized estimating equations (GEE) by assuming any type of correlation as initial input. Both mixed effect models and GEE are approaches that require distributional specifications (a likelihood or score function). In this article, we consider a distribution-free least squares approach under a general setting, with missing values allowed. This approach requires neither distributional specifications nor an initial correlation input. Consistency and asymptotic normality of the estimators are discussed.
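The distribution-free least squares idea can be sketched for a single covariate: pool all observed (x, y) pairs across subjects and time points, skip missing responses, and solve the normal equations directly, with no likelihood and no working correlation. Function and argument names are illustrative, not from the article.

```python
def ls_estimate(x_rows, y_rows):
    """Least squares for repeated measures, pooling all observed pairs.

    x_rows / y_rows: one list per subject; missing responses are None
    and are simply skipped, so no distributional or correlation
    assumptions enter the point estimation.
    """
    pairs = [(x, y) for xr, yr in zip(x_rows, y_rows)
             for x, y in zip(xr, yr) if y is not None]
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    # closed-form simple linear regression on the pooled pairs
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope
```

For example, with responses generated exactly as y = 1 + 2x and one response missing, the estimates recover intercept 1 and slope 2.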

6.
The occurrence of missing data is an often unavoidable consequence of repeated measures studies. Fortunately, multivariate general linear models such as growth curve models and linear mixed models with random effects have been well developed to analyze incomplete normally-distributed repeated measures data. Most statistical methods have assumed that the missing data occur at random. This assumption may include two types of missing data mechanism: missing completely at random (MCAR) and missing at random (MAR) in the sense of Rubin (1976). In this paper, we develop a test procedure for distinguishing these two types of missing data mechanism for incomplete normally-distributed repeated measures data. The proposed test is similar in spirit to the test of Park and Davis (1992). We derive the test for incomplete normally-distributed repeated measures data using linear mixed models, while Park and Davis (1992) derived their test for incomplete repeated categorical data in the framework of Grizzle, Starmer, and Koch (1969). The proposed procedure can be applied easily to any other multivariate general linear model which allows for missing data. The test is illustrated using the hip-replacement patient data from Crowder and Hand (1990).

7.
Abstract.  The present work focuses on extensions of the posterior predictive p-value (ppp-value) to models with hierarchical structure, designed for testing assumptions made on underlying processes. The ppp-values are popular as tools for model criticism, yet their lack of a common interpretation limits their practical use. We discuss different extensions of ppp-values to hierarchical models, allowing for discrepancy measures that can be used for checking properties of the model at all stages. Through analytical derivations and simulation studies on simple models, we show that, like the standard ppp-values, these extensions are typically far from uniformly distributed under the model assumptions and can give poor power in a hypothesis-testing framework. We propose a calibration of the p-values that makes the resulting calibrated p-values uniformly distributed under the model conditions. Illustrations are made through a real example of multinomial regression to age distributions of fish.
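One way such a calibration can be realized is to locate the observed ppp-value in a reference distribution of ppp-values simulated under the model, i.e., evaluate the empirical CDF. This is a minimal sketch with names of our choosing, not the authors' exact procedure.

```python
def calibrate_pvalue(observed_p, reference_ps):
    """Calibrate a posterior predictive p-value.

    reference_ps: ppp-values computed on data sets simulated under the
    model. The calibrated value is the empirical CDF of the reference
    distribution evaluated at the observed ppp-value, so it is
    approximately uniform when the model actually holds.
    """
    count = sum(1 for r in reference_ps if r <= observed_p)
    return count / len(reference_ps)
```

If the reference ppp-values happen to be equally spaced on (0, 1], the calibrated value simply reproduces the observed one, as expected for an already-uniform statistic.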

8.
The shared-parameter model and its so-called hierarchical or random-effects extension are widely used joint modeling approaches for the combinations of longitudinal continuous, binary, count, missing, and survival outcomes that naturally occur in many clinical and other studies. A random effect is introduced and shared or allowed to differ between two or more repeated measures or longitudinal outcomes, thereby acting as a vehicle to capture association between the outcomes in these joint models. It is generally known that parameter estimates in a linear mixed model (LMM) for continuous repeated measures or longitudinal outcomes allow for a marginal interpretation, even though a hierarchical formulation is employed. This is not the case for the generalized linear mixed model (GLMM), that is, for non-Gaussian outcomes. The aforementioned joint models formulated for continuous and binary or two longitudinal binomial outcomes, using the LMM and GLMM, will naturally have a marginal interpretation for parameters associated with the continuous outcome but a subject-specific interpretation for the fixed effects parameters relating covariates to binary outcomes. To derive marginally meaningful parameters for the binary models in a joint model, we adopt the marginal multilevel model (MMM) due to Heagerty [13] and Heagerty and Zeger [14] and formulate a joint MMM for two longitudinal responses. This enables us to (1) capture association between the two responses and (2) obtain parameter estimates that have a population-averaged interpretation for both outcomes. The model is applied to two sets of data. The results are compared with those obtained from existing approaches such as generalized estimating equations, GLMM, and the model of Heagerty [13]. Estimates were found to be very close to those from separate analyses per outcome, but the joint model yields higher precision and allows for quantifying the association between outcomes. Parameters were estimated using maximum likelihood. The model is easy to fit using available tools such as the SAS NLMIXED procedure.

9.
Cross-over trials with correlated Bernoulli outcomes are common designs. In condom functionality studies, for example, an indicator of condom failure is reported for each sex act using standard or experimental condoms. Two popular analysis methods for such data are Generalized Estimating Equations and logit-normal random effects models. An alternative random effects model, the beta-binomial, is commonly used in contexts involving only between-cluster effects. The flexibility of the beta distribution and the interpretation of random effects as cluster-specific failure probabilities make it appealing, and we consider an extension of the model to account for within-cluster treatment effects using proportional odds assumptions.
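The plain beta-binomial likelihood (without the proportional-odds treatment extension the abstract adds on top) integrates each cluster's Beta-distributed failure probability out analytically and can be written with log-gamma functions; names here are illustrative.

```python
import math

def lbeta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabin_loglik(alpha, beta, data):
    """Beta-binomial log-likelihood for clustered binary outcomes.

    data: (failures, acts) per cluster; each cluster's failure
    probability is a Beta(alpha, beta) random effect, integrated out
    in closed form.
    """
    ll = 0.0
    for k, n in data:
        # log binomial coefficient C(n, k)
        log_choose = (math.lgamma(n + 1) - math.lgamma(k + 1)
                      - math.lgamma(n - k + 1))
        ll += log_choose + lbeta(alpha + k, beta + n - k) - lbeta(alpha, beta)
    return ll
```

With alpha = beta = 1 the failure count is uniform on 0..n, so two clusters with outcomes (0, 2) and (1, 3) contribute log(1/3) + log(1/4) = -log 12.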

10.
Linear mixed models (LMM) are frequently used to analyze repeated measures data because they are flexible in modelling the within-subject correlation often present in this type of data. The most popular LMM for continuous responses assumes that both the random effects and the within-subject errors are normally distributed, which can be an unrealistic assumption, obscuring important features of the variation present within and among the units (or groups). This work presents skew-normal linear mixed models (SNLMM) that relax the normality assumption by using a multivariate skew-normal distribution, which includes the normal one as a special case and provides robust estimation in mixed models. The MCMC scheme is derived, and the results of a simulation study are provided, demonstrating that standard information criteria may be used to detect departures from normality. The procedures are illustrated using a real data set from a cholesterol study.
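Skew-normal random effects can be simulated via Azzalini's stochastic representation; this is a small univariate sketch (the article's multivariate distribution and MCMC machinery are not reproduced here, and the function name is ours).

```python
import math
import random

def skew_normal_sample(n, delta, seed=1):
    """Draw n standard skew-normal variates.

    Representation: Z = delta*|U| + sqrt(1 - delta**2)*V, with U, V
    independent N(0, 1). delta in (-1, 1) controls the skewness;
    delta = 0 recovers the normal special case.
    """
    rng = random.Random(seed)
    tail = math.sqrt(1.0 - delta * delta)
    return [delta * abs(rng.gauss(0.0, 1.0)) + tail * rng.gauss(0.0, 1.0)
            for _ in range(n)]
```

The theoretical mean of such a draw is delta*sqrt(2/pi), roughly 0.72 for delta = 0.9, so a large sample mean should land near that value.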

11.
Binary data are commonly used as responses to assess the effects of independent variables in longitudinal factorial studies. Such effects can be assessed in terms of the rate difference (RD), the odds ratio (OR), or the rate ratio (RR). Traditionally, logistic regression has been the recommended method, with statistical comparisons made in terms of the OR. Statistical inference in terms of the RD and RR can then be derived using the delta method. However, this approach is hard to realize when repeated measures occur. To obtain statistical inference in longitudinal factorial studies, the current article shows that the mixed-effects model for repeated measures, the logistic regression for repeated measures, the log-transformed regression for repeated measures, and the rank-based methods are all valid methods that lead to inference in terms of the RD, OR, and RR, respectively. Asymptotic linear relationships between the estimators of the regression coefficients of these models are derived when the weight (working covariance) matrix is an identity matrix. Conditions for the Wald-type tests to be asymptotically equivalent in these models are provided, and power was compared using simulation studies. A phase III clinical trial is used to illustrate the investigated methods, with corresponding SAS® code supplied.
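The three effect scales are simple transforms of the two arms' response rates; given logistic-scale linear predictors, they can be recovered as below (a hypothetical helper for illustration, not the article's SAS code).

```python
import math

def rates_from_logits(eta_trt, eta_ctl):
    """Return (RD, OR, RR) from logit-scale linear predictors.

    eta_trt, eta_ctl: linear predictors for the treatment and control
    arms; rates are obtained with the inverse-logit transform.
    """
    p1 = 1.0 / (1.0 + math.exp(-eta_trt))
    p0 = 1.0 / (1.0 + math.exp(-eta_ctl))
    rd = p1 - p0                                        # rate difference
    odds_ratio = (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))  # odds ratio
    rr = p1 / p0                                        # rate ratio
    return rd, odds_ratio, rr
```

For instance, a logit-scale treatment effect of log 3 against a control rate of 0.5 gives p1 = 0.75, hence RD = 0.25, OR = 3, and RR = 1.5, showing why the three scales can tell quite different stories.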

12.
The number of parameters mushrooms in a linear mixed effects (LME) model in the case of multivariate repeated measures data. Computation of these parameters becomes a real problem as the number of response variables or the number of time points increases, and the problem becomes more intricate with the addition of further random effects. A multivariate analysis is not possible in a small sample setting. We propose a method to estimate these many parameters in bits and pieces from "baby" models, by taking a subset of response variables at a time, and finally using these bits and pieces to get the parameter estimates for the "mother" model, with all variables taken together. Applying this method, one can calculate the fixed effects, the best linear unbiased predictions (BLUPs) for the random effects in the model, and also the BLUPs at each time of observation for each response variable, to monitor the effectiveness of the treatment for each subject. The proposed method is illustrated with an example of multiple response variables measured over multiple time points arising from a clinical trial in osteoporosis.

13.
Measuring the efficiency of public services: the limits of analysis
Summary.  Policy makers are increasingly seeking to develop overall measures of the efficiency of public service organizations. To that end, 'off-the-shelf' statistical tools such as data envelopment analysis and stochastic frontier analysis have been advocated for measuring organizational efficiency. The analytical sophistication of such methods has reached an advanced stage of development. We discuss the context within which such models are deployed, their underlying assumptions and their usefulness for a regulator of public services. Four specific model building issues are discussed: the weights that are attached to public service outputs; the specification of the statistical model; the treatment of environmental influences on performance; and the treatment of dynamic effects. The paper concludes with recommendations for policy makers and researchers on the development and use of efficiency measurement techniques.

14.
In a cluster randomized controlled trial (RCT), the number of randomized units is typically considerably smaller than in trials where the unit of randomization is the patient. If the number of randomized clusters is small, there is a reasonable chance of baseline imbalance between the experimental and control groups. This imbalance threatens the validity of inferences regarding post-treatment intervention effects unless an appropriate statistical adjustment is used. Here, we consider application of the propensity score adjustment for cluster RCTs. For the purpose of illustration, we apply the propensity adjustment to a cluster RCT that evaluated an intervention to reduce suicidal ideation and depression. This approach to adjusting imbalance had considerable bearing on the interpretation of results. A simulation study demonstrates that the propensity adjustment reduced well over 90% of the bias seen in unadjusted models for the specifications examined. Copyright © 2013 John Wiley & Sons, Ltd.
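One common way to apply a propensity adjustment is inverse-probability-of-treatment weighting (IPTW); the trial's exact adjustment method may differ, and names here are ours.

```python
def iptw_effect(y, z, ps):
    """Inverse-probability-weighted difference in means.

    y: outcomes; z: treatment indicators (0/1); ps: estimated
    propensity scores P(z = 1 | covariates). Treated units are
    weighted by 1/ps, control units by 1/(1 - ps), which balances
    the covariate distributions between the two groups.
    """
    num1 = sum(yi / p for yi, zi, p in zip(y, z, ps) if zi == 1)
    den1 = sum(1.0 / p for zi, p in zip(z, ps) if zi == 1)
    num0 = sum(yi / (1.0 - p) for yi, zi, p in zip(y, z, ps) if zi == 0)
    den0 = sum(1.0 / (1.0 - p) for zi, p in zip(z, ps) if zi == 0)
    return num1 / den1 - num0 / den0
```

When all propensity scores equal 0.5 (perfect balance), the estimate reduces to the plain difference in group means.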

15.
Mixed model repeated measures (MMRM) is the most common analysis approach used in clinical trials for Alzheimer's disease and other progressive diseases measured with continuous outcomes over time. The model treats time as a categorical variable, which allows an unconstrained estimate of the mean for each study visit in each randomized group. Categorizing time in this way can be problematic when assessments occur off-schedule, as including off-schedule visits can induce bias, and excluding them ignores valuable information and violates the intention-to-treat principle. This problem has been exacerbated by clinical trial visits that were delayed due to the COVID-19 pandemic. As an alternative to MMRM, we propose a constrained longitudinal data analysis with natural cubic splines that treats time as continuous and uses test version effects to model the mean over time. Compared to categorical-time models like MMRM and models that assume a proportional treatment effect, the spline model is shown to be more parsimonious and precise in real clinical trial datasets, and has better power and Type I error in a variety of simulation scenarios.
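A natural cubic spline basis, cubic between knots and linear beyond the boundary knots, can be built from truncated power functions. This is the standard textbook construction for illustration, not the authors' code.

```python
def ncs_basis(t, knots):
    """Natural cubic spline basis evaluated at scalar time t.

    Returns [1, t, N_1(t), ..., N_{K-2}(t)] for K sorted knots, using
    the truncated-power construction; the resulting functions are
    constrained to be linear beyond the boundary knots.
    """
    K = len(knots)

    def pos3(x):
        # truncated cube (x)_+^3
        return x ** 3 if x > 0.0 else 0.0

    def d(j):
        # divided difference of truncated cubics at knot j and the last knot
        return (pos3(t - knots[j]) - pos3(t - knots[K - 1])) / (knots[K - 1] - knots[j])

    return [1.0, float(t)] + [d(j) - d(K - 2) for j in range(K - 2)]
```

Evaluating the basis at several times past the last knot and checking that second differences vanish confirms the linearity constraint, which is what keeps extrapolation to delayed (off-schedule) visits well behaved.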

16.
Polyvinyl chloride (PVC) products are typically complex composites, whose quality characteristics vary widely depending on the types and proportions of their components, as well as other processing factors. It is often required to optimize PVC production for specific applications at the highest cost efficiency. This study describes the design and analysis of a statistical experiment to investigate the effects of different parameters on the mechanical properties of PVC intended for use in electrical wire insulation. Four commonly used mixture components, namely, virgin PVC, recycled PVC, calcium carbonate, and a plasticizer, and two process variables, type of plasticizer and filler particle size, were examined. Statistical tools were utilized to analyze and optimize the mixture while simultaneously finding the proper process parameters. The mix was optimized to achieve the required strength and ductility, as per ASTM D6096, while minimizing cost. The paper demonstrates how statistical models can help tailor complex polymeric composites in the presence of variations created by process variables.

17.
Beta regression is a suitable choice for modelling continuous response variables taking values on the unit interval. Data structures such as hierarchical, repeated measures and longitudinal ones typically induce extra variability and/or dependence, which can be accounted for by the inclusion of random effects. Statistical inference for such models typically requires numerical methods, possibly combined with sampling algorithms. A class of Beta mixed models is adopted for the analysis of two real problems with grouped data structures. We focus on likelihood inference and describe the implemented algorithms. The first is a study on the life quality index of industry workers, with data collected according to a hierarchical sampling scheme. The second is a study assessing the impact of hydroelectric power plants on water quality indexes upstream, downstream and at the reservoirs of the dammed rivers, with a nested and longitudinal data structure. Results from different algorithms are reported for comparison, including from data-cloning, an alternative to numerical approximations which also allows assessing identifiability. Confidence intervals based on profiled likelihoods are compared with those obtained by asymptotic quadratic approximations, showing relevant differences for parameters related to the random effects. In both cases, the scientific hypothesis of interest was investigated by comparing alternative models, leading to relevant interpretations of the results within each context.

18.
Many commonly used statistical methods for data analysis or clinical trial design rely on incorrect assumptions or assume an over-simplified framework that ignores important information. Such statistical practices may lead to incorrect conclusions about treatment effects or clinical trial designs that are impractical or that do not accurately reflect the investigator's goals. Bayesian nonparametric (BNP) models and methods are a very flexible new class of statistical tools that can overcome such limitations. This is because BNP models can accurately approximate any distribution or function and can accommodate a broad range of statistical problems, including density estimation, regression, survival analysis, graphical modeling, neural networks, classification, clustering, population models, forecasting and prediction, spatiotemporal models, and causal inference. This paper describes 3 illustrative applications of BNP methods, including a randomized clinical trial to compare treatments for intraoperative air leaks after pulmonary resection, estimating survival time with different multi-stage chemotherapy regimes for acute leukemia, and evaluating joint effects of targeted treatment and an intermediate biological outcome on progression-free survival time in prostate cancer.

19.
Survival data may include two different sources of variation, namely variation over time and variation over units. If both of these variations are present, neglecting one of them can cause serious bias in the estimations. Here we present an approach for discrete duration data that includes both time-varying and unit-specific effects to model these two variations simultaneously. The approach is a combination of a dynamic survival model with dynamic time-varying baseline and covariate effects and a frailty model measuring unobserved heterogeneity with random effects varying independently over units. Estimation is based on posterior modes, i.e., we maximize the joint posterior distribution of the unknown parameters to avoid numerical integration and simulation techniques that are necessary in a full Bayesian analysis. Estimation of unknown hyperparameters is achieved by an EM-type algorithm. Finally, the proposed method is applied to data of the Veteran's Administration Lung Cancer Trial.
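The discrete-time hazard structure can be sketched directly: with a logit link, the interval-t hazard combines a time-varying baseline, a covariate effect and a unit-specific frailty. Names are illustrative, and the posterior-mode estimation machinery is omitted.

```python
import math

def discrete_survival(baseline, beta, x, frailty):
    """Survivor function under a discrete-time logit hazard model.

    Hazard at interval t: h_t = logit^{-1}(baseline[t] + beta*x + frailty).
    Returns S(t) = prod_{s <= t} (1 - h_s) for every interval, i.e. the
    probability of surviving past each discrete time point.
    """
    out = []
    s = 1.0
    for b_t in baseline:
        h = 1.0 / (1.0 + math.exp(-(b_t + beta * x + frailty)))
        s *= 1.0 - h
        out.append(s)
    return out
```

With a flat zero baseline, no covariate effect and zero frailty, every interval hazard is 0.5, so the survivor function halves at each step: 0.5, 0.25, and so on.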

20.
Fong, Daniel Y.T., Lam, K.F., Lawless, J.F., and Lee, Y.W. Lifetime Data Analysis, 2001, 7(4):345-362
We consider recurrent event data when the duration or gap times between successive event occurrences are of intrinsic interest. Subject heterogeneity not attributed to observed covariates is usually handled by random effects, which result in an exchangeable correlation structure for the gap times of a subject. Recently, efforts have been made to relax this restriction to allow non-exchangeable correlation. Here we consider dynamic models in which random effects can vary stochastically over the gap times. We extend the traditional Gaussian variance components models and evaluate a previously proposed proportional hazards model through a simulation study and some examples. In addition, semiparametric estimation of the proportional hazards models is considered. Both models are easy to use. The Gaussian models are easily interpreted in terms of the variance structure. On the other hand, the proportional hazards models would be more appropriate in the context of survival analysis, particularly in the interpretation of the regression parameters. They can be sensitive to the choice of model for the random effects, but not to the choice of the baseline hazard function.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号