Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Linear mixed‐effects models are a powerful tool for modelling longitudinal data and are widely used in practice. For a given set of covariates in a linear mixed‐effects model, selecting the covariance structure of random effects is an important problem. In this paper, we develop a joint likelihood‐based selection criterion. Our criterion is the approximately unbiased estimator of the expected Kullback–Leibler information. This criterion is also asymptotically optimal in the sense that for large samples, estimates based on the covariance matrix selected by the criterion minimize the approximate Kullback–Leibler information. Finite sample performance of the proposed method is assessed by simulation experiments. As an illustration, the criterion is applied to a data set from an AIDS clinical trial.
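As a rough illustration of covariance-structure selection (not the paper's criterion), the sketch below fits two candidate random-effects structures with statsmodels on simulated data and compares a simple AIC-style score, which stands in for the proposed Kullback–Leibler criterion; all column names and values are hypothetical.

```python
# Sketch: compare two random-effects covariance structures for a linear
# mixed-effects model on simulated data. The AIC-style score below is a
# simple stand-in for the paper's Kullback-Leibler-based criterion.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs = 50, 6
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs), n_subj).astype(float),
})
b0 = rng.normal(0, 1.0, n_subj)          # true random intercepts
b1 = rng.normal(0, 0.5, n_subj)          # true random slopes
df["y"] = (2.0 + b0[df.subject] + (0.3 + b1[df.subject]) * df.time
           + rng.normal(0, 1.0, len(df)))

# Candidate structures: random intercept only vs. intercept + slope.
m1 = smf.mixedlm("y ~ time", df, groups="subject").fit(reml=False)
m2 = smf.mixedlm("y ~ time", df, groups="subject",
                 re_formula="~time").fit(reml=False)

def aic(res):
    # -2 log-likelihood + 2k, k = length of the packed parameter vector
    return -2 * res.llf + 2 * len(res.params)

print("intercept only:   ", aic(m1))
print("intercept + slope:", aic(m2))    # smaller is better
```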

2.
Some studies generate data that can be grouped into clusters in more than one way. Consider for instance a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data that are also cross‐sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross‐sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
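A minimal sketch of the GEE half of such an analysis, using statsmodels with an exchangeable working correlation for the repeated measurements within student; the paper's additional school-level random effects are not reproduced, and all data and column names are hypothetical.

```python
# Sketch of the GEE component only: longitudinal binary smoking status
# with an exchangeable working correlation within student. The paper's
# school-level random effect is not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_students, n_years = 200, 4
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_years),
    "year": np.tile(np.arange(n_years), n_students),
    "treated": np.repeat(rng.integers(0, 2, n_students), n_years),
})
# A latent student effect induces within-student correlation.
u = np.repeat(rng.normal(0, 1, n_students), n_years)
p = 1 / (1 + np.exp(-(-1.0 + 0.3 * df.year - 0.5 * df.treated + u)))
df["smokes"] = rng.binomial(1, p)

model = sm.GEE.from_formula(
    "smokes ~ year + treated", groups="student", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```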

3.
Two‐stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean‐adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean‐squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.
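The weighting idea can be sketched as a convex combination of the interim MLE and the design-stage effect size; the information-fraction weight used below is an illustrative assumption, not the paper's derived weight.

```python
# Sketch of the weighting idea: shrink the interim maximum likelihood
# estimate toward the effect size prespecified at the design stage.
# The information-fraction weight is an illustrative choice, not the
# paper's derived weight.
def weighted_interim_estimate(mle_interim, planned_effect, info_fraction):
    """Convex combination of the interim MLE and the planned effect size."""
    w = info_fraction                     # hypothetical weight choice
    return w * mle_interim + (1 - w) * planned_effect

# Example: trial stopped at 40% information with an optimistic interim MLE.
print(weighted_interim_estimate(mle_interim=0.55,
                                planned_effect=0.30,
                                info_fraction=0.4))  # 0.40
```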

4.
In this paper, we investigate the performance of different parametric and nonparametric approaches for analyzing overdispersed person–time–event rates in the clinical trial setting. We show that the likelihood‐based parametric approach may not maintain the correct test size for overdispersed person–time–event data. The nonparametric approaches may use as an estimator either the mean of the ratios of the number of events over the follow‐up time within each subject, or the ratio of the mean number of events over the mean follow‐up time across all subjects. Of these, the ratio of the means is a consistent estimator and can be studied analytically. Asymptotic properties of all estimators were studied through numerical simulations. This research shows that the nonparametric ratio‐of‐means estimator is to be recommended for analyzing overdispersed person–time data. When the sample size is small, some resampling‐based approaches can yield satisfactory results. Copyright © 2012 John Wiley & Sons, Ltd.
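The two nonparametric estimators contrasted above are easy to state in code; the sketch below computes both on simulated gamma-mixed Poisson (i.e., overdispersed) person-time data and adds a simple bootstrap interval of the kind a resampling-based approach would use.

```python
# Sketch of the two nonparametric rate estimators, plus a bootstrap CI
# for small samples. Data are simulated overdispersed person-time counts
# (a gamma-mixed Poisson, i.e., negative binomial mechanism).
import numpy as np

rng = np.random.default_rng(2)
n = 40
followup = rng.uniform(0.5, 3.0, n)                     # years on study
subject_rate = rng.gamma(shape=2.0, scale=0.5, size=n)  # overdispersion
events = rng.poisson(subject_rate * followup)

mean_of_ratios = np.mean(events / followup)
ratio_of_means = events.sum() / followup.sum()          # consistent estimator
print(mean_of_ratios, ratio_of_means)

# Nonparametric bootstrap for the ratio-of-means estimator.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(events[idx].sum() / followup[idx].sum())
print(np.percentile(boot, [2.5, 97.5]))
```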

5.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

6.
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re‐estimation procedures have been proposed in the literature. We compare blinded sample size re‐estimation procedures based on the one‐sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to the bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re‐estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re‐estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one‐sample estimator, as in turn does the sample size resulting from the related re‐estimation procedure. This higher variability can lead to lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one‐sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
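A minimal sketch of re-estimation with the recommended one-sample estimator: pool the internal-pilot observations without using arm labels, estimate the variance, and plug it into a standard two-sample formula. Design values are hypothetical; note that the pooled variance is somewhat inflated by the hidden treatment difference, a property discussed in this literature.

```python
# Sketch of blinded sample-size re-estimation with the one-sample
# (pooled, blinded) variance and a standard two-sample z-test formula.
# Design values are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
delta, alpha, power = 0.5, 0.05, 0.8          # design assumptions
pilot = np.concatenate([rng.normal(0.0, 1.2, 30),    # blinded: arm labels
                        rng.normal(delta, 1.2, 30)]) # are not used below

s2_pooled = pilot.var(ddof=1)                 # one-sample blinded variance
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = int(np.ceil(2 * s2_pooled * z**2 / delta**2))
print(s2_pooled, n_per_arm)
```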

7.
Background: In age‐related macular degeneration (ARMD) trials, the FDA‐approved endpoint is the loss (or gain) of at least three lines of vision as compared to baseline. The use of such a response endpoint entails a potentially severe loss of information. A more efficient strategy could be obtained by using longitudinal measures of the change in visual acuity. In this paper we investigate, by using data from two randomized clinical trials, the mean and variance–covariance structures of the longitudinal measurements of the change in visual acuity. Methods: Individual patient data were collected in 234 patients in a randomized trial comparing interferon‐α with placebo and in 1181 patients in a randomized trial comparing three active doses of pegaptanib with sham. A linear model for longitudinal data was used to analyze the repeated measurements of the change in visual acuity. Results: For both trials, the data were adequately summarized by a model that assumed a quadratic trend for the mean change in visual acuity over time, a power variance function, and an antedependence correlation structure. The power variance function was remarkably similar for the two datasets and involved the square root of the measurement time. Conclusions: The similarity of the estimated variance functions and correlation structures for both datasets indicates that these aspects may be a genuine feature of the measurements of changes in visual acuity in patients with ARMD. The feature can be used in the planning and analysis of trials that use visual acuity as the clinical endpoint of interest. Copyright © 2010 John Wiley & Sons, Ltd.
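The fitted covariance model's two ingredients can be assembled directly; the sketch below builds a covariance matrix from a power variance function in the square root of measurement time and a first-order antedependence correlation, with illustrative (not estimated) parameter values and a hypothetical visit schedule.

```python
# Sketch of the covariance model's ingredients: variance proportional to
# the square root of measurement time, combined with a first-order
# antedependence (AD(1)) correlation. Values are illustrative.
import numpy as np

times = np.array([1.0, 3.0, 6.0, 12.0])      # months, hypothetical schedule
sigma2 = 4.0
sd = np.sqrt(sigma2) * times ** 0.25         # so Var(t) = sigma2 * sqrt(t)

rho_adj = np.array([0.9, 0.85, 0.8])         # adjacent correlations, AD(1)
k = len(times)
corr = np.eye(k)
for i in range(k):
    for j in range(i + 1, k):
        # AD(1): corr(t_i, t_j) is the product of adjacent correlations.
        corr[i, j] = corr[j, i] = np.prod(rho_adj[i:j])

cov = np.outer(sd, sd) * corr                # full covariance matrix
print(np.round(cov, 2))
```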

8.
Statistical analyses of recurrent event data have typically been based on the missing at random assumption. One implication of this is that, if data are collected only when patients are on their randomized treatment, the resulting de jure estimator of treatment effect corresponds to the situation in which the patients adhere to this regime throughout the study. For confirmatory analysis of clinical trials, sensitivity analyses are required to investigate alternative de facto estimands that depart from this assumption. Recent publications have described the use of multiple imputation methods based on pattern mixture models for continuous outcomes, where imputation for the missing data for one treatment arm (e.g. the active arm) is based on the statistical behaviour of outcomes in another arm (e.g. the placebo arm). This has been referred to as controlled imputation or reference‐based imputation. In this paper, we use the negative multinomial distribution to apply this approach to analyses of recurrent events and other similar outcomes. The methods are illustrated by a trial in severe asthma where the primary endpoint was rate of exacerbations and the primary analysis was based on the negative binomial model. Copyright © 2014 John Wiley & Sons, Ltd.
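A heavily simplified sketch of the reference-based idea for recurrent events: a dropout's unobserved follow-up is imputed from the reference-arm rate, drawing negative binomial counts via the Poisson-gamma representation. The rate and dispersion values are hypothetical, and the paper's negative multinomial machinery and Rubin's combination rules are not reproduced.

```python
# Simplified sketch of reference-based imputation for recurrent events:
# an active-arm dropout has events over the unobserved follow-up imputed
# from the placebo (reference) rate, drawn from a negative binomial via
# its Poisson-gamma representation. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
mu_ref, alpha = 1.8, 0.6     # placebo exacerbation rate/yr, NB dispersion

def impute_remaining_events(t_unobserved, n_imputations=5):
    """Draw imputed event counts for the unobserved follow-up."""
    # lambda ~ Gamma(1/alpha, alpha*mu) gives mean mu, Var mu + alpha*mu^2.
    lam = rng.gamma(shape=1 / alpha, scale=alpha * mu_ref, size=n_imputations)
    return rng.poisson(lam * t_unobserved)

# Patient observed 2 events in 0.4 yr, then dropped out with 0.6 yr left.
for imputed in impute_remaining_events(0.6):
    print("completed-trial count:", 2 + imputed)
```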

9.
In longitudinal studies, missing data are the rule, not the exception. We consider the analysis of longitudinal binary data with non-monotone missingness that is thought to be non-ignorable. In this setting a full likelihood approach is complicated algebraically and can be computationally prohibitive when there are many measurement occasions. We propose a 'protective' estimator that assumes that the probability that a response is missing at any occasion depends, in a completely unspecified way, on the value of that variable alone. Relying on this 'protectiveness' assumption, we describe a pseudolikelihood estimator of the regression parameters under non-ignorable missingness, without having to model the missing data mechanism directly. The method proposed is applied to CD4 cell count data from two longitudinal clinical trials of patients infected with the human immunodeficiency virus.

10.
Logistic models with a random intercept are prevalent in medical and social research, where clustered and longitudinal data are often collected. Traditionally, the random intercept in these models is assumed to follow some parametric distribution such as the normal distribution. However, such an assumption inevitably raises concerns about model misspecification and misleading inference conclusions, especially when there is dependence between the random intercept and model covariates. To protect against such issues, we use a semiparametric approach to develop a computationally simple and consistent estimator where the random intercept is distribution‐free. The estimator is shown to be optimal and to achieve the efficiency bound without the need to postulate or estimate any latent variable distributions. We further characterize other general mixed models where such an optimal estimator exists.

11.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach on how to incorporate prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics such as power and sample size distribution of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.
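As a simplified stand-in for the meta-analytic-predictive prior (which is a mixture), the sketch below uses a single inverse-gamma prior on the variance, updates it with the internal pilot estimate via the standard conjugate rule, and re-estimates the sample size from the posterior mean; all numbers are hypothetical.

```python
# Simplified sketch: inverse-gamma prior on sigma^2 (a single-component
# stand-in for the meta-analytic-predictive prior), conjugate update with
# the internal pilot variance, then sample-size re-estimation from the
# posterior mean. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

a0, b0 = 20.0, 19.0 * 1.0        # prior: sigma^2 centred near 1.0
n_pilot, s2_pilot = 40, 1.6      # internal pilot study
delta, alpha, power = 0.5, 0.05, 0.8

# Conjugate update of IG(a, b) with (n-1) degrees of freedom of variance
# information: a += df/2, b += df * s2 / 2.
a_post = a0 + (n_pilot - 1) / 2
b_post = b0 + (n_pilot - 1) * s2_pilot / 2
s2_post = b_post / (a_post - 1)  # posterior mean of sigma^2

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = int(np.ceil(2 * s2_post * z**2 / delta**2))
print(round(s2_post, 3), n_per_arm)
```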

12.
Missing data in clinical trials is a well‐known problem, and the classical statistical methods used can be overly simple. This case study shows how well‐established missing data theory can be applied to efficacy data collected in a long‐term open‐label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure assessed at baseline and every 3 months postbaseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t‐test, ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing completely at random assumptions regarding the missing data process, the satisfaction data were re‐examined, and several missing at random (MAR) and missing not at random (MNAR) techniques resulted in adjusted estimates for the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open‐label trial design, sensitivity analysis, with different assumptions for the nature of dropouts (MAR or MNAR) and with different classes of models (selection, pattern‐mixture, or multiple imputation models), has been found useful and provides evidence towards the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify robustness. Copyright © 2012 John Wiley & Sons, Ltd.

13.
The problem of the estimation of mean frequency of events in the presence of censoring is important in assessing the efficacy, safety and cost of therapies. The mean frequency is typically estimated by dividing the total number of events by the total number of patients under study. This method, referred to in this paper as the ‘naïve estimator’, ignores the censoring. Other approaches available for this problem require many assumptions that are rarely acceptable. These include the assumption of independence, constant hazard rate over time and other similar distributional assumptions. In this paper a simple non‐parametric estimator based on the sum of the products of Kaplan–Meier estimators is proposed as an estimator of mean frequency, and its approximate variance and standard error are derived. An illustration is provided to show the derivation of the proposed estimator. Although the clinical trial setting is used in this paper, the problem has applications in other areas where survival analysis is used and recurrent events are studied. Copyright © 2003 John Wiley & Sons, Ltd.
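The contrast with the naïve estimator can be illustrated with a standard censoring-aware alternative, the Nelson–Aalen-type mean cumulative function, used here as a stand-in rather than the paper's Kaplan–Meier-product estimator; the data are hypothetical.

```python
# Sketch contrasting the naive estimator with a standard censoring-aware
# alternative (the Nelson-Aalen-type mean cumulative function), used here
# as a stand-in, not the paper's Kaplan-Meier-product estimator.
import numpy as np

# Per-patient recurrent event times and censoring times (years), hypothetical.
event_times = [[0.2, 0.8], [0.5], [], [0.3, 0.9, 1.4], [1.0]]
censor = np.array([1.0, 0.7, 2.0, 1.5, 1.2])

n_events = sum(len(e) for e in event_times)
naive = n_events / len(censor)               # ignores censoring
print("naive mean frequency:", naive)

# Mean cumulative function at the end of follow-up: sum of d(t)/Y(t),
# where Y(t) counts patients still under observation at time t.
all_events = np.sort([t for e in event_times for t in e])
mcf = sum(1.0 / (censor >= t).sum() for t in all_events)
print("censoring-adjusted mean frequency:", round(mcf, 3))
```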

14.
In medical studies, it is often of interest to characterize the relationship between a time-to-event and covariates, not only time-independent but also time-dependent. Time-dependent covariates are generally measured intermittently and with error. Recent interests focus on the proportional hazards framework, with longitudinal data jointly modeled through a mixed effects model. However, approaches under this framework depend on the normality assumption of the error, and might encounter intractable numerical difficulties in practice. This motivates us to consider an alternative framework, that is, the additive hazards model, about which little research has been done when time-dependent covariates are measured with error. We propose a simple corrected pseudo-score approach for the regression parameters with no assumptions on the distribution of the random effects and the error beyond those for the variance structure of the latter. The estimator has an explicit form and is shown to be consistent and asymptotically normal. We illustrate the method via simulations and by application to data from an HIV clinical trial.

15.
The analysis of time‐to‐event data typically makes the censoring at random assumption, ie, that—conditional on covariates in the model—the distribution of event times is the same, whether they are observed or unobserved (ie, right censored). When patients who remain in follow‐up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or “while on treatment strategy” estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or “treatment policy strategy,” assumptions about the behaviour of patients post‐censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (ie, reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow‐up, using reference‐based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (ie, reference) intervention on withdrawal. Reference‐based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference‐based assumptions appropriate for time‐to‐event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
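A compact sketch of one member of such a class, a "jump to reference" assumption under a simplifying exponential model: a censored active-arm patient's residual event time is drawn from the hazard fitted to the reference arm. The exponential model and all values are illustrative, and Rubin's combination rules are omitted.

```python
# Sketch of a "jump to reference" imputation for time-to-event data: an
# active-arm patient censored at time c has a residual event time drawn
# from an exponential fit to the reference arm. The exponential model is
# a simplifying assumption; Rubin's rules are not reproduced.
import numpy as np

rng = np.random.default_rng(5)

# Reference arm: follow-up times and event indicators (1 = event observed).
t_ref = np.array([0.7, 1.1, 2.3, 0.9, 3.0, 1.8])
d_ref = np.array([1, 1, 0, 1, 0, 1])
rate_ref = d_ref.sum() / t_ref.sum()   # exponential MLE: events / exposure

def impute_jump_to_reference(c, n_imputations=5):
    """Imputed event times for a patient censored at c; post-censoring
    hazard taken from the reference arm (memoryless exponential)."""
    return c + rng.exponential(1 / rate_ref, size=n_imputations)

print(impute_jump_to_reference(c=1.5))
```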

16.
Many directional data, such as wind directions, can be collected extremely easily, so that experiments typically yield a huge number of sequentially collected data points. To deal with such big data, traditional nonparametric techniques rapidly become computationally expensive and therefore useless in practice if real‐time or online forecasts are expected. In this paper, we propose a recursive kernel density estimator for directional data which (i) can be updated extremely easily when a new set of observations is available and (ii) asymptotically keeps the nice features of the traditional kernel density estimator. Our methodology is based on Robbins–Monro stochastic approximation ideas. We show that our estimator outperforms the traditional techniques in terms of computational time while being extremely competitive in terms of efficiency with respect to its competitors in the sequential context considered here. We obtain expressions for its asymptotic bias and variance together with an almost sure convergence rate and an asymptotic normality result. Our technique is illustrated on a wind dataset collected in Spain. A Monte‐Carlo study confirms the nice properties of our recursive estimator with respect to its non‐recursive counterpart.
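A minimal sketch of the recursive idea for circular data, assuming a von Mises kernel with fixed concentration and a Robbins–Monro step of 1/n (illustrative choices, not the paper's tuning): each new angle updates the density estimate on a grid without revisiting past data.

```python
# Sketch of a recursive kernel density estimator for circular data with a
# von Mises kernel: each new angle updates the grid estimate in O(grid)
# time. The step gamma_n = 1/n and fixed kappa are illustrative choices.
import numpy as np
from scipy.special import i0   # modified Bessel function, vM normalizer

grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
kappa = 8.0                                   # kernel concentration

def vm_kernel(theta):
    # von Mises density centred at theta, evaluated on the grid
    return np.exp(kappa * np.cos(grid - theta)) / (2 * np.pi * i0(kappa))

f_hat, n = np.zeros_like(grid), 0
def update(theta):
    """Robbins-Monro update: f_n = (1 - g_n) f_{n-1} + g_n K(theta_n)."""
    global f_hat, n
    n += 1
    g = 1.0 / n                               # step size
    f_hat = (1 - g) * f_hat + g * vm_kernel(theta)

rng = np.random.default_rng(6)
for theta in rng.vonmises(mu=1.0, kappa=3.0, size=5000):
    update(theta)
print("density integrates to ~1:", f_hat.sum() * (2 * np.pi / 360))
```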

17.
18.
A RANDOMIZED LONGITUDINAL PLAY-THE-WINNER DESIGN FOR REPEATED BINARY DATA
In some clinical trials with two treatment arms, the patients enter the study at different times and are then allocated to one of two treatment groups. It is important for ethical reasons that there is greater probability of allocating a patient to the group that has displayed more favourable responses up to the patient's entry time. There are many adaptive designs in the literature to meet this ethical constraint, but most handle only a single binary response. Often the binary response is longitudinal in nature, being observed repeatedly over different monitoring times. This paper develops a randomized longitudinal play‐the‐winner design for such binary responses which meets the ethical constraint. Some performance characteristics of this design have been studied. It has been implemented in a trial of pulsed electromagnetic field therapy with rheumatoid arthritis patients.
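The classic randomized play-the-winner urn, which designs of this kind generalize, is easy to simulate; the sketch below uses illustrative response rates and the usual add-one urn update.

```python
# Sketch of the classic randomized play-the-winner urn, which the
# longitudinal design above generalizes: allocation favours the arm with
# the better responses seen so far. All values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
urn = [1.0, 1.0]                 # initial balls for arms A and B
p_success = [0.4, 0.6]           # unknown true response rates
allocations = []

for _ in range(100):
    arm = rng.choice(2, p=np.array(urn) / sum(urn))
    allocations.append(arm)
    if rng.random() < p_success[arm]:
        urn[arm] += 1            # success: reward the same arm
    else:
        urn[1 - arm] += 1        # failure: reward the other arm

print("share allocated to the better arm B:",
      np.mean(np.array(allocations) == 1))
```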

19.
This article considers the estimation and testing of a within-group two-stage least squares (TSLS) estimator for instruments with varying degrees of weakness in a longitudinal (panel) data model. We show that adding the repeated cross-sectional information into a regression model can improve estimation under weak instruments. Moreover, the consistency and limiting distribution of the TSLS estimator are established when both N and T tend to infinity. Some asymptotically pivotal tests are extended to a longitudinal data model and their asymptotic properties are examined. A Monte Carlo experiment is conducted to evaluate the finite sample performance of the proposed estimators.
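A minimal sketch of the within-group TSLS estimator in the just-identified case: demean the outcome, regressor, and instrument by panel unit, then apply the instrumental-variables formula to the transformed data; the simulated design with an endogenous regressor is hypothetical.

```python
# Sketch of within-group TSLS: apply the within (group-demeaning)
# transformation to y, x, and the instrument z, then the just-identified
# IV formula. Data are simulated with an endogenous regressor.
import numpy as np

rng = np.random.default_rng(8)
N, T = 200, 5
g = np.repeat(np.arange(N), T)               # panel unit index
a = np.repeat(rng.normal(0, 1, N), T)        # unit fixed effects
z = rng.normal(0, 1, N * T)                  # instrument (exogenous)
e = rng.normal(0, 1, N * T)
x = 0.8 * z + 0.5 * e + rng.normal(0, 1, N * T)  # endogenous regressor
y = a + 1.5 * x + e

def within(v):
    """Subtract group means (the within transformation)."""
    means = np.bincount(g, weights=v) / np.bincount(g)
    return v - means[g]

yw, xw, zw = within(y), within(x), within(z)
beta_tsls = (zw @ yw) / (zw @ xw)            # just-identified 2SLS
print("within-group TSLS estimate of beta (true 1.5):", round(beta_tsls, 3))
```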

20.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood‐based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group compared with the control group, suggesting the treatment paroxetine is effective. Copyright © 2014 John Wiley & Sons, Ltd.
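A greatly simplified, non-longitudinal sketch of the responder/non-responder idea: an EM fit of a two-component binomial mixture to per-patient counts of response visits. The paper's full longitudinal and Bayesian machinery is not reproduced, and all values are illustrative.

```python
# Simplified sketch: EM for a two-component binomial mixture separating
# responders from non-responders based on per-patient response counts.
# All values are illustrative.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(9)
m = 8                                         # visits per patient
true_pi, p_resp, p_non = 0.45, 0.75, 0.20
is_resp = rng.random(300) < true_pi
k = rng.binomial(m, np.where(is_resp, p_resp, p_non))  # observed counts

pi, p1, p0 = 0.5, 0.6, 0.3                    # EM starting values
for _ in range(200):
    # E-step: posterior probability that each patient is a responder.
    w1 = pi * binom.pmf(k, m, p1)
    w0 = (1 - pi) * binom.pmf(k, m, p0)
    r = w1 / (w1 + w0)
    # M-step: update mixture weight and component response rates.
    pi = r.mean()
    p1 = (r * k).sum() / (r * m).sum()
    p0 = ((1 - r) * k).sum() / ((1 - r) * m).sum()

print(round(pi, 2), round(p1, 2), round(p0, 2))  # near 0.45, 0.75, 0.20
```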
