Similar Literature
20 similar documents found.
1.
Baseline adjustment is an important consideration in thorough QT studies for non‐antiarrhythmic drugs. For crossover studies with period‐specific pre‐dose baselines, we propose a by‐time‐point analysis of covariance model with change from pre‐dose baseline as response, treatment as a fixed effect, pre‐dose baseline for current treatment and pre‐dose baseline averaged across treatments as covariates, and subject as a random effect. Additional factors such as period and sex should be included in the model as appropriate. Multiple pre‐dose measurements can be averaged to obtain a pre‐dose‐averaged baseline and used in the model. We provide conditions under which the proposed model is more efficient than other models. We demonstrate the efficiency and robustness of the proposed model both analytically and through simulation studies. The advantage of the proposed model is also illustrated using the data from a real clinical trial. Copyright © 2014 John Wiley & Sons, Ltd.
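A minimal sketch of the kind of by-time-point model described above, fitted with statsmodels MixedLM; the data frame and its column names (cfb, bl_trt, bl_avg, and so on) are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed data layout, not the authors' implementation) of a
# by-time-point ANCOVA on change from pre-dose baseline with subject as a
# random effect, fitted with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

def fit_bytimepoint_model(qt: pd.DataFrame, timepoint: float):
    """Fit the change-from-baseline ANCOVA at a single post-dose time point.

    Hypothetical columns:
      subject   - subject identifier
      period    - crossover period
      treatment - treatment received in that period
      time      - post-dose time point
      cfb       - change in QTc from the period-specific pre-dose baseline
      bl_trt    - pre-dose baseline for the current treatment/period
      bl_avg    - pre-dose baseline averaged across a subject's treatments
    """
    d = qt[qt["time"] == timepoint]
    # Treatment and period as fixed effects, the two baseline covariates,
    # and subject as a random intercept via the 'groups' argument.
    model = smf.mixedlm("cfb ~ treatment + period + bl_trt + bl_avg",
                        data=d, groups=d["subject"])
    return model.fit(reml=True)

# Example: res = fit_bytimepoint_model(qt, timepoint=2.0); print(res.summary())
```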

2.

In evaluating the benefit of a treatment on survival, it is often of interest to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. In many practical settings, treatment is time-dependent in the sense that subjects typically begin follow-up untreated, with some going on to receive treatment at some later time point. In observational studies, treatment is not assigned at random and, therefore, may depend on various patient characteristics. We have developed semi-parametric matching methods to estimate the average treatment effect on the treated (ATT) with respect to survival probability and restricted mean survival time. Matching is based on a prognostic score which reflects each patient’s death hazard in the absence of treatment. Specifically, each treated patient is matched with multiple as-yet-untreated patients with similar prognostic scores. The matched sets do not need to be of equal size, since each matched control is weighted in order to preserve risk score balancing across treated and untreated groups. After matching, we estimate the ATT non-parametrically by contrasting pre- and post-treatment weighted Nelson–Aalen survival curves. A closed-form variance is proposed and shown to work well in simulation studies. The proposed methods are applied to national organ transplant registry data.
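The main steps of the matching strategy can be sketched roughly as follows, assuming baseline (rather than time-dependent) treatment, hypothetical covariates age and severity, and a fixed number k of matched controls; the published method matches within as-yet-untreated risk sets and provides a closed-form variance, neither of which is attempted here.

```python
# Simplified sketch of prognostic-score matching with a weighted Nelson-Aalen
# contrast (not the published implementation). Column names and k are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def weighted_nelson_aalen(time, event, weights):
    """Weighted Nelson-Aalen cumulative hazard evaluated at the observed times."""
    order = np.argsort(time)
    t = np.asarray(time)[order]
    d = np.asarray(event)[order]
    w = np.asarray(weights, dtype=float)[order]
    at_risk = np.cumsum(w[::-1])[::-1]            # weighted number still at risk
    increments = np.where(d == 1, w / at_risk, 0.0)
    return pd.Series(np.cumsum(increments), index=t)

def att_matching(df: pd.DataFrame, k: int = 5):
    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0]
    # Prognostic score: hazard of death in the absence of treatment,
    # estimated from the untreated subjects only.
    cph = CoxPHFitter().fit(controls[["time", "event", "age", "severity"]],
                            duration_col="time", event_col="event")
    score_c = np.log(cph.predict_partial_hazard(controls)).to_numpy().ravel()
    score_t = np.log(cph.predict_partial_hazard(treated)).to_numpy().ravel()
    # Match each treated subject to its k nearest controls on the score,
    # giving each matched control weight 1/k so the matched sets balance.
    w = pd.Series(0.0, index=controls.index)
    for s in score_t:
        nearest = controls.index[np.argsort(np.abs(score_c - s))[:k]]
        w[nearest] += 1.0 / k
    H_treated = weighted_nelson_aalen(treated["time"], treated["event"],
                                      np.ones(len(treated)))
    H_matched = weighted_nelson_aalen(controls["time"], controls["event"], w)
    return H_treated, H_matched      # contrast e.g. exp(-H) survival curves
```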


3.
To protect public-use microdata, one approach is not to allow users access to the microdata. Instead, users submit analyses to a remote computer that reports back basic output from the fitted model, such as coefficients and standard errors. To be most useful, this remote server should also provide some way for users to check the fit of their models without disclosing actual data values. This paper discusses regression diagnostics for remote servers. The proposal is to release synthetic diagnostics, i.e. simulated values of residuals and of the dependent and independent variables, constructed to mimic the relationships among the real-data residuals and independent variables. Using simulations, it is shown that the proposed synthetic diagnostics can reveal model inadequacies without a substantial increase in the risk of disclosure. This approach can also be used to develop remote server diagnostics for generalized linear models.
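One rough way to mimic such synthetic diagnostics, assuming a simple linear regression and normal approximations, might look like the following; this is only meant to convey the flavour of releasing simulated residuals and predictors, not the paper's actual construction.

```python
# Rough sketch (not the paper's procedure): fit the confidential regression,
# then release synthetic predictors and synthetic residuals instead of the
# real values, so users can inspect residual plots on the remote server.
import numpy as np

rng = np.random.default_rng(1)

def synthetic_diagnostics(X, y, n_synth=200):
    """X: (n, p) confidential predictors, y: responses.
    Returns synthetic predictors and residuals for diagnostic plotting."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sigma = resid.std(ddof=X1.shape[1])
    # Synthetic predictors from a multivariate normal fitted to the real ones.
    Xs = rng.multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False), n_synth)
    # Synthetic residuals from the estimated error distribution; a more faithful
    # version would also model any dependence of residual spread on X.
    rs = rng.normal(0.0, sigma, n_synth)
    return Xs, rs     # safe to plot rs against the columns of Xs
```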

4.

Several authors (e.g. Kim and DeMets, 1987a, Biometrics) have developed methods for estimation following group sequential tests in clinical trials when each patient has only one response. In many long-term clinical trials, the subjects enter the study sequentially and yield repeated measurements or other types of multivariate observations at successive follow-up visits. Typically, investigators want to compare a parameter of interest, such as the slope over time in a repeated measures trial. In this article, we propose an exact confidence interval for such parameters in a repeated measures trial and compare it with a naive confidence interval using Monte Carlo simulation. The method is illustrated with a real example involving bone density measurements.

5.
In biomedical studies, it is of substantial interest to develop risk prediction scores using high-dimensional data, such as gene expression data, for clinical endpoints that are subject to censoring. In the presence of well-established clinical risk factors, investigators often prefer a procedure that also adjusts for these clinical variables. While accelerated failure time (AFT) models are a useful tool for the analysis of censored outcome data, they assume that covariate effects on the logarithm of time-to-event are linear, which is often unrealistic in practice. We propose to build risk prediction scores through regularized rank estimation in partly linear AFT models, where high-dimensional data such as gene expression data are modeled linearly and important clinical variables are modeled nonlinearly using penalized regression splines. We show through simulation studies that our model has better operating characteristics compared to several existing models. In particular, we show that there is a non-negligible effect on prediction as well as feature selection when nonlinear clinical effects are misspecified as linear. This work is motivated by a recent prostate cancer study, where investigators collected gene expression data along with established prognostic clinical variables and the primary endpoint is time to prostate cancer recurrence. We analyzed the prostate cancer data and evaluated the prediction performance of several models based on the extended c statistic for censored data, showing that (1) the relationship between the clinical variable, prostate-specific antigen (PSA), and prostate cancer recurrence is likely nonlinear, i.e., the time to recurrence decreases as PSA increases and starts to level off when PSA becomes greater than 11; (2) correct specification of this nonlinear effect improves performance in prediction and feature selection; and (3) addition of gene expression data does not seem to further improve the performance of the resultant risk prediction scores.

6.
In adaptive clinical trial research, it is common to use certain data-dependent design weights to assign individuals to treatments so that more study subjects are assigned to the better treatment. These design weights must also be used for consistent estimation of the treatment effects as well as the effects of the other prognostic factors. In practice, however, there are situations where it may be necessary to collect binary responses repeatedly from an individual over a period of time and to obtain consistent estimates for the treatment effect as well as the effects of the other covariates in such a binary longitudinal set-up. In this paper, we introduce a binary-response-based longitudinal adaptive design for the allocation of individuals to a better treatment and propose a weighted generalized quasi-likelihood approach for consistent and efficient estimation of the regression parameters, including the treatment effects.

7.
Markers, which are prognostic longitudinal variables, can be used to replace some of the information lost due to right censoring. They may also be used to remove or reduce bias due to informative censoring. In this paper, the authors propose novel methods for using markers to increase the efficiency of log‐rank tests and hazard ratio estimation, as well as parametric estimation. They propose a 'plug‐in' methodology that consists of writing the test statistic or estimate of interest as a functional of Kaplan–Meier estimators. The latter are then replaced by an efficient estimator of the survival curve that incorporates information from markers. Using simulations, the authors show that the resulting estimators and tests can be up to 30% more efficient than the usual procedures, provided that the marker is highly prognostic and that the frequency of censoring is high.

8.
The problem of statistical calibration of a measuring instrument can be framed both in a statistical and in an engineering context. In the first, the problem is dealt with by distinguishing between the 'classical' approach and the 'inverse' regression approach. Both of these models are static and are used to estimate exact measurements from measurements that are affected by error. In the engineering context, the variables of interest are considered to be observed sequentially over time. The Bayesian time series analysis method of Dynamic Linear Models can be used to monitor the evolution of the measures, thus introducing a dynamic approach to statistical calibration. The research presented here employs this new approach to statistical calibration. A simulation study in the context of microwave radiometry is conducted to compare the dynamic model with traditional static frequentist and Bayesian approaches. The focus of the study is to understand how well the dynamic statistical calibration method performs under various signal-to-noise ratios, denoted r.
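A minimal sketch of the dynamic idea, assuming a random-walk calibration line and known noise variances; it uses a hand-rolled Kalman filter rather than the study's radiometer-specific model.

```python
# Minimal sketch, under assumed noise variances, of tracking a time-varying
# calibration line y_t = alpha_t + beta_t * x_t with a dynamic linear model
# (random-walk states, Kalman filter). Illustrative only.
import numpy as np

def dlm_calibration(x, y, obs_var=1.0, state_var=1e-3):
    """Kalman filter for a DLM with state theta_t = (alpha_t, beta_t)."""
    m = np.zeros(2)                  # state mean
    C = np.eye(2) * 1e6              # diffuse initial state covariance
    W = np.eye(2) * state_var        # state evolution covariance (random walk)
    states = []
    for xt, yt in zip(x, y):
        F = np.array([1.0, xt])      # observation vector at time t
        R = C + W                    # predict step (identity evolution matrix)
        f = F @ m                    # one-step forecast of y_t
        Q = F @ R @ F + obs_var      # forecast variance
        A = R @ F / Q                # Kalman gain
        m = m + A * (yt - f)         # update step
        C = R - np.outer(A, F @ R)
        states.append(m.copy())
    return np.array(states)          # filtered (alpha_t, beta_t) over time

# Example with a simulated drifting intercept:
# x = np.random.uniform(0, 10, 200)
# y = (2.0 + 0.001 * np.arange(200)) + 0.5 * x + np.random.normal(0, 1, 200)
# print(dlm_calibration(x, y)[-1])
```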

9.
Many important variables in business and economics are neither measured nor measurable but are simply defined in terms of other measured variables. For instance, the real interest rate is defined as the difference between the nominal interest rate and the inflation rate. There are two ways to forecast a defined variable: one can directly forecast the variable itself, or one can derive the forecast of the defined variable indirectly from the forecasts of the constituent variables. Using Box-Jenkins univariate time series analysis for four defined variables—real interest rate, money multiplier, real GNP, and money velocity—the forecasting accuracy of the two methods is compared. The results show that indirect forecasts tend to outperform direct methods for these defined variables.
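The direct-versus-indirect comparison could be sketched along these lines, using statsmodels ARIMA as a stand-in for the Box-Jenkins modelling; the ARIMA orders and series are illustrative assumptions rather than the specifications used in the study.

```python
# Sketch of the direct vs. indirect forecast of a defined variable (real
# interest rate = nominal rate - inflation); orders are assumptions.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def direct_vs_indirect(nominal: pd.Series, inflation: pd.Series, h: int = 4):
    """Forecast the real interest rate directly and indirectly, h steps ahead."""
    real = nominal - inflation                       # the defined variable
    # Direct: model the defined variable itself.
    direct_fc = ARIMA(real, order=(1, 0, 1)).fit().forecast(h)
    # Indirect: model the constituents and combine their forecasts.
    nominal_fc = ARIMA(nominal, order=(1, 1, 1)).fit().forecast(h)
    inflation_fc = ARIMA(inflation, order=(1, 1, 1)).fit().forecast(h)
    indirect_fc = nominal_fc - inflation_fc
    return pd.DataFrame({"direct": direct_fc, "indirect": indirect_fc})
```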

10.
I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.

11.
Joint damage in psoriatic arthritis can be measured by clinical and radiological methods, the former being used more frequently during longitudinal follow-up of patients. Motivated by the need to compare findings based on the different methods, which have different observation patterns, we consider longitudinal data where the outcome variable is a cumulative total of counts that can be unobserved when other, informative, explanatory variables are recorded. We demonstrate how to calculate the likelihood for such data when it is assumed that the increment in the cumulative total follows a discrete distribution with a location parameter that depends on a linear function of explanatory variables. An approach to the incorporation of informative observation is suggested. We present analyses based on an observational database from a psoriatic arthritis clinic. Although the use of the new statistical methodology has relatively little effect in this example, simulation studies indicate that the method can provide substantial improvements in bias and coverage in some situations where there is an important time-varying explanatory variable.

12.
Background: In age‐related macular degeneration (ARMD) trials, the FDA‐approved endpoint is the loss (or gain) of at least three lines of vision as compared to baseline. The use of such a response endpoint entails a potentially severe loss of information. A more efficient strategy could be obtained by using longitudinal measures of the change in visual acuity. In this paper we investigate, by using data from two randomized clinical trials, the mean and variance–covariance structures of the longitudinal measurements of the change in visual acuity. Methods: Individual patient data were collected in 234 patients in a randomized trial comparing interferon‐α with placebo and in 1181 patients in a randomized trial comparing three active doses of pegaptanib with sham. A linear model for longitudinal data was used to analyze the repeated measurements of the change in visual acuity. Results: For both trials, the data were adequately summarized by a model that assumed a quadratic trend for the mean change in visual acuity over time, a power variance function, and an antedependence correlation structure. The power variance function was remarkably similar for the two datasets and involved the square root of the measurement time. Conclusions: The similarity of the estimated variance functions and correlation structures for both datasets indicates that these aspects may be a genuine feature of measurements of the change in visual acuity in patients with ARMD. This feature can be used in the planning and analysis of trials that use visual acuity as the clinical endpoint of interest. Copyright © 2010 John Wiley & Sons, Ltd.

13.
Minimisation is a method often used in clinical trials to balance the treatment groups with respect to some prognostic factors. In the case of two treatments, the predictability of this method is calculated for different numbers of factors, different numbers of levels of each factor, and different proportions of the population at each level. It is shown that if we know nothing about the previous patients except the last treatment allocation, the next treatment can be correctly guessed more than 60% of the time if no biased coin is used. If the two previous assignments are known to have been the same, the next treatment can be guessed correctly around 80% of the time. It is therefore suggested that a biased coin should always be used with minimisation. Different choices of biased coin are investigated in terms of the reduction in predictability and the increase in imbalance that they produce. An alternative design to minimisation, which makes use of optimum design theory, is also investigated by means of simulation and does not appear to have any clear advantages over minimisation with a biased coin.
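A compact sketch of minimisation with a biased coin for two treatments, using Pocock-Simon style marginal totals; the bias probability p = 0.8 and the factor coding are illustrative assumptions, not values taken from the paper.

```python
# Sketch of two-arm minimisation with a biased coin (marginal-total imbalance);
# p = 0.8 and the factor coding are assumptions for illustration.
import random

def minimise_assign(counts, patient_levels, p=0.8):
    """counts[factor][level] is a two-element list of current allocations per arm;
    patient_levels maps factor -> this patient's level. Returns 0 or 1 and
    updates counts in place."""
    # Imbalance that each candidate arm would create, summed over the
    # patient's own factor levels.
    imbalance = []
    for arm in (0, 1):
        total = 0
        for factor, level in patient_levels.items():
            c = counts[factor][level].copy()
            c[arm] += 1
            total += abs(c[0] - c[1])
        imbalance.append(total)
    if imbalance[0] == imbalance[1]:
        arm = random.randint(0, 1)                   # tie: allocate at random
    else:
        preferred = 0 if imbalance[0] < imbalance[1] else 1
        # Biased coin: favour the minimising arm with probability p < 1, which
        # keeps the next allocation from being fully predictable.
        arm = preferred if random.random() < p else 1 - preferred
    for factor, level in patient_levels.items():
        counts[factor][level][arm] += 1
    return arm

# Example with two factors (sex, age group):
# counts = {"sex": {"M": [0, 0], "F": [0, 0]},
#           "age": {"<65": [0, 0], ">=65": [0, 0]}}
# arm = minimise_assign(counts, {"sex": "F", "age": "<65"})
```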

14.
In many experiments, several measurements on the same variable are taken over time, a geographic region, or some other index set. It is often of interest to know if there has been a change over the index set in the parameters of the distribution of the variable. Frequently, the data consist of a sequence of correlated random variables, and there may also be several experimental units under observation, each providing a sequence of data. A problem in ascertaining the boundaries between the layers in geological sedimentary beds is used to introduce the model and then to illustrate the proposed methodology. It is assumed that, conditional on the change point, the data from each sequence arise from an autoregressive process that undergoes a change in one or more of its parameters. Unconditionally, the model then becomes a mixture of nonstationary autoregressive processes. Maximum-likelihood methods are used, and results of simulations to evaluate the performance of these estimators under practical conditions are given.
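A bare-bones illustration of the change-point idea for a single AR(1) sequence, profiling the Gaussian likelihood over candidate change points; the model in the paper (a mixture of nonstationary autoregressions across several experimental units) is considerably richer.

```python
# Sketch: estimate a single change point in an AR(1) series by maximising the
# two-segment conditional Gaussian likelihood. Illustrative simplification of
# the modelling described above.
import numpy as np

def ar1_loglik(x):
    """Conditional Gaussian log-likelihood of an AR(1) fitted by least squares."""
    y, z = x[1:], x[:-1]
    zc = z - z.mean()
    phi = np.dot(zc, y - y.mean()) / np.dot(zc, zc)
    c = y.mean() - phi * z.mean()
    resid = y - c - phi * z
    s2 = resid.var()
    n = len(resid)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def changepoint_ar1(x, min_seg=10):
    """Return the index maximising the two-segment AR(1) profile likelihood."""
    best_tau, best_ll = None, -np.inf
    for tau in range(min_seg, len(x) - min_seg):
        ll = ar1_loglik(x[:tau]) + ar1_loglik(x[tau:])
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```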

15.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when it is likely to cause fewer side effects than the active control. In the case of time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size for a time-to-event noninferiority trial. Additionally, the choice of the noninferiority margin is obtained either from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of Proportional Time is proposed. This method uses the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the noninferiority margin and the indirect testing of superiority of the treatment compared with placebo. Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size
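As a crude alternative to the analytic calculation, the proportional-time assumption can be explored by simulation; the sketch below assumes lognormal lifetimes with no censoring, so that a non-inferiority test on the time ratio reduces to a one-sided comparison of mean log event times. This is not the generalized-gamma-ratio method proposed in the paper.

```python
# Simulation-based power check under a 'proportional time' assumption
# (treatment multiplies event times by a constant ratio). Lognormal lifetimes,
# no censoring, and all numeric settings are assumptions for illustration.
import numpy as np
from scipy import stats

def ni_power(n_per_arm, time_ratio=1.0, ni_margin=0.8, sigma=0.8,
             alpha=0.025, n_sim=2000, seed=0):
    """Estimate power to conclude the new treatment is not worse than control
    by more than the margin on the time-ratio scale."""
    rng = np.random.default_rng(seed)
    wins = 0
    log_margin = np.log(ni_margin)           # NI limit on the log-time scale
    for _ in range(n_sim):
        ctrl = rng.lognormal(mean=np.log(12.0), sigma=sigma, size=n_per_arm)
        trt = time_ratio * rng.lognormal(mean=np.log(12.0), sigma=sigma,
                                         size=n_per_arm)
        diff = np.log(trt).mean() - np.log(ctrl).mean()
        se = np.sqrt(np.log(trt).var(ddof=1) / n_per_arm +
                     np.log(ctrl).var(ddof=1) / n_per_arm)
        # Declare non-inferiority if the lower confidence bound exceeds log(margin).
        if diff - stats.norm.ppf(1 - alpha) * se > log_margin:
            wins += 1
    return wins / n_sim

# Example: compare candidate sample sizes when the true time ratio is 1.
# print(ni_power(150), ni_power(200))
```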

16.
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model‐based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model‐based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group means consistently, together with a corresponding variance estimator. Simulation showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
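The issue can be illustrated with a standardisation-style estimate of the group mean: average the predicted responses over the whole covariate distribution with treatment set to each arm, rather than predicting at the mean covariate. The sketch below does this for a logistic GLM with hypothetical column names; the paper's estimator and its variance formula may differ.

```python
# Sketch of a population-level group mean in a logistic GLM via
# standardisation over the observed covariates; column names are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def marginal_group_means(df: pd.DataFrame):
    """df columns (hypothetical): hypo (0/1 outcome), trt (arm label), age, a1c."""
    fit = smf.glm("hypo ~ C(trt) + age + a1c", data=df,
                  family=sm.families.Binomial()).fit()
    means = {}
    for arm in df["trt"].unique():
        counterfactual = df.copy()
        counterfactual["trt"] = arm        # set every subject to this arm
        # Average the predicted probability over the observed covariate
        # distribution, not at the mean covariate.
        means[arm] = fit.predict(counterfactual).mean()
    return means
```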

17.
In many clinical studies where time to failure is of primary interest, patients may fail or die from one of many causes, and failure times can be right censored. In some circumstances, it might also be the case that patients are known to have died but the cause of death is not available for some of them. Under the assumption that cause of death is missing at random, we compare the Goetghebeur and Ryan (1995, Biometrika, 82, 821–833) partial likelihood approach with the Dewanji (1992, Biometrika, 79, 855–857) partial likelihood approach. We show that the estimator of the regression coefficients based on the Dewanji partial likelihood is not only consistent and asymptotically normal, but also semiparametric efficient. While the Goetghebeur and Ryan estimator is more robust than the Dewanji partial likelihood estimator against misspecification of proportional baseline hazards, the Dewanji partial likelihood estimator allows the probability of missing cause of failure to depend on covariate information without the need to model the missingness mechanism. Tests for proportional baseline hazards are also suggested, and a robust variance estimator is derived.

18.
In the absence of placebo‐controlled trials, the efficacy of a test treatment can be alternatively examined by showing its non‐inferiority to an active control; that is, the test treatment is not worse than the active control by a pre‐specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non‐inferiority setup involves a network of direct and indirect comparisons between test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta‐analysis that models the uncertainty and heterogeneity of the historical trials into the non‐inferiority trial in a data‐driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non‐inferiority testing are discussed that are analogs of the synthesis and fixed‐margin approach. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was only present in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non‐inferiority trial. It can further answer other questions of interest, such as comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non‐inferiority. Copyright © 2015 John Wiley & Sons, Ltd.

19.
Multi‐country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment effect homogeneity can lead researchers, regulators and other stakeholders to over‐interpret apparent differences between subgroups even when heterogeneity tests are insignificant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction but instead provide non‐inferential visual aids, which allow comparison of the observed and expected differences between regions or other subgroups and are a useful supplement to a formal test of interaction. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), which was a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package is available that implements the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
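A simulation analogue of the order-statistics reasoning can be sketched as follows (the paper derives its three measures analytically): under a common true treatment effect, how far apart should the country-level estimates be expected to fall purely by chance?

```python
# Sketch: expected spread of ordered region-level estimates under homogeneity,
# obtained by simulation rather than the paper's analytic order-statistics
# results. Inputs are the overall effect and the regional standard errors.
import numpy as np

def expected_ordered_effects(overall_effect, ses, n_sim=20000, seed=0):
    """ses: standard errors of the K country/region estimates.
    Returns the average ordered estimates (smallest to largest) and the
    expected range under a common true effect."""
    rng = np.random.default_rng(seed)
    ses = np.asarray(ses)
    sims = rng.normal(overall_effect, ses, size=(n_sim, len(ses)))
    ordered = np.sort(sims, axis=1)
    return ordered.mean(axis=0), (ordered[:, -1] - ordered[:, 0]).mean()

# Example: compare the observed max-minus-min of regional log hazard ratios
# with the expected range produced by chance alone.
# exp_sorted, exp_range = expected_ordered_effects(-0.15, [0.10, 0.12, 0.18, 0.25])
```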

20.
Longitudinal clinical trials with long follow-up periods almost invariably suffer from loss to follow-up and non-compliance with the assigned therapy. An example is protocol 128 of the AIDS Clinical Trials Group, a 5-year equivalency trial comparing reduced dose zidovudine with the standard dose for treatment of paediatric acquired immune deficiency syndrome patients. This study compared responses to treatment by using both clinical and cognitive outcomes. The cognitive outcomes are of particular interest because the effects of human immunodeficiency virus infection of the central nervous system can be more acute in children than in adults. We formulate and apply a Bayesian hierarchical model to estimate both the intent-to-treat effect and the average causal effect of reducing the prescribed dose of zidovudine by 50%. The intent-to-treat effect quantifies the causal effect of assigning the lower dose, whereas the average causal effect represents the causal effect of actually taking the lower dose. We adopt a potential outcomes framework where, for each individual, we assume the existence of a different potential outcomes process at each level of time spent on treatment. The joint distribution of the potential outcomes and the time spent on assigned treatment is formulated using a hierarchical model: the potential outcomes distribution is given at the first level, and dependence between the outcomes and time on treatment is specified at the second level by linking the time on treatment to subject-specific effects that characterize the potential outcomes processes. Several distributional and structural assumptions are used to identify the model from observed data, and these are described in detail. A detailed analysis of AIDS Clinical Trials Group protocol 128 is given; inference about both the intent-to-treat effect and the average causal effect indicates a high probability of dose equivalence with respect to cognitive functioning.
