Similar Literature (20 matching records)
1.
In this article, we study the problem of selecting the best population from among several exponential populations based on interval censored samples using a Bayesian approach. A Bayes selection procedure and a curtailed Bayes selection procedure are derived. We show that these two Bayes selection procedures are equivalent. A numerical example is provided to illustrate the application of the two selection procedures. We also use Monte Carlo simulation to study the performance of the two selection procedures. The numerical results of the simulation study demonstrate that the curtailed Bayes selection procedure performs well because it can substantially reduce the duration of the life-test experiment.
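
As a concrete illustration of the Bayesian selection idea, the following sketch computes the posterior probability that each exponential population has the largest mean lifetime, using Monte Carlo draws from conjugate gamma posteriors. It assumes exact (not interval censored) failure times and illustrative gamma hyperparameters `a0`, `b0`; the paper's interval-censored procedure and its curtailed variant are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_best(samples_by_pop, a0=1.0, b0=1.0, n_draws=10_000):
    """Posterior probability that each exponential population has the
    largest mean lifetime, under independent Gamma(a0, b0) priors on the
    failure rates (conjugate when exact failure times are observed)."""
    mean_draws = []
    for x in samples_by_pop:
        x = np.asarray(x, dtype=float)
        # Posterior of the rate is Gamma(a0 + n, rate = b0 + sum(x)).
        lam = rng.gamma(a0 + x.size, 1.0 / (b0 + x.sum()), size=n_draws)
        mean_draws.append(1.0 / lam)               # mean lifetime = 1 / rate
    means = np.column_stack(mean_draws)
    best = means.argmax(axis=1)                    # best population per draw
    return np.bincount(best, minlength=means.shape[1]) / n_draws

pops = [rng.exponential(scale=s, size=30) for s in (8.0, 10.0, 12.0)]
print(prob_best(pops))   # select the population with the highest probability
```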

2.
Multiple outcomes are increasingly used to assess chronic disease progression. We discuss and show how desirability functions can be used to assess a patient's overall response to a treatment using multiple outcome measures, each of which may contribute unequally to the final assessment. Because judgments on disease progression and the relative contribution of each outcome can be subjective, we propose a data-driven approach that minimizes these biases by using desirability functions whose shapes and weights are estimated from a given gold standard. Our method provides each patient with a meaningful overall progression score that facilitates comparison and clinical interpretation. We also extend the methodology in a novel way to monitor patients' disease progression when there are multiple time points, and we illustrate our method using a longitudinal data set from a randomized two-arm clinical trial for scleroderma patients.
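
A minimal sketch of the desirability-function machinery, assuming simple one-sided Derringer–Suich desirability curves with fixed illustrative bounds, shapes and weights (`bounds`, `weights`); in the paper these shapes and weights are estimated from a gold standard rather than fixed.

```python
import numpy as np

def desirability(y, lo, hi, s=1.0):
    """One-sided Derringer-Suich desirability: 0 below `lo`, 1 above `hi`,
    with shape parameter `s` controlling the curve in between."""
    d = np.clip((np.asarray(y, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return d ** s

def overall_score(outcomes, bounds, weights):
    """Weighted geometric mean of the per-outcome desirabilities."""
    ds = np.array([desirability(y, lo, hi, s)
                   for y, (lo, hi, s) in zip(outcomes, bounds)])
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    return float(np.exp(np.sum(w * np.log(np.maximum(ds, 1e-12)))))

# One patient's three outcome measures, with illustrative bounds and weights.
print(overall_score(outcomes=[4.2, 0.7, 55.0],
                    bounds=[(0, 10, 1.0), (0, 1, 2.0), (20, 80, 1.0)],
                    weights=[0.5, 0.3, 0.2]))
```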

3.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data may be available at the interim analysis on an early outcome for a larger number of patients than have primary endpoint data. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. The performance of the new method is similar to, and in some cases better than, that of either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

4.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated measures outcome, such as CD4 count.

5.
When estimating the treatment effect on a count outcome in a given population, different studies use different models, resulting in non-comparable measures of treatment effect. Here we show that the marginal rate differences in these studies are comparable measures of treatment effect. We estimate the marginal rate differences by log-linear models and show that their finite-sample maximum-likelihood estimates are unbiased and highly robust to the effects of dispersing covariates on the outcome. We obtain approximate finite-sample distributions of these estimates by using the asymptotic normal distribution of the estimates of the log-linear model parameters. The method is easy to apply in practice.
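
A hedged sketch of the standardization (g-computation) step that turns a fitted log-linear model into a marginal rate difference, using simulated data and statsmodels; the paper's finite-sample unbiasedness and robustness results are analytical and are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"treat": rng.integers(0, 2, n), "x": rng.normal(size=n)})
df["y"] = rng.poisson(np.exp(0.2 + 0.5 * df["treat"] + 0.3 * df["x"]))

# Log-linear (Poisson) model for the count outcome.
fit = smf.glm("y ~ treat + x", data=df, family=sm.families.Poisson()).fit()

# Standardization: predict every subject under treat=1 and under treat=0,
# then average; the difference of the averages is the marginal rate difference.
r1 = fit.predict(df.assign(treat=1)).mean()
r0 = fit.predict(df.assign(treat=0)).mean()
print("marginal rate difference:", r1 - r0)
```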

6.
Selecting predictors to optimize outcome prediction is an important statistical problem, but standard selection procedures usually ignore the false positives among the selected predictors. In this article, we advocate a conventional stepwise forward variable selection method based on the predicted residual sum of squares (PRESS), and develop a positive false discovery rate (pFDR) estimate for the selected predictor subset as well as a local pFDR estimate to prioritize the selected predictors. This pFDR estimate takes account of the existence of non-null predictors and is proved to be asymptotically conservative. In addition, we propose two views of a variable selection process: an overall and an individual test. An interesting feature of the overall test is that its power to select non-null predictors increases with the proportion of non-null predictors among all candidate predictors. Data analysis is illustrated with an example in which genetic and clinical predictors were selected to predict the cholesterol level change after four months of tamoxifen treatment, and the pFDR was estimated. The method's performance is evaluated through statistical simulations.
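
The selection step can be sketched as greedy forward selection minimizing PRESS, computed through the leave-one-out identity e_i / (1 − h_ii) for ordinary least squares. This is only the selection half of the method; the pFDR and local pFDR estimates that the article develops for the selected subset are not shown, and function names are illustrative.

```python
import numpy as np

def press(X, y):
    """PRESS (predicted residual sum of squares) for OLS, via the
    leave-one-out identity e_i / (1 - h_ii)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T
    e = y - H @ y
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))

def forward_press(X, y):
    """Greedy forward selection: add the predictor that most reduces
    PRESS; stop when no addition improves it."""
    selected, remaining = [], list(range(X.shape[1]))
    best = press(np.empty((len(y), 0)), y)     # intercept-only baseline
    while remaining:
        scores = {j: press(X[:, selected + [j]], y) for j in remaining}
        j, s = min(scores.items(), key=lambda kv: kv[1])
        if s >= best:
            break
        selected.append(j); remaining.remove(j); best = s
    return selected, best

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=100)
print(forward_press(X, y))    # expect predictors 0 and 3 to be selected
```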

7.
Randomized clinical trials with count measurements as the primary outcome are common in various medical areas, such as seizure counts in epilepsy trials or relapse counts in multiple sclerosis trials. Controlled clinical trials frequently use a conventional parallel-group design that assigns subjects randomly to one of two treatment groups and repeatedly evaluates them at baseline and at intervals across a treatment period of fixed duration. The primary interest is to compare the rates of change between treatment groups. Generalized estimating equations (GEEs) have been widely used to compare rates of change between treatment groups because of their robustness to misspecification of the true correlation structure. In this paper, we derive a sample size formula for comparing the rates of change between two groups in a repeatedly measured count outcome using GEE. The sample size formula incorporates general missing patterns, such as independent missing and monotone missing, and general correlation structures, such as AR(1) and compound symmetry (CS). The performance of the sample size formula is evaluated through simulation studies. Sample size estimation is illustrated by a clinical trial example from epilepsy.
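
For intuition, here is the classical continuous-outcome analogue of such a formula (e.g., Diggle et al.) for detecting a difference `d` in slopes under compound-symmetry correlation with complete follow-up; the paper's formula for count outcomes under general missingness and correlation structures is more involved and is not reproduced here.

```python
from math import ceil
from scipy.stats import norm

def n_per_group_slope(d, sigma2, rho, times, alpha=0.05, power=0.9):
    """Sample size per group for detecting a slope difference `d` in a
    repeatedly measured continuous outcome, assuming compound-symmetry
    correlation `rho`, residual variance `sigma2`, and no missing data.
    (A classical simplification, not the paper's count-outcome formula.)"""
    m = len(times)
    tbar = sum(times) / m
    sx2 = sum((t - tbar) ** 2 for t in times) / m   # spread of visit times
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 * sigma2 * (1 - rho) / (m * sx2 * d**2))

print(n_per_group_slope(d=0.1, sigma2=1.0, rho=0.5, times=[0, 1, 2, 3]))
```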

8.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a saving in sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.
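
One standard ingredient of such designs, the predicted (Bayesian predictive) power at an interim analysis, can be sketched as follows, assuming a flat prior on the drift of the B-value process; the paper's loss structure for the efficacy/futility decision is not reproduced.

```python
from math import sqrt
from scipy.stats import norm

def predictive_power(z_interim, info_frac, alpha=0.025):
    """Predictive power at an interim look: the probability of final
    significance, averaging conditional power over the posterior of the
    drift under a noninformative (flat) prior on the treatment effect."""
    t, za = info_frac, norm.ppf(1 - alpha)
    return norm.cdf((z_interim / sqrt(t) - za) * sqrt(t / (1 - t)))

# Interim z = 1.8 at half of the planned information: promising but not
# yet conclusive; the design would weigh this against its loss structure.
print(predictive_power(z_interim=1.8, info_frac=0.5))
```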

9.
For a binary treatment ν = 0, 1 and the corresponding 'potential responses' Y_0 for the control group (ν = 0) and Y_1 for the treatment group (ν = 1), one definition of no treatment effect is that Y_0 and Y_1 follow the same distribution given a covariate vector X. Koul and Schick have provided a non-parametric test for no distributional effect when the realized response (1 − ν)Y_0 + νY_1 is fully observed and the distribution of X is the same across the two groups. This test is thus not applicable to censored responses, nor to non-experimental (i.e. observational) studies that entail different distributions of X across the two groups. We propose 'X-matched' non-parametric tests generalizing the test of Koul and Schick, following an idea of Gehan. Our tests are applicable to non-experimental data with randomly censored responses. In addition to these motivations, the tests have several advantages. First, they have the intuitive appeal of comparing all available pairs across the treatment and control groups, instead of selecting a number of matched controls (or treated) as in the usual pair or multiple matching. Second, whereas most matching estimators or tests have a non-overlapping support (of X) problem across the two groups, our tests have built-in protection against the problem. Third, Gehan's idea allows the tests to make good use of censored observations. A simulation study is conducted, and an empirical illustration of a job training effect on the duration of unemployment is provided.
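
The Gehan building block, comparing all treated-control pairs under right censoring, can be sketched as below; the paper's X-matched tests additionally weight each pair by covariate proximity, which this sketch omits.

```python
def gehan_score(t1, d1, t0, d0):
    """Gehan comparison of one treated (time t1, event flag d1) and one
    control (t0, d0) observation under right censoring: +1 if the treated
    time is definitely longer, -1 if definitely shorter, 0 if the ordering
    is ambiguous because of censoring."""
    if t1 > t0 and d0:      # control failed before the treated time
        return 1
    if t0 > t1 and d1:      # treated failed before the control time
        return -1
    return 0

def gehan_statistic(times1, events1, times0, events0):
    """Sum of Gehan scores over all treated x control pairs -- the building
    block that the X-matched tests weight by covariate proximity."""
    return sum(gehan_score(t1, d1, t0, d0)
               for t1, d1 in zip(times1, events1)
               for t0, d0 in zip(times0, events0))

print(gehan_statistic([5.0, 7.2], [1, 0], [3.1, 6.4], [1, 1]))
```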

10.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up and end of follow-up. In such a trial, the potential censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. Robins (1995) specified two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. The goal of this paper is to provide a class of consistent and reasonably efficient semiparametric tests and estimators for the treatment effect under these assumptions. The tests in our class, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also study the estimation, in the presence of informative censoring, of the effect of treatment on the evolution over time of the mean of a repeated measures outcome, such as CD4 count.

11.
Clinical trials of chronic, progressive conditions use the rate of change on continuous measures as the primary outcome, with slowing of progression on the measure as evidence of clinical efficacy. For clinical trials with a single prespecified primary endpoint, it is important to choose an endpoint with the best signal-to-noise properties to optimize statistical power to detect a treatment effect. Composite endpoints composed of a linear weighted average of candidate outcome measures have also been proposed. Composites constructed as simple sums or averages of component tests, as well as composites constructed using weights derived from more sophisticated approaches, can be suboptimal, in some cases performing worse than individual outcome measures. We extend recent research on the construction of efficient linearly weighted composites by establishing the often overlooked connection between trial design and composite performance under linear mixed effects model assumptions, and we derive a formula for calculating composites that are optimal for longitudinal clinical trials of known, arbitrary design. Using data from a completed trial, we provide example calculations showing that the optimally weighted linear combination of scales can improve the efficiency of trials by almost 20% compared with the most efficient of the individual component scales. Additional simulations and analytical results demonstrate the potential losses in efficiency that can result from alternative published approaches to composite construction and explore the impact of weight estimation on composite performance. Copyright © 2016 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.
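
For a single measurement per patient, the generic version of the optimality result is that weights proportional to Σ⁻¹δ maximize the composite's signal-to-noise ratio; the sketch below illustrates this with made-up effect sizes and covariance. The paper's formula additionally accounts for the longitudinal trial design under a linear mixed effects model, which this sketch does not.

```python
import numpy as np

def optimal_weights(delta, Sigma):
    """Weights maximizing (w'delta)^2 / (w'Sigma w) for a linear composite:
    w proportional to Sigma^{-1} delta (generic single-timepoint version)."""
    w = np.linalg.solve(Sigma, delta)
    return w / np.abs(w).sum()          # scale is arbitrary; normalize

delta = np.array([0.30, 0.20, 0.25])    # illustrative per-scale effects
Sigma = np.array([[1.0, 0.5, 0.3],      # illustrative covariance of scales
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
w = optimal_weights(delta, Sigma)
snr = (w @ delta) ** 2 / (w @ Sigma @ w)   # signal-to-noise of the composite
print(w, snr)
```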

12.
The lack of outcome measures that are validated for use in children limits the effectiveness and generalizability of paediatric health care interventions. Statistical epidemiology is a broad concept encompassing a wide range of techniques useful for child health outcome assessment and development. However, the range of available techniques is often confusing, and this hinders their adoption. This paper provides an overview of the methodology within the paediatric context. It is demonstrated that in many cases assessment can be performed relatively straightforwardly using standard statistical techniques, although sometimes more sophisticated techniques are required. Examples of both physiological and questionnaire-based outcomes are given. The usefulness of these techniques is highlighted for achieving specific objectives and, ultimately, for achieving methodological rigour in clinical outcome studies performed in the paediatric population.

13.
We investigate mixed models for repeated measures data from cross-over studies in general, and in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; we call the resulting models the RMC model and the saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest that: (i) point estimates of treatment contrasts from all models considered are similar; (ii) confidence intervals for treatment contrasts under the previously proposed random effects model do not have adequate coverage properties, so that model cannot be recommended for the analysis of marginal QT prolongation; (iii) the RMC model and the saturated model have similar precision and coverage properties, and both are suitable for the assessment of marginal QT prolongation; and (iv) the Akaike information criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data, in the sense that the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if it is also the most parsimonious model.

14.
In this paper, a review is given of various goodness-of-fit measures that have been proposed for the binary choice model over the last two decades. The relative behaviour of several pseudo-R2 measures is analysed in a series of misspecified binary choice models, the misspecification being omitted variables or an included irrelevant variable. A comparison is made with the OLS R2 of the underlying latent variable model and with the squared sample correlation coefficient of the true and predicted probabilities. Furthermore, it is investigated how the values of the measures change as the frequency of successes changes.
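
A small sketch of two of the quantities compared: McFadden's pseudo-R2 and the squared correlation between true and predicted probabilities, computed for a deliberately misspecified logit (one regressor omitted); the data and coefficients are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.5 + X @ np.array([1.0, -0.8]))))
y = rng.binomial(1, p_true)

# Misspecified model: the second regressor is omitted.
fit = sm.Logit(y, sm.add_constant(X[:, :1])).fit(disp=0)

mcfadden = 1 - fit.llf / fit.llnull          # McFadden's pseudo-R2
r2_probs = np.corrcoef(p_true, fit.predict())[0, 1] ** 2
print(mcfadden, r2_probs)
```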

15.
In this paper, we consider the supplier selection problem, which deals with comparing two one-sided processes and selecting the one with the higher capability. We first review two existing approximation approaches and an existing exact approach, which we refer to as the division method. We then develop a new exact approach called the subtraction method and compare the two exact methods in terms of selection power. The results show that the proposed subtraction method is indeed more powerful than the division method. A two-phase selection procedure based on the subtraction method is then developed for practical applications. Some computational results are tabulated for practitioners' convenience.

16.
The prognosis for patients with high-grade gliomas is poor, with a median survival of one year. Treatment efficacy assessment is typically unavailable until 5-6 months post diagnosis. Investigators hypothesize that quantitative magnetic resonance imaging can assess treatment efficacy 3 weeks after therapy starts, thereby allowing salvage treatments to begin earlier. The purpose of this work is to build a predictive model of treatment efficacy using quantitative magnetic resonance imaging data and to assess its performance. The outcome is 1-year survival status. We propose a joint, two-stage Bayesian model. In stage I, we smooth the image data with a multivariate spatiotemporal pairwise difference prior. We propose four summary statistics that are functionals of posterior parameters from the first-stage model. In stage II, these statistics enter a generalized non-linear model as predictors of survival status. We use the probit link and a multivariate adaptive regression spline basis. Gibbs sampling and reversible jump Markov chain Monte Carlo methods are applied iteratively between the two stages to estimate the posterior distribution. Through both simulation studies and model performance comparisons, we find that higher overall correct classification rates can be achieved by accounting for the spatiotemporal correlation in the images and by allowing for the more complex and flexible decision boundary provided by the generalized non-linear model.

17.
Nonparametric predictive inference (NPI) is a statistical approach based on few assumptions about probability distributions, with inferences based on data. NPI assumes exchangeability of random quantities, both those related to observed data and future observations, and uncertainty is quantified using lower and upper probabilities. In this paper, units from several groups are placed simultaneously on a lifetime experiment and times to failure are observed. The experiment may be ended before all units have failed. Depending on the available data and a few assumptions, we present lower and upper probabilities for selecting the best group, the subset of best groups, and the subset including the best group. We also compare our approach to selecting the best group with some classical precedence selection methods. Throughout, examples are provided to demonstrate our method.

18.
Progression-free survival (PFS) is a frequently used endpoint in oncological clinical studies. For PFS, the potential events are progression and death. Progression is usually observed with delay, because it can be diagnosed only at the next scheduled study visit. For this reason, potential bias of treatment effect estimates for progression-free survival is a concern. In randomized trials, and for relative treatment effect measures such as hazard ratios, bias-correcting methods are not necessarily required or have been proposed before. However, less is known about cross-trial comparisons of absolute outcome measures such as median survival times. This paper proposes a new method for correcting the assessment-time bias of progression-free survival estimates to allow a fair cross-trial comparison of median PFS. Using median PFS as an example, the presented method approximates the unknown posterior distribution by a simulation-based Bayesian approach. It is shown that the proposed method leads to a substantial reduction in bias compared with estimates derived from maximum likelihood or Kaplan-Meier estimates: the bias could be reduced by more than 90% over a broad range of situations differing in assessment times and underlying distributions. With coverage probabilities of at least 94% based on the credible interval of the posterior distribution, the resulting estimates attain common confidence levels. In summary, the proposed approach is shown to be useful for a cross-trial comparison of median PFS.
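
The mechanism of the assessment-time bias is easy to simulate: true progression times are only recorded at the next scheduled visit, which shifts the observed median upward. The sketch below shows the effect only; it omits death and censoring and does not implement the paper's Bayesian correction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, visit_gap = 100_000, 2.0                       # assessments every 2 months

t_true = rng.exponential(scale=10.0, size=n)      # true progression times
t_seen = np.ceil(t_true / visit_gap) * visit_gap  # recorded at the next visit

print("true median PFS:    ", np.median(t_true))
print("observed median PFS:", np.median(t_seen))  # shifted upward by the
                                                  # assessment-time bias
```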

19.
In 2002, nearly 200 nations signed up to the 2010 target of the Convention on Biological Diversity, 'to significantly reduce the rate of biodiversity loss by 2010'. To assess whether the target was met, it became necessary to quantify temporal trends in measures of diversity, which resulted in a marked shift in focus for biodiversity measurement. We explore the developments in measuring biodiversity that were prompted by the 2010 target. We consider measures based on species proportions, and explain why a geometric mean of relative abundance estimates was preferred to such measures for assessing progress towards the target. We look at the use of diversity profiles and consider how species similarity can be incorporated into diversity measures. We also discuss measures of turnover that can be used to quantify shifts in community composition arising, for example, from climate change.
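
A minimal sketch of the geometric mean index of relative abundances (one index value per year, relative to a baseline year), with made-up counts; operational indices additionally handle missing series, smoothing and uncertainty, which this sketch does not.

```python
import numpy as np

def geometric_mean_index(abundance):
    """Geometric mean of abundances relative to a baseline year.
    `abundance` has shape (n_species, n_years); column 0 is the baseline."""
    rel = abundance / abundance[:, [0]]            # relative abundance
    return np.exp(np.mean(np.log(rel), axis=0))    # one index value per year

counts = np.array([[100.0, 90.0, 80.0, 70.0],
                   [ 50.0, 55.0, 52.0, 40.0],
                   [200.0, 150.0, 140.0, 100.0]])
print(geometric_mean_index(counts))   # decline reflects the rate of loss
```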

20.
Modern methods for detecting changes in the scale or covariance of multivariate distributions rely primarily on testing for the constancy of the covariance matrix. These methods depend on higher-order moment conditions and do not work well when the dimension of the data is large, or even moderate, relative to the sample size. In this paper, we propose a nonparametric change point test for multivariate data using rankings obtained from data depth measures. As the data depth of an observation measures its centrality relative to the sample, changes in data depth may signify a change of scale of the underlying distribution, and the proposed test is particularly responsive to such changes. We provide a full asymptotic theory for the proposed test statistic under the null hypothesis that the observations are stable, and natural conditions under which the test is consistent. The finite-sample properties are investigated by means of a Monte Carlo simulation, and these, along with the theoretical results, confirm that the test is robust to heavy tails, skewness and high dimensionality. The proposed methods are demonstrated with an application to structural break detection in the rate of change of pollutants linked to acid rain, measured in Turkey Lake, a lake in central Ontario, Canada. Our test suggests a change in the rate of acid rain in the late 1980s/early 1990s, which coincides with clean air legislation in Canada and the US. The Canadian Journal of Statistics 48: 417-446; 2020 © 2020 Statistical Society of Canada
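
A hedged sketch of the idea: compute a data depth for each observation (Mahalanobis depth here, one of several choices), rank the depths, and scan a CUSUM of the standardized ranks for a change. Critical values require the asymptotic theory developed in the paper and are not computed here; the statistic below is illustrative rather than the paper's exact test.

```python
import numpy as np

def mahalanobis_depth(X):
    """Mahalanobis depth of each row of X relative to the whole sample:
    1 / (1 + squared Mahalanobis distance to the sample mean)."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", X - mu, S_inv, X - mu)
    return 1.0 / (1.0 + d2)

def rank_cusum(X):
    """CUSUM scan of the centered, standardized depth ranks; a large
    maximum suggests a change in scale/centrality of the distribution."""
    n = len(X)
    r = np.argsort(np.argsort(mahalanobis_depth(X))) + 1   # depth ranks 1..n
    z = (r - (n + 1) / 2) / np.sqrt((n**2 - 1) / 12)       # standardized
    csum = np.abs(np.cumsum(z)[: n - 1]) / np.sqrt(n)
    return csum.max(), int(csum.argmax()) + 1              # stat, location

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(size=(150, 5)),
               3.0 * rng.normal(size=(150, 5))])           # scale change
print(rank_cusum(X))   # change point should be located near observation 150
```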
