Similar Literature
1.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.
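The analytical sample size calculation for the parallel comparator can be sketched with the standard two-sample normal approximation (a generic textbook formula, not the paper's epilepsy-specific calculation; the SMAR side was evaluated by simulation):

```python
import math
from statistics import NormalDist

def parallel_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm parallel trial comparing means
    (normal approximation, two-sided test at level alpha)."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# e.g. detecting a standardized effect of 0.5 at 80% power needs 63 per arm
n_per_arm = parallel_sample_size(delta=0.5, sigma=1.0)
```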

2.
A dynamic regime provides a sequence of treatments that are tailored to patient-specific characteristics and outcomes. In 2004, James Robins proposed g-estimation using structural nested mean models (SNMMs) for making inference about the optimal dynamic regime in a multi-interval trial. The method provides clear advantages over traditional parametric approaches. Robins' g-estimation method always yields consistent estimators, but these can be asymptotically biased under a given SNMM for certain longitudinal distributions of the treatments and covariates, termed exceptional laws. In fact, under the null hypothesis of no treatment effect, every distribution constitutes an exceptional law under SNMMs which allow for interaction of current treatment with past treatments or covariates. This paper provides an explanation of exceptional laws and describes a new approach to g-estimation which we call Zeroing Instead of Plugging In (ZIPI). ZIPI provides nearly identical estimators to recursive g-estimators at non-exceptional laws while providing substantial reduction in the bias at an exceptional law when decision rule parameters are not shared across intervals.

3.
In clinical trials, continuous monitoring of event incidence rate plays a critical role in making timely decisions affecting trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given length of treatment duration (e.g., incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before reaching the intended treatment duration becomes challenging, especially when the event onset profile evolves over time with accumulated exposure. In particular, in the earlier part of the study, ignoring censored subjects may result in significant bias in estimating the cumulative event incidence rate. Such a problem is addressed using a predictive approach in the Bayesian framework. In the proposed approach, experts' prior knowledge about both the frequency and timing of the event occurrence is combined with observed data. More specifically, during any interim look, each event‐free subject will be counted with a probability that is derived using prior knowledge. The proposed approach is particularly useful in early stage studies for signal detection based on limited information. But it can also be used as a tool for safety monitoring (e.g., data monitoring committee) during later stage trials. Application of the approach is illustrated using a case study where the incidence rate of an adverse event is continuously monitored during an Alzheimer's disease clinical trial. The performance of the proposed approach is also assessed and compared with other Bayesian and frequentist methods via simulation. Copyright © 2015 John Wiley & Sons, Ltd.
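A minimal version of the predictive counting step might look as follows, assuming for illustration a constant prior hazard for each event-free subject (the paper elicits expert knowledge about both frequency and timing, so this is a deliberate simplification):

```python
import math

def predicted_incidence(n_events, n_complete, censor_times, horizon, hazard):
    """Predicted cumulative incidence proportion at `horizon`.
    n_events: subjects with an observed event so far.
    n_complete: subjects event-free through the full horizon.
    censor_times: follow-up times of subjects censored early, event-free.
    Each censored subject contributes the conditional probability of a
    later event under a constant hazard: 1 - exp(-hazard*(horizon - t))."""
    partial = sum(1.0 - math.exp(-hazard * (horizon - t)) for t in censor_times)
    n = n_events + n_complete + len(censor_times)
    return (n_events + partial) / n
```

With `hazard=0` the censored subjects contribute nothing, which recovers the naive estimate that ignores their remaining risk.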

4.
A geometrical interpretation of the classical tests of the relation between two sets of variables is presented. One of the variable sets may be considered as fixed, and then we have a multivariate regression model. When the Wilks' lambda distribution is viewed geometrically it is obvious that the two special cases, the F distribution and the Hotelling T² distribution, are equivalent. From the geometrical perspective it is also obvious that the test statistic and the p-value are unchanged if the responses and the predictors are interchanged.
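The equivalence can be made explicit. Under the usual one-sample normal setup (standard results, stated in my notation rather than the paper's), Wilks' Λ, Hotelling's T², and F are linked by:

```latex
% One sample of n observations on p responses, testing H0: mu = mu_0.
% Hotelling's T^2 in terms of the sample mean \bar{x} and covariance S:
T^2 = n(\bar{x}-\mu_0)^{\top} S^{-1}(\bar{x}-\mu_0),
\qquad
\Lambda = \left(1 + \frac{T^2}{n-1}\right)^{-1},
\qquad
\frac{n-p}{(n-1)\,p}\,T^2 \sim F_{p,\,n-p}.
% For p = 1 this collapses to T^2 = t^2 and the ordinary F-test,
% which is the equivalence the geometric argument makes visible.
```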

5.
This paper considers the problem of modeling migraine severity assessments and their dependence on weather and time characteristics. We take on the viewpoint of a patient who is interested in an individual migraine management strategy. Since factors influencing migraine can differ between patients in number and magnitude, we show how a patient’s headache calendar reporting the severity measurements on an ordinal scale can be used to determine the dominating factors for this special patient. One also has to account for dependencies among the measurements. For this the autoregressive ordinal probit (AOP) model of Müller and Czado (J Comput Graph Stat 14: 320–338, 2005) is utilized and fitted to a single patient’s migraine data by a grouped move multigrid Monte Carlo (GM-MGMC) Gibbs sampler. Initially, covariates are selected using proportional odds models. Model fit and model comparison are discussed. A comparison with proportional odds specifications shows that the AOP models are preferred.

6.
A new test of the proportional hazards assumption in the Cox model is proposed. The idea is based on Neyman’s smooth tests. The Cox model with proportional hazards (i.e. time-constant covariate effects) is embedded in a model with a smoothly time-varying covariate effect that is expressed as a combination of some basis functions (e.g., Legendre polynomials, cosines). Then the smooth test is the score test for significance of these artificial covariates. Furthermore, we apply a modification of Schwarz’s selection rule to choosing the dimension of the smooth model (the number of the basis functions). The score test is then used in the selected model. In a simulation study, we compare the proposed tests with standard tests based on the score process.

7.
This paper investigates the applications of capture–recapture methods to human populations. Capture–recapture methods are commonly used in estimating the size of wildlife populations but can also be used in epidemiology and social sciences, for estimating prevalence of a particular disease or the size of the homeless population in a certain area. Here we focus on estimating the prevalence of infectious diseases. Several estimators of population size are considered: the Lincoln–Petersen estimator and its modified version, the Chapman estimator, Chao’s lower bound estimator, Zelterman’s estimator, McKendrick’s moment estimator and the maximum likelihood estimator. In order to evaluate these estimators, they are applied to real three-source capture–recapture data. By conditioning on each of the sources of the three-source data, we have been able to compare the estimators with the true value that they are estimating. The Chapman and Chao estimators were compared in terms of their relative bias. A variance formula derived through conditioning is suggested for Chao’s estimator, and normal 95% confidence intervals are calculated for this and the Chapman estimator. We then compare the coverage of the respective confidence intervals. Furthermore, a simulation study is included to compare Chao’s and Chapman’s estimator. Results indicate that Chao’s estimator is less biased than Chapman’s estimator unless both sources are independent. Chao’s estimator also has the smaller mean squared error. Finally, the implications and limitations of the above methods are discussed, with suggestions for further development. We are grateful to the Medical Research Council for supporting this work.
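The simplest of the closed-form estimators named above can be stated directly (standard formulas; the conditional variance expressions and confidence intervals developed in the paper are omitted):

```python
def lincoln_petersen(n1, n2, m):
    """Classical two-source estimator: n1, n2 are the two list sizes,
    m is the number of cases appearing on both lists."""
    return n1 * n2 / m

def chapman(n1, n2, m):
    """Bias-corrected (Chapman) version, defined even when m == 0."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def chao_lower_bound(n_observed, f1, f2):
    """Chao's lower bound for the population size: f1 cases observed
    exactly once, f2 cases observed exactly twice."""
    return n_observed + f1 ** 2 / (2 * f2)
```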

8.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long term follow-up of participants in order to observe a sufficient number of events to precisely estimate the treatment effect. In such studies, observing the outcome of interest during follow-up may be difficult and high rates of censoring may be observed which often leads to reduced power when applying straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed to take advantage of auxiliary information that may potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for the estimation of survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power could be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that the potential outcomes for each individual under treatment and control are independent of treatment group assignment, which is unlikely to hold in an observational study setting. In this paper we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) in the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained by using IPTW and that there is improved efficiency by using auxiliary intermediate event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator. We illustrate our resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
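The weighting idea can be illustrated with a hand-rolled weighted Kaplan–Meier estimator (a sketch: with unit weights it reduces to the ordinary Kaplan–Meier, and the IPTW weights would come from a fitted propensity model, which is not shown here):

```python
def weighted_km(times, events, weights, t):
    """Weighted Kaplan-Meier survival estimate at time t.
    times: follow-up times; events: 1 = event, 0 = censored;
    weights: e.g. inverse probability of treatment weights."""
    surv = 1.0
    # step down at each distinct event time up to t
    for s in sorted({u for u, e in zip(times, events) if e and u <= t}):
        at_risk = sum(w for u, w in zip(times, weights) if u >= s)
        d = sum(w for u, e, w in zip(times, events, weights) if e and u == s)
        surv *= 1.0 - d / at_risk
    return surv
```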

9.
Recurrent event data occur in many clinical and observational studies (Cook and Lawless, Analysis of recurrent event data, 2007) and in these situations, there may exist a terminal event such as death that is related to the recurrent event of interest (Ghosh and Lin, Biometrics 56:554–562, 2000; Wang et al., J Am Stat Assoc 96:1057–1065, 2001; Huang and Wang, J Am Stat Assoc 99:1153–1165, 2004; Ye et al., Biometrics 63:78–87, 2007). In addition, sometimes there may exist more than one type of recurrent event, that is, one faces multivariate recurrent event data with some dependent terminal event (Chen and Cook, Biostatistics 5:129–143, 2004). It is apparent that for the analysis of such data, one has to take into account the dependence both among different types of recurrent events and between the recurrent and terminal events. In this paper, we propose a joint modeling approach for regression analysis of the data, and both finite-sample and asymptotic properties of the resulting estimates of unknown parameters are established. The methodology is applied to a set of bivariate recurrent event data arising from a study of leukemia patients.

10.
The paper looks at the problem of comparing two treatments, for a particular population of patients, where one is the current standard treatment and the other a possible alternative under investigation. With limited (finite) financial resources, the decision whether to replace one by the other will not be based on health benefits alone. This motivates an economic evaluation of the two competing treatments where the cost of any gain in health benefit is scrutinized; whether this cost is acceptable to the relevant authorities determines whether the new treatment can become the standard. We adopt a Bayesian decision theoretic framework in which a utility function is introduced describing the consequences of making a particular decision when the true state of nature is expressed via an unknown parameter θ (this parameter denotes cost, effectiveness, etc.). The decision rule selects the treatment providing the maximum posterior expected utility, with expectations taken over the posterior distribution of the parameter θ.
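Given posterior draws of θ, the decision rule reduces to one line of maximization (the utility function and draws below are placeholders for illustration, not the paper's model):

```python
def optimal_decision(utility, posterior_draws, decisions):
    """Pick the decision maximizing posterior expected utility.
    utility(d, theta) scores decision d when the true state is theta;
    posterior_draws: samples of theta from its posterior distribution."""
    def expected(d):
        return sum(utility(d, th) for th in posterior_draws) / len(posterior_draws)
    return max(decisions, key=expected)

# Example: theta is the incremental net benefit of the new treatment;
# adopting it yields theta, keeping the standard yields 0.
choice = optimal_decision(lambda d, th: th if d == "new" else 0.0,
                          [-1.0, 2.0, 3.0], ["standard", "new"])
```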

11.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.

12.
In the present paper we are going to extend the likelihood ratio test to the case in which the available experimental information involves fuzzy imprecision (more precisely, the observable events associated with the random experiment concerning the test may be characterized as fuzzy subsets of the sample space, as intended by Zadeh, 1965). In addition, we will approximate the immediate intractable extension, which is based on Zadeh’s probabilistic definition, by using the minimum inaccuracy principle of estimation from fuzzy data, that has been introduced in previous papers as an operative extension of the maximum likelihood method.

13.
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
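A stripped-down version of the dual-criterion idea can be sketched with a Beta-binomial model and Monte Carlo posterior draws (illustrative thresholds and reference values of my choosing; the actual BOP2-DC design optimizes its decision boundaries, which this sketch does not):

```python
import random

def dual_criterion_decision(x, n, null=0.4, target=0.6,
                            gamma_stat=0.95, gamma_clin=0.50,
                            a=1.0, b=1.0, draws=20000, seed=1):
    """Go/consider/no-go rule in the spirit of BOP2-DC: Beta(a, b)
    prior, x responses out of n patients.  'go' requires both the
    statistical criterion (response rate exceeds the null reference)
    and the clinical one (rate reaches the meaningful target);
    'consider' requires only the statistical criterion."""
    rng = random.Random(seed)
    post = [rng.betavariate(a + x, b + n - x) for _ in range(draws)]
    p_stat = sum(p > null for p in post) / draws
    p_clin = sum(p > target for p in post) / draws
    if p_stat >= gamma_stat and p_clin >= gamma_clin:
        return "go"
    if p_stat >= gamma_stat:
        return "consider"
    return "no-go"
```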

14.
Despite decades of research in the medical literature, assessment of the attributable mortality due to nosocomial infections in the intensive care unit (ICU) remains controversial, with different studies describing effect estimates ranging from being neutral to extremely risk increasing. Interpretation of study results is further hindered by inappropriate adjustment (a) for censoring of the survival time by discharge from the ICU, and (b) for time-dependent confounders on the causal path from infection to mortality. In previous work (Vansteelandt et al. Biostatistics 10:46–59), we have accommodated this through inverse probability of treatment and censoring weighting. Because censoring due to discharge from the ICU is so intimately connected with a patient’s health condition, the ensuing inverse weighting analyses suffer from influential weights and rely heavily on the assumption that one has measured all common risk factors of ICU discharge and mortality. In this paper, we consider ICU discharge as a competing risk in the sense that we aim to infer the risk of ‘ICU mortality’ over time that would be observed if nosocomial infections could be prevented for the entire study population. For this purpose we develop marginal structural subdistribution hazard models with accompanying estimation methods. In contrast to subdistribution hazard models with time-varying covariates, the proposed approach (a) can accommodate high-dimensional confounders, (b) avoids regression adjustment for post-infection measurements and thereby so-called collider-stratification bias, and (c) results in a well-defined model for the cumulative incidence function. The methods are used to quantify the causal effect of nosocomial pneumonia on ICU mortality using data from the National Surveillance Study of Nosocomial Infections in ICU’s (Belgium).

15.
Longitudinal health-related quality of life data arise naturally from studies of progressive and neurodegenerative diseases. In such studies, patients’ mental and physical conditions are measured over their follow-up periods and the resulting data are often complicated by subject-specific measurement times and possible terminal events associated with outcome variables. Motivated by the “Predictor’s Cohort” study on patients with advanced Alzheimer disease, we propose in this paper a semiparametric modeling approach to longitudinal health-related quality of life data. It builds upon and extends some recent developments for longitudinal data with irregular observation times. The new approach handles possibly dependent terminal events. It allows one to examine time-dependent covariate effects on the evolution of the outcome variable and to assess nonparametrically the change in the outcome measurement that is due to factors not incorporated in the covariates. The usual large-sample properties for parameter estimation are established. In particular, it is shown that relevant parameter estimators are asymptotically normal and the asymptotic variances can be estimated consistently by the simple plug-in method. A general procedure for testing a specific parametric form in the nonparametric component is also developed. Simulation studies show that the proposed approach performs well for practical settings. The method is applied to the motivating example.

16.
Point processes are the stochastic models most suitable for describing physical phenomena that appear at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, by the probability that an event will occur in the infinitesimal interval (t, t+Δt), given the history of the process up to t. The seismic phenomenon displays different behaviours on different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models: their conditional intensity translates the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and the subsequent event occurs when the stress exceeds the strength of the medium. This study has a double objective: the formulation of these models in the Bayesian framework, and the assignment to each event of a mark, that is its magnitude, modelled through a distribution that depends at time t on the stress level accumulated up to that instant. The resulting parameter space is constrained and dependent on the data, complicating Bayesian computation and analysis. We have resorted to Monte Carlo methods to solve these problems.
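The conditional intensity of a basic stress release model can be coded directly: stress builds linearly between events and drops by the amount each event releases (the exponential link is the common formulation, but the parameterization and default values here are mine, not the paper's, and the Bayesian machinery and magnitude marks are not shown):

```python
import math

def stress_release_intensity(t, history, mu=-2.0, rho=0.1, nu=1.0):
    """Conditional intensity lambda(t) = exp(mu + nu * stress(t)),
    where stress(t) = rho * t minus the stress released by past events.
    history: list of (event_time, released_stress) pairs."""
    released = sum(s for u, s in history if u < t)
    return math.exp(mu + nu * (rho * t - released))
```

The two asserted properties below are the qualitative behaviour the elastic rebound idea demands: intensity grows as stress accumulates, and drops abruptly after an event.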

17.
Building new and flexible classes of nonseparable spatio-temporal covariances and variograms has become a key research topic in recent years. The goal of this paper is to present an up-to-date overview of recent spatio-temporal covariance models taking into account the problem of spatial anisotropy. The resulting structures are proved to have certain interesting mathematical properties, together with considerable applicability. In particular, we focus on the problem of modelling anisotropy through isotropy within components. We present the Bernstein class, and a generalisation of Gneiting’s approach (2002a) to obtain new classes of space–time covariance functions which are spatially anisotropic. We also discuss some methods for building covariance functions that attain negative values. We finally present several differentiation and integration operators acting on particular space–time covariance classes.

18.
The Student’s t distribution has become increasingly prominent and is considered a competitor to the normal distribution. Motivated by real examples in Physics, decision sciences and Bayesian statistics, a new t distribution is introduced by taking the product of two Student’s t pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.
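The construction itself is easy to check numerically: take the product of two t densities and normalize it by numerical integration (a sanity-check sketch; the paper derives the closed-form normalizing constant and the structural properties analytically):

```python
import math

def t_pdf(x, df):
    """Student's t density with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def product_t_pdf(x, df1, df2, lo=-100.0, hi=100.0, n=40001):
    """Density proportional to t_pdf(x, df1) * t_pdf(x, df2),
    normalized by the trapezoidal rule on [lo, hi]."""
    h = (hi - lo) / (n - 1)
    vals = [t_pdf(lo + i * h, df1) * t_pdf(lo + i * h, df2) for i in range(n)]
    z = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return t_pdf(x, df1) * t_pdf(x, df2) / z
```

By symmetry of both factors about zero, the product density is symmetric and unimodal at zero, which the assertions exercise.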

19.
A dynamic treatment regime is a list of decision rules, one per time interval, for how the level of treatment will be tailored through time to an individual's changing status. The goal of this paper is to use experimental or observational data to estimate decision regimes that result in a maximal mean response. To explicate our objective and to state the assumptions, we use the potential outcomes model. The method proposed makes smooth parametric assumptions only on quantities that are directly relevant to the goal of estimating the optimal rules. We illustrate the methodology proposed via a small simulation.
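When the transition and outcome models are known (the paper's harder problem is estimating them from data), the optimal two-interval regime follows from backward induction; a toy sketch with binary states and treatments:

```python
def optimal_regime(q2, p1, actions=(0, 1)):
    """Backward induction for a two-interval dynamic regime (toy model).
    q2[(state, a2)]: mean final response given the interim state and
    second-interval treatment a2; p1[a1]: P(state = 1 | first treatment a1).
    Returns the optimal first-interval action and the second-interval rule."""
    # interval 2: best action and value for each interim state
    d2 = {s: max(actions, key=lambda a: q2[(s, a)]) for s in (0, 1)}
    v2 = {s: q2[(s, d2[s])] for s in (0, 1)}
    # interval 1: action maximizing the expected interval-2 value
    d1 = max(actions, key=lambda a1: (1 - p1[a1]) * v2[0] + p1[a1] * v2[1])
    return d1, d2
```

Note that the second-interval rule is a function of the interim state, which is exactly what makes the regime dynamic rather than a fixed treatment sequence.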

20.
Different arguments have been put forward for why drug developers should commit early to what they plan to do for children. By EU regulation, paediatric investigation plans should be agreed on in early phases of drug development in adults. Here, extrapolation from adults to children is widely applied to reduce the burden and avoid unnecessary clinical trials in children, but early regulatory decisions on how far extrapolation can be used may be highly uncertain. Under special circumstances, the regulatory process should allow for adaptive paediatric investigation plans explicitly foreseeing a re‐evaluation of the early decision based on the information accumulated later from adults or elsewhere. A small step towards adaptivity and learning from experience may improve the quality of regulatory decisions, in particular with regard to how much information can be borrowed from adults. Copyright © 2016 John Wiley & Sons, Ltd.

