Similar Literature
20 similar documents found (search time: 15 ms)
1.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
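In the normal case studied here, the mixture prior is conjugate, so the posterior is again a mixture of normals whose weights update according to how well each component predicted the data. The sketch below (all numeric values are hypothetical placeholders, not taken from the paper) illustrates the key behaviour: a marked prior-data conflict shifts posterior weight to the diffuse component and widens the credible interval.

```python
import numpy as np
from scipy import stats

def mixture_posterior(ybar, n, sigma, mu0, tau_inf, tau_diff, w_inf):
    """Posterior for a normal mean under the two-component prior
    w_inf * N(mu0, tau_inf^2) + (1 - w_inf) * N(mu0, tau_diff^2)."""
    se2 = sigma**2 / n
    comps, weights = [], []
    for w, tau in [(w_inf, tau_inf), (1 - w_inf, tau_diff)]:
        # marginal (prior predictive) density of ybar under this component
        marg = stats.norm.pdf(ybar, mu0, np.sqrt(tau**2 + se2))
        # conjugate normal-normal update
        post_var = 1.0 / (1.0 / tau**2 + 1.0 / se2)
        post_mean = post_var * (mu0 / tau**2 + ybar / se2)
        comps.append((post_mean, np.sqrt(post_var)))
        weights.append(w * marg)
    weights = np.array(weights) / sum(weights)
    return comps, weights

def credible_interval(comps, weights, lo=0.025, hi=0.975):
    grid = np.linspace(-10, 10, 20001)
    cdf = sum(w * stats.norm.cdf(grid, m, s) for (m, s), w in zip(comps, weights))
    return np.interp([lo, hi], cdf, grid)

# No conflict: the data agree with the informative component.
print(credible_interval(*mixture_posterior(0.2, 50, 2.0, 0.0, 0.5, 10.0, 0.8)))
# Marked conflict: the diffuse component absorbs the data; the interval widens.
print(credible_interval(*mixture_posterior(3.0, 50, 2.0, 0.0, 0.5, 10.0, 0.8)))
```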

2.
In geophysical and environmental problems, it is common to have multiple variables of interest measured at the same location and time. These multiple variables typically have dependence over space (and/or time). As a consequence, there is growing interest in developing models for multivariate spatial processes, in particular cross-covariance models. At the same time, many modern data sets, such as satellite data, cover a large portion of the Earth and therefore require covariance models that are valid on a globe. We present a class of parametric covariance models for multivariate processes on a globe. The covariance models are flexible in capturing non-stationarity in the data, yet computationally feasible, and require moderate numbers of parameters. We apply our covariance model to surface temperature and precipitation data from an NCAR climate model output. We compare our model to the multivariate version of the Matérn cross-covariance function and to models based on coregionalization, and demonstrate the superior performance of our model in terms of AIC (and/or maximum log-likelihood values) and predictive skill. We also present some challenges in modelling the cross-covariance structure of the temperature and precipitation data. Based on the results of fitting the full data, we give the estimated cross-correlation structure between the two variables.
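The paper's own covariance class is not specified in the abstract, but the Matérn building block it is compared against can be sketched. One standard way to obtain a covariance that is valid on the globe is to evaluate a Matérn function at chordal (straight-line, through-the-Earth) distance, since the Matérn is a valid covariance in three-dimensional space. The parameter values below are hypothetical, and this univariate sketch omits the cross-covariance structure that is the paper's focus.

```python
import numpy as np
from scipy.special import kv, gamma

def chordal_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Straight-line (chordal) distance in km between two points on a sphere."""
    to_xyz = lambda lat, lon: radius * np.array([
        np.cos(np.radians(lat)) * np.cos(np.radians(lon)),
        np.cos(np.radians(lat)) * np.sin(np.radians(lon)),
        np.sin(np.radians(lat))])
    return np.linalg.norm(to_xyz(lat1, lon1) - to_xyz(lat2, lon2))

def matern(d, sigma2=1.0, rho=1000.0, nu=1.5):
    """Matern covariance; valid on the sphere when applied to chordal distance."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    scaled = np.sqrt(2 * nu) * d / rho
    out = np.full_like(d, sigma2)          # covariance at distance zero
    pos = scaled > 0
    out[pos] = (sigma2 * 2**(1 - nu) / gamma(nu)
                * scaled[pos]**nu * kv(nu, scaled[pos]))
    return out

d = chordal_distance(40.0, -105.0, 48.0, 2.0)   # roughly Boulder to Paris
print(d, matern(d))
```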

3.
A diverse range of non-cardiovascular drugs are associated with QT interval prolongation, which may be associated with a potentially fatal ventricular arrhythmia known as torsade de pointes. The QT interval has been assessed for two recent submissions at GlaxoSmithKline. Meta-analyses of ECG data from several clinical pharmacology studies were conducted for the two submissions. A general fixed effects meta-analysis approach using summaries of the individual studies was used to calculate a pooled estimate and 90% confidence interval for the difference between each active dose and placebo, separately for single and repeat dosing. The meta-analysis approach described provided a pragmatic solution to pooling complex and varied studies and is a good way of addressing regulatory questions on QTc prolongation. Copyright © 2002 John Wiley & Sons, Ltd.
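The pooling step described here is the standard inverse-variance fixed-effect calculation. A minimal sketch, with hypothetical per-study estimates of the dose-versus-placebo QTc difference in milliseconds:

```python
import numpy as np
from scipy import stats

def fixed_effect_pool(estimates, std_errors, level=0.90):
    """Inverse-variance weighted pooled estimate and confidence interval."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float)**2   # weights = 1 / SE^2
    pooled = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = stats.norm.ppf(0.5 + level / 2)
    return pooled, (pooled - z * se, pooled + z * se)

# Hypothetical study-level QTc differences (ms) and their standard errors.
print(fixed_effect_pool([3.1, 4.8, 2.2], [1.5, 2.0, 1.8]))
```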

4.
Motivated by the need to analyze the National Longitudinal Surveys data, we propose a new semiparametric longitudinal mean-covariance model in which the effects of some explanatory variables on the dependent variable are linear and others are non-linear, while the within-subject correlations are modelled by a non-stationary autoregressive error structure. We develop an estimation machinery based on the least squares technique by approximating non-parametric functions via B-spline expansions, and establish the asymptotic normality of the parametric estimators as well as the rate of convergence of the non-parametric estimators. We further advocate a new model selection strategy in the varying-coefficient model framework for distinguishing whether a component is significant and, subsequently, whether it is linear or non-linear. In addition, the proposed method can be employed to identify the true order of lagged terms consistently. Monte Carlo studies are conducted to examine the finite sample performance of our approach, and an application to real data is also presented.

5.
The author extends to the Bayesian nonparametric context the multinomial goodness-of-fit tests due to Cressie & Read (1984). Her approach is suitable when the model of interest is a discrete distribution. She provides an explicit form for the tests, which are based on power-divergence measures between a prior Dirichlet process that is highly concentrated around the model of interest and the corresponding posterior Dirichlet process. In addition to providing interesting special cases and useful approximations, she discusses calibration and the choice of test through examples.
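The classical statistics being generalized are the Cressie-Read power-divergence family, which contains Pearson's chi-square (λ = 1) and the likelihood-ratio statistic G² (λ → 0) as special cases. A minimal sketch of that classical family follows; the Bayesian extension via Dirichlet processes is not reproduced here.

```python
import numpy as np

def power_divergence(observed, expected, lam):
    """Cressie-Read statistic: 2 / (lam * (lam + 1)) * sum O * ((O/E)^lam - 1).
    lam = 1 recovers Pearson's chi-square; lam -> 0 recovers G^2."""
    O = np.asarray(observed, dtype=float)
    E = np.asarray(expected, dtype=float)
    if abs(lam) < 1e-12:                  # limiting case: likelihood-ratio G^2
        return 2.0 * np.sum(O * np.log(O / E))
    return 2.0 / (lam * (lam + 1)) * np.sum(O * ((O / E)**lam - 1.0))

O = np.array([18, 22, 30, 30])           # hypothetical observed counts
E = np.array([25, 25, 25, 25])           # expected counts under the model
for lam in (1.0, 0.0, 2 / 3):            # 2/3 is Cressie & Read's recommendation
    print(lam, power_divergence(O, E, lam))
```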

6.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six different statistical analyses, four likelihood-based linear mixed effects models and analysis of covariance (ANCOVA) using two different data sets, in non-inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are missing completely at random. Likelihood-based linear mixed effects model approaches are valid when the missing data are missing at random. The pattern-mixture model (PMM) was developed to accommodate a non-random missingness mechanism. Our simulations suggest that two linear mixed effects models, one using an unstructured covariance matrix for within-subject correlation with no random effects and one using a first-order autoregressive covariance matrix for within-subject correlation with random coefficient effects, provide good control of the type 1 error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA using a last-observation-carried-forward (LOCF) imputed data set is the worst method in terms of bias and T1E rate. PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are not missing at random, and is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.
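For concreteness, the LOCF single-imputation step that feeds the ANCOVA analysis can be sketched in a few lines; the data below are hypothetical, and the mixed-model fits themselves are not reproduced here.

```python
import numpy as np
import pandas as pd

# Hypothetical longitudinal data: one row per subject-visit, NaN = missing.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [1, 2, 3, 1, 2, 3],
    "y":       [10.0, 11.5, np.nan, 9.0, np.nan, np.nan],
})
df = df.sort_values(["subject", "visit"])
# LOCF: within each subject, carry the last observed value forward in time.
df["y_locf"] = df.groupby("subject")["y"].ffill()
print(df)
```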

7.
In the context of a vaccine efficacy trial, where the incidence rate is very low and a very large sample size is usually expected, incorporating historical data into a new trial is extremely attractive as a way to reduce sample size and increase estimation precision. Nevertheless, for some infectious diseases, seasonal change in incidence rates poses a major challenge to borrowing historical data, and a critical question is how to borrow historical information with acceptable tolerance to the between-trial heterogeneity that commonly arises from seasonal disease transmission. In this article, we extend a probability-based power prior, which determines the amount of information to be borrowed based on the agreement between the historical and current data, to make it applicable when either a single or multiple historical trials are available, with a constraint on the amount of historical information to be borrowed. Simulations are conducted to compare the performance of the proposed method with other methods, including the modified power prior (MPP), the meta-analytic-predictive (MAP) prior, and the commensurate prior methods. Furthermore, we illustrate the application of the proposed method for trial design in a practical setting.
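For a binary incidence endpoint with a conjugate beta initial prior, the power prior update for a fixed discounting weight δ ∈ [0, 1] is closed form. In the paper δ is determined by the agreement between historical and current data; the sketch below simply fixes δ by hand to show the mechanics, and all counts are hypothetical.

```python
from scipy import stats

def power_prior_posterior(y, n, y0, n0, delta, a=1.0, b=1.0):
    """Beta posterior for an incidence proportion under a power prior:
    the historical likelihood is raised to delta, on top of a Beta(a, b)
    initial prior. delta = 0 ignores history; delta = 1 pools fully."""
    return stats.beta(a + y + delta * y0, b + (n - y) + delta * (n0 - y0))

# Hypothetical current (y/n) and historical (y0/n0) control-arm case counts.
for delta in (0.0, 0.5, 1.0):
    post = power_prior_posterior(12, 3000, 30, 9000, delta)
    print(f"delta={delta}: mean={post.mean():.5f}, 95% CrI={post.interval(0.95)}")
```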

8.
Networks of constellations of longitudinal observational databases, often electronic medical records, transactional insurance claims, or both, are increasingly being used for studying the effects of medicinal products in real-world use. Such databases are frequently configured as distributed networks: patient-level data are kept behind firewalls and not communicated outside of the data vendor other than in aggregate form. Instead, data are standardized across the network, queries of the network are executed locally by data partners, and summary results are provided to one or more central research partners for amalgamation, aggregation, and summarization. Such networks can be huge, covering years of data on upwards of 100 million patients. Examples of such networks include the FDA Sentinel Network, ASPEN, CNODES, and EU-ADR. As this is an emerging field, we note in this paper the conceptual similarities and differences between the analysis of distributed networks and the now well-established field of meta-analysis of randomized clinical trials (RCTs). We recommend, wherever appropriate, applying learnings from meta-analysis to help guide the development of distributed network analyses of longitudinal observational databases.

9.
A sample size justification is a vital part of any trial design. However, estimating the number of participants required to give a meaningful result is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for non-inferiority and equivalence trials are summarised. Practical advice and examples are provided that illustrate how to carry out the calculations by hand and using the app SampSize. Copyright © 2015 John Wiley & Sons, Ltd.
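As an illustration of the kind of by-hand calculation the paper summarises, the standard normal-approximation formula for a 1:1 non-inferiority trial with a continuous endpoint is n per arm = 2 sigma^2 (z_{1-alpha} + z_{1-beta})^2 / (d + delta)^2, where delta is the non-inferiority margin and d the assumed true difference. A sketch with hypothetical inputs:

```python
import math
from scipy import stats

def n_per_arm_noninferiority(sd, margin, true_diff=0.0, alpha=0.025, power=0.90):
    """Per-arm sample size for a 1:1 non-inferiority trial, continuous endpoint.
    Tests H0: mu_T - mu_C <= -margin at one-sided alpha (normal approximation)."""
    z_a = stats.norm.ppf(1 - alpha)
    z_b = stats.norm.ppf(power)
    n = 2 * sd**2 * (z_a + z_b)**2 / (true_diff + margin)**2
    return math.ceil(n)

# Hypothetical inputs: SD 10, margin 5, treatments assumed truly equal.
print(n_per_arm_noninferiority(sd=10, margin=5))   # 85 per arm
```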

10.
It is shown that fixed-effect meta-analyses of naïve treatment estimates from sequentially run trials with the possibility of stopping for efficacy based on a single interim look are unbiased (or at the very least consistent, depending on the point of view), provided that the trials are weighted by the information they provide. A simple proof of this is given. An argument is given suggesting that this also applies in the case of multiple looks. The implications of this are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Motivated by applications of Poisson processes for modelling periodic time-varying phenomena, we study a semi-parametric estimator of the period of the cyclic intensity function of a non-homogeneous Poisson process. There are no parametric assumptions on the intensity function, which is treated as an infinite-dimensional nuisance parameter. We propose a new family of estimators for the period of the intensity function, address the identifiability and consistency issues, and present simulations which demonstrate good performance of the proposed estimation procedure in practice. We compare our method to competing methods on synthetic data and apply it to a real data set from a call center.
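The paper's estimator family is not given in the abstract; a classical point of comparison is the periodogram-type estimator that maximizes the Bartlett statistic |sum_j exp(2 pi i t_j / P)|^2 over candidate periods P. A self-contained sketch on simulated data (all settings hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a cyclic non-homogeneous Poisson process on [0, 500] by thinning:
# intensity lambda(t) = 5 * (1 + 0.8 * sin(2*pi*t / 7)), true period 7.
true_period, t_max, lam_max = 7.0, 500.0, 9.0
cand = rng.uniform(0, t_max, rng.poisson(lam_max * t_max))
lam = 5.0 * (1 + 0.8 * np.sin(2 * np.pi * cand / true_period))
events = np.sort(cand[rng.uniform(0, lam_max, cand.size) < lam])

# Periodogram-type estimator: maximize |sum_j exp(2*pi*i*t_j / P)|^2 over P.
periods = np.linspace(2.0, 20.0, 5000)
score = np.array([np.abs(np.exp(2j * np.pi * events / P).sum())**2
                  for P in periods])
print("estimated period:", round(periods[np.argmax(score)], 3))
```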

12.
13.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and the sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared with the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
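The mechanics can be illustrated with a single conjugate inverse-gamma prior on the variance standing in for the full meta-analytic-predictive mixture: update it with the internal pilot's residual sum of squares, then plug a posterior summary into the usual two-arm sample size formula. All parameter values below are hypothetical placeholders.

```python
import math
from scipy import stats

def reestimate_n(alpha0, beta0, s2_pilot, n_pilot, delta, alpha=0.025, power=0.90):
    """Update an inverse-gamma prior IG(alpha0, beta0) on the outcome variance
    with the internal pilot's residual sum of squares, then recompute the
    per-arm sample size from the posterior-mean variance."""
    ss = (n_pilot - 1) * s2_pilot                 # pilot residual sum of squares
    a_post = alpha0 + (n_pilot - 1) / 2
    b_post = beta0 + ss / 2
    var_post_mean = b_post / (a_post - 1)         # posterior mean of the variance
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    return math.ceil(2 * var_post_mean * z**2 / delta**2), var_post_mean

# Hypothetical: prior centered near variance 9; the pilot suggests variance 16.
print(reestimate_n(alpha0=10.0, beta0=81.0, s2_pilot=16.0, n_pilot=40, delta=2.0))
```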

14.
Consider a non-parametric regression model Y = m(X) + ε, where m is an unknown regression function, Y is a real-valued response variable, X is a real covariate, and ε is the error term. In this article, we extend the usual tests for homoscedasticity by developing consistent tests for independence between X and ε. Further, we investigate the local power of the proposed tests using Le Cam's contiguous alternatives. An asymptotic power study under local alternatives, along with an extensive finite-sample simulation study, shows that the performance of the new tests is competitive with existing ones. Furthermore, the practicality of the new tests is shown using two real data sets.
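The abstract does not give the form of the test statistic; one standard consistent measure of independence that could play this role is distance covariance, combined with a permutation null. The sketch below is that swapped-in technique, not the paper's own statistic; it tests the covariate against simulated heteroscedastic errors, where in practice one would use residuals from a nonparametric fit of m.

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance between 1-D samples x and y."""
    def centered(v):
        D = np.abs(v[:, None] - v[None, :])
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = centered(np.asarray(x, float)), centered(np.asarray(y, float))
    return (A * B).mean()

def independence_pvalue(x, resid, n_perm=999, seed=0):
    """Permutation test of independence between covariate and residuals."""
    rng = np.random.default_rng(seed)
    obs = dcov2(x, resid)
    perms = [dcov2(x, rng.permutation(resid)) for _ in range(n_perm)]
    return (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
eps = rng.normal(0, 1, 200) * (0.3 + np.abs(x))  # heteroscedastic: not independent
print(independence_pvalue(x, eps))               # small p-value expected
```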

15.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse a two-dimensional point process on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by their change points and corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis-Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.

16.
Missing variances in summary-level data can be a problem when an inverse-variance weighted meta-analysis is undertaken. A wide range of approaches to dealing with this issue exists, such as excluding data without a variance measure, using a function of sample size as a weight, and imputing the missing standard errors/deviations. A non-linear mixed effects modelling approach was taken to describe the time-course of standard deviations across 14 studies. The model was then used to predict the missing standard deviations, thus enabling a precision-weighted model-based meta-analysis of a mean pain endpoint over time. Maximum likelihood and Bayesian approaches were implemented, with example code to illustrate how this imputation can be carried out and to compare the output from each method. The resultant imputations were nearly identical for the two approaches. This modelling approach acknowledges the fact that standard deviations are not necessarily constant over time and can differ between treatments and across studies in a predictable way.
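The paper's model is a non-linear mixed-effects one; a much simpler fixed-effects stand-in conveys the idea of model-based SD imputation, regressing log(SD) on log(time) across studies and predicting the gaps. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical summary-level data: per-study, per-week pain-score SDs,
# with np.nan where a study did not report the SD.
weeks = np.array([1, 2, 4, 8, 1, 2, 4, 8], dtype=float)
sds   = np.array([2.1, 2.4, 2.9, 3.1, 2.0, np.nan, 2.8, np.nan])

# Simple stand-in for the paper's non-linear mixed-effects model:
# regress log(SD) on log(week) across studies, then predict the gaps.
obs = ~np.isnan(sds)
X = np.column_stack([np.ones(obs.sum()), np.log(weeks[obs])])
coef, *_ = np.linalg.lstsq(X, np.log(sds[obs]), rcond=None)
imputed = sds.copy()
imputed[~obs] = np.exp(coef[0] + coef[1] * np.log(weeks[~obs]))
print(imputed.round(2))
```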

17.
When recruitment into a clinical trial is limited due to rarity of the disease of interest, or when recruitment to the control arm is limited for ethical reasons (e.g., pediatric studies or an important unmet medical need), exploiting historical controls to augment the prospectively collected database can be an attractive option. Statistical methods for combining historical data with randomized data, while accounting for the incompatibility between the two, have recently been proposed and remain an active field of research. The current literature lacks both a rigorous comparison between methods and guidelines for their use in practice. In this paper, we compare the existing methods based on a confirmatory phase III study design exercise done for a new antibacterial therapy with a binary endpoint and a single historical dataset. A procedure to assess the relative performance of the different methods for borrowing information from historical control data is proposed, and practical questions related to the selection and implementation of methods are discussed. Based on our examination, we found that the methods have comparable performance, but we recommend the robust mixture prior for its ease of implementation.
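The recommended robust mixture prior is indeed straightforward for a binary endpoint: a Beta mixture with an informative component derived from the historical controls plus a vague component, updated in closed form. A sketch with hypothetical mixture weights and counts:

```python
import numpy as np
from scipy import stats
from scipy.special import betaln

def robust_mixture_update(y, n, comps):
    """Update a Beta mixture prior [(weight, a, b), ...] with y responders
    out of n. The vague component guards against prior-data conflict."""
    logw, post = [], []
    for w, a, b in comps:
        # log marginal likelihood of the data under this component
        # (beta-binomial, dropping the binomial coefficient common to all)
        logw.append(np.log(w) + betaln(a + y, b + n - y) - betaln(a, b))
        post.append((a + y, b + n - y))
    logw = np.asarray(logw)
    w_post = np.exp(logw - np.logaddexp.reduce(logw))  # normalized weights
    return list(zip(w_post, post))

# Hypothetical: informative component from historical controls plus Beta(1, 1).
prior = [(0.8, 30.0, 70.0), (0.2, 1.0, 1.0)]
for w, (a, b) in robust_mixture_update(12, 40, prior):
    print(f"weight {w:.3f}  Beta({a:.0f}, {b:.0f})  mean {stats.beta(a, b).mean():.3f}")
```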

18.
Priors are introduced into goodness-of-fit tests, both on unknown parameters in the tested distribution and on the alternative density. Neyman–Pearson theory leads to the test with the highest expected power. To make the test practical, we seek priors that make it likely a priori that the power will be larger than the level of the test but not too close to one. As a result, priors are sample size dependent. We explore this procedure in particular for priors that are defined via a Gaussian process approximation for the logarithm of the alternative density. In the case of testing for the uniform distribution, we show that the optimal test is of the U-statistic type and establish limiting distributions for the optimal test statistic, both under the null hypothesis and averaged over the alternative hypotheses. The optimal test statistic is shown to be of the Cramér–von Mises type for specific choices of the Gaussian process involved. The methodology when parameters in the tested distribution are unknown is discussed and illustrated in the case of testing for the von Mises distribution. The Canadian Journal of Statistics 47: 560–579; 2019. © 2019 Statistical Society of Canada

19.
Empirical Bayes is a versatile approach to "learn from a lot" in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss "formal" empirical Bayes methods that maximize the marginal likelihood, but also more informal approaches based on other data summaries. We contrast empirical Bayes with cross-validation and full Bayes, and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed "co-data". In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes-full Bayes ridge regression approach for estimation of the posterior predictive interval.
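The "formal" empirical Bayes recipe, maximizing the marginal likelihood over prior hyperparameters, is easy to demonstrate for ridge regression: with coefficients β ~ N(0, τ²I) and Gaussian noise, the marginal distribution of y is N(0, τ²XXᵀ + σ²I), and τ² can be chosen by a grid search. A minimal sketch on simulated data, with σ² treated as known for simplicity:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma2 = 100, 50, 1.0
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.3, size=p)           # true prior scale tau = 0.3
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

def log_marginal(tau2):
    """log N(y | 0, tau2 * X X^T + sigma2 * I): the marginal likelihood."""
    K = tau2 * (X @ X.T) + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y) + n * np.log(2 * np.pi))

taus2 = np.logspace(-3, 1, 60)
tau2_hat = taus2[np.argmax([log_marginal(t) for t in taus2])]
lam = sigma2 / tau2_hat                        # implied ridge penalty
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("tau^2 hat:", round(tau2_hat, 4), " ridge lambda:", round(lam, 2))
```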

20.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold-standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors, including the strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.
