Similar Documents
6 similar documents found.
1.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either infeasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change over time in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the magnitude of the time trend to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve type I error in these contexts, and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose an RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
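The abstract does not specify the authors' allocation rule, but the core idea of skewing allocation towards better performing arms can be sketched with a standard Thompson-sampling-style rule for binary outcomes. This is an illustrative example only, not the design proposed in the paper; the `Beta(1, 1)` priors and the number of posterior draws are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rar_allocation_probs(successes, failures, n_draws=10_000):
    """Posterior probability that each arm is best, under independent
    Beta(1, 1) priors on the arms' response rates; these probabilities
    can be used to skew the allocation of the next patient."""
    s = np.asarray(successes)[:, None]
    f = np.asarray(failures)[:, None]
    # Draw from each arm's Beta posterior and count how often it is best.
    draws = rng.beta(1 + s, 1 + f, size=(len(successes), n_draws))
    best = np.argmax(draws, axis=0)
    return np.bincount(best, minlength=len(successes)) / n_draws

# Example: arm 0 has responded 50/60 times, arm 1 only 10/60 times,
# so allocation tilts heavily towards arm 0.
p = rar_allocation_probs([50, 10], [10, 50])
```

A rule like this is exactly the kind that is vulnerable to patient drift: if response rates shift over calendar time, the accumulating counts confound arm effects with time effects, which is the type I error inflation the paper studies.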

2.
Most data used to study the durations of unemployment spells come from the Current Population Survey (CPS), which is a point-in-time survey and gives an incomplete picture of the underlying duration distribution. We introduce a new sample of completed unemployment spells obtained from panel data and apply CPS sampling and reporting techniques to replicate the type of data used by other researchers. Predicted duration distributions derived from this CPS-like data are then compared to the actual distribution. We conclude that the best inferences that can be made about unemployment durations by using CPS-like data are seriously biased.

3.
This article investigates the merits of high-frequency intraday data when forming mean-variance efficient stock portfolios with daily rebalancing from the individual constituents of the S&P 100 index. We focus on the issue of determining the optimal sampling frequency as judged by the performance of these portfolios. The optimal sampling frequency ranges between 30 and 65 minutes, considerably lower than the popular five-minute frequency, which is typically motivated by the aim of striking a balance between the variance and the bias in covariance matrix estimates caused by market microstructure effects such as non-synchronous trading and bid-ask bounce. Bias-correction procedures, based on combining low-frequency and high-frequency covariance matrix estimates and on the addition of leads and lags, do not substantially affect the optimal sampling frequency or the portfolio performance. Our findings are also robust to the presence of transaction costs and to the portfolio rebalancing frequency.
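The estimator at the centre of this trade-off is the realized covariance matrix computed from intraday returns at a chosen sampling interval. A minimal sketch of that computation is below; the sampling step in "ticks" and the simulated price panel are assumptions for illustration, not the article's data or full estimator (which also involves the bias corrections discussed above).

```python
import numpy as np

def realized_cov(prices, step):
    """Realized covariance from a (time x assets) panel of intraday
    prices, sampled every `step` observations. Coarser sampling (larger
    `step`) reduces microstructure bias at the cost of higher variance."""
    logp = np.log(prices[::step])
    r = np.diff(logp, axis=0)  # intraday log returns at the chosen frequency
    return r.T @ r             # sum of outer products of return vectors

# Example: two simulated assets over one trading day of 391 one-minute marks.
rng = np.random.default_rng(1)
prices = np.exp(np.cumsum(rng.normal(0.0, 0.001, size=(391, 2)), axis=0)) * 100
rc_1min = realized_cov(prices, 1)    # finest grid
rc_30min = realized_cov(prices, 30)  # coarser grid, as the article favours
```

Choosing `step` is precisely the sampling-frequency question the article answers by portfolio performance rather than by estimation error alone.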

4.
Latent class analysis has been used to model measurement error, to identify flawed survey questions and to estimate mode effects. Using data from a survey of University of Maryland alumni together with alumni records, we evaluate this technique to determine its usefulness for detecting bad questions in the survey context. Two sets of latent class models are applied in this evaluation: models with three indicators and models with two indicators under different assumptions about prevalence and error rates. Our results indicate that the latent class approach produced good qualitative results: the item that the models deemed the worst was indeed the worst according to the true scores. However, the approach yielded weaker quantitative estimates of the error rates for a given item.

5.
The procedure suggested by DerSimonian and Laird is the simplest and most commonly used method for fitting the random effects model for meta-analysis. Here it is shown that, unless all studies are of similar size, this is inefficient when estimating the between-study variance, but is remarkably efficient when estimating the treatment effect. If formal inference is restricted to statements about the treatment effect, and the sample size is large, there is little point in implementing more sophisticated methodology. However, it is further demonstrated, for a simple special case, that use of the profile likelihood results in actual coverage probabilities for 95% confidence intervals that are closer to nominal levels for smaller sample sizes. Alternative methods for making inferences for the treatment effect may therefore be preferable if the sample size is small, but the DerSimonian and Laird procedure retains its usefulness for larger samples.
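The DerSimonian-Laird procedure itself is a closed-form moment estimator, which is why it is so simple to apply. A minimal sketch of the standard computation follows; the variable names are illustrative, and the abstract's profile-likelihood alternative is not shown.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects meta-analysis.

    y : per-study treatment effect estimates
    v : their within-study variances
    Returns the pooled effect, its standard error, and the
    moment estimate of the between-study variance tau^2.
    """
    y = np.asarray(y, dtype=float)
    v = np.asarray(v, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)           # truncated moment estimator
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se, tau2
```

For homogeneous studies `Q` falls below `k - 1` and the truncation sets `tau2 = 0`, collapsing the estimator to the fixed-effect mean; the abstract's point is that this moment estimate of `tau2` is inefficient when study sizes differ, even though the resulting estimate of the treatment effect remains efficient.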
