Similar Literature
20 similar articles retrieved (search time: 31 ms)
1.
A nested case–control (NCC) study is an efficient cohort-sampling design in which a subset of controls is sampled from the risk set at each event time. Since covariate measurements are taken only for the sampled subjects, the time and effort of conducting a full-scale cohort study can be saved. In this paper, we consider fitting a semiparametric accelerated failure time model to failure time data from an NCC study. We propose an efficient induced smoothing procedure for the rank-based estimation of regression parameters. For variance estimation, we propose an efficient resampling method that utilizes the robust sandwich form. We extend the proposed methods to a generalized NCC study that allows sampling of cases. Finite-sample properties of the proposed estimators are investigated via an extensive simulation study. An application to a tumor study illustrates the utility of the proposed method in routine data analysis.
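A minimal sketch of the induced-smoothing idea for a Gehan-type rank estimating function, applied for simplicity to a fully observed cohort rather than an NCC sample; the simulated data, bandwidth choice, and solver are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

rng = np.random.default_rng(1)

# Simulated AFT data: log T = x*beta + error, with random censoring.
n, beta_true = 200, 1.0
x = rng.normal(size=n)
logT = x * beta_true + rng.gumbel(size=n)    # log failure times
logC = rng.gumbel(loc=1.0, size=n)           # log censoring times
y = np.minimum(logT, logC)                   # observed log times
delta = (logT <= logC).astype(float)         # event indicator

def smoothed_gehan(beta):
    """Induced-smoothed Gehan estimating function U(beta): the indicator
    I(e_j >= e_i) of the exact rank function is replaced by
    Phi((e_j - e_i) / r_ij), making U continuously differentiable."""
    e = y - x * beta                          # residuals e_i(beta)
    de = e[None, :] - e[:, None]              # e_j - e_i
    dx = x[:, None] - x[None, :]              # x_i - x_j
    r = np.sqrt(dx**2 / n) + 1e-12            # smoothing bandwidth r_ij
    w = norm.cdf(de / r)                      # smooth surrogate for the indicator
    return np.sum(delta[:, None] * dx * w) / n**2

beta_hat = fsolve(smoothed_gehan, x0=0.0)[0]
print(f"estimated beta: {beta_hat:.3f} (truth {beta_true})")
```

Because the smoothed function is differentiable in beta, standard root-finders apply and sandwich-form resampling variance estimators of the kind mentioned in the abstract become computationally practical.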

2.
In this paper, we extend the use of assurance for a single study to explore how meeting a study's pre-defined success criteria could update our beliefs about the true treatment effect and impact the assurance of subsequent studies. This concept of conditional assurance, the assurance of a subsequent study conditional on success in an initial study, can be used to assess the de-risking potential of the study requiring immediate investment, to ensure it provides value within the overall development plan. If the planned study does not discharge sufficient later-phase risk, alternative designs and/or success criteria should be explored. By transparently laying out the different design options and their associated risks, decision makers can make quantitative investment choices based on their risk tolerance levels and potential return on investment. This paper lays out the derivation of conditional assurance, discusses how changing the design of a planned study will impact the conditional assurance of a future study, and presents a simple illustrative example of how this methodology could be used to transparently compare development plans to aid decision making within an organisation.
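A minimal Monte Carlo sketch of conditional assurance, assuming a normal prior on the true treatment effect, normal approximations for both study estimates, and illustrative standard errors and success criteria:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000

# Prior belief about the true treatment effect delta (illustrative).
delta = rng.normal(loc=0.3, scale=0.2, size=M)

# Normal approximations for the two study estimates; the standard
# errors below are illustrative stand-ins for the planned designs.
se1, se2 = 0.15, 0.10
est1 = rng.normal(delta, se1)
est2 = rng.normal(delta, se2)

# Pre-defined success criterion (illustrative): one-sided p < 0.025.
win1 = est1 / se1 > 1.96
win2 = est2 / se2 > 1.96

print(f"P(study 1 success)           = {win1.mean():.3f}")   # assurance, study 1
print(f"P(study 2 success)           = {win2.mean():.3f}")   # unconditional assurance, study 2
print(f"P(study 2 success | study 1) = {win2[win1].mean():.3f}")  # conditional assurance
```

The gap between the last two numbers quantifies how much later-phase risk the first study discharges, which is the de-risking comparison the abstract describes.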

3.
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently, studies will recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are then faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect, and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produce a “non-statistically significant result”, investigators are frequently tempted to ask: “Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?” The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
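For concreteness, this is the calculation in question, sketched with a normal approximation for a two-sample comparison of means; the effect size, SD, and group sizes are illustrative:

```python
from scipy.stats import norm

def power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test to detect a
    mean difference `delta` with common SD `sd` (normal approximation)."""
    se = sd * (2.0 / n_per_group) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = delta / se                      # noncentrality under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Pre-study plan: 64 per group gives about 80% power for delta = 0.5 SD.
print(f"planned power : {power_two_sample(0.5, 1.0, 64):.3f}")
# Under-recruited study: the 'post hoc power' temptation.
print(f"post hoc power: {power_two_sample(0.5, 1.0, 40):.3f}")
```

Note that power computed this way is a deterministic function of the assumed effect size and the achieved sample size, which is the crux of the debate over whether the exercise adds anything after the data are in.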

4.
The American Statistical Association conducted a pilot study to develop methodology to conduct a nationwide evaluation of survey practices and the quality of survey data. This article is a report on the objectives and principal findings of that pilot study. In addition, the objectives of the nationwide study are presented.

5.
As described in the ICH E5 guidelines, a bridging study is an additional study executed in a new geographical region or subpopulation to link, or “build a bridge”, from global clinical trial outcomes to the new region. The regulatory and scientific goals of a bridging study are to evaluate potential subpopulation differences while minimizing duplication of studies and meeting unmet medical needs expeditiously. Use of historical data (borrowing) from global studies is an attractive approach to meeting these conflicting goals. Here, we propose a practical and relevant approach to guide the optimal borrowing rate (percent of subjects in earlier studies) and the number of subjects in the new regional bridging study. We address the limitations in global/regional exchangeability through use of a Bayesian power prior method and then optimize the bridging study design from a return-on-investment viewpoint. The method is demonstrated using clinical data from global and Japanese trials of dapagliflozin for type 2 diabetes.
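A minimal sketch of the power prior mechanics for a normal mean with known variance and a flat initial prior; the data, variance, and borrowing rates a0 are illustrative, and the paper's return-on-investment optimization layer is not shown:

```python
import numpy as np

def power_prior_posterior(y_new, y_hist, sigma, a0):
    """Posterior mean/sd for a normal mean under a power prior that
    raises the historical likelihood to a0 in [0, 1], starting from a
    flat initial prior (known sigma, for simplicity)."""
    n, n0 = len(y_new), len(y_hist)
    prec_hist = a0 * n0 / sigma**2    # history contributes a0*n0 'effective' subjects
    prec_new = n / sigma**2
    post_prec = prec_hist + prec_new
    post_mean = (prec_hist * np.mean(y_hist) + prec_new * np.mean(y_new)) / post_prec
    return post_mean, post_prec ** -0.5

rng = np.random.default_rng(7)
y_global = rng.normal(0.40, 1.0, size=300)    # illustrative global-trial data
y_region = rng.normal(0.25, 1.0, size=60)     # illustrative regional bridging data

for a0 in (0.0, 0.3, 1.0):                    # borrowing rate: none, partial, full
    m, s = power_prior_posterior(y_region, y_global, sigma=1.0, a0=a0)
    print(f"a0={a0:.1f}: posterior mean {m:.3f}, sd {s:.3f}")
```

Raising a0 shrinks the posterior toward the global estimate and tightens it, which is exactly the exchangeability trade-off the borrowing rate is meant to control.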

6.
Consider assessing the evidence for an exposure variable and a disease variable being associated, when the true exposure variable is more costly to obtain than an error‐prone but nondifferential surrogate exposure variable. From a study design perspective, there are choices regarding the best use of limited resources. Should one acquire the true exposure status for fewer subjects or the surrogate exposure status for more subjects? The issue of validation is also central, i.e., should we simultaneously measure the true and surrogate exposure variables on a subset of study subjects? Using large‐sample theory, we provide a framework for quantifying the power of testing for an exposure–disease association as a function of study cost. This enables us to present comparisons of different study designs under different suppositions about both the relative cost and the performance (sensitivity and specificity) of the surrogate variable. We present simulations to show the applicability of our theoretical framework, and we provide a case‐study comparing results from an actual study to what could have been seen had true exposure status been ascertained for a different proportion of study subjects. We also describe an extension of our ideas to a more complex situation involving covariates. The Canadian Journal of Statistics 47: 222–237; 2019 © 2019 Statistical Society of Canada
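A rough sketch of the cost-power trade-off, assuming a two-proportion comparison of exposure prevalence between cases and controls, with nondifferential misclassification attenuating the apparent difference; all costs, prevalences, and operating characteristics are illustrative assumptions:

```python
from scipy.stats import norm

def power_two_prop(p1, p0, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-proportion z-test."""
    pbar = (p1 + p0) / 2
    se0 = (2 * pbar * (1 - pbar) / n_per_group) ** 0.5
    se1 = (p1 * (1 - p1) / n_per_group + p0 * (1 - p0) / n_per_group) ** 0.5
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p1 - p0) - z * se0) / se1)

def misclassified(p, sens, spec):
    """Apparent exposure prevalence after nondifferential misclassification."""
    return sens * p + (1 - spec) * (1 - p)

budget, cost_true, cost_surr = 10_000.0, 25.0, 5.0   # illustrative costs
p_cases, p_controls = 0.40, 0.25                     # true exposure prevalences
sens, spec = 0.85, 0.90                              # surrogate performance

n_true = int(budget / (2 * cost_true))               # per-group n with true exposure
n_surr = int(budget / (2 * cost_surr))               # per-group n with surrogate
print(f"true exposure: n={n_true}, power={power_two_prop(p_cases, p_controls, n_true):.3f}")
print(f"surrogate    : n={n_surr}, power="
      f"{power_two_prop(misclassified(p_cases, sens, spec), misclassified(p_controls, sens, spec), n_surr):.3f}")
```

The surrogate buys more subjects per dollar but tests an attenuated difference; which side wins depends on the cost ratio and the surrogate's sensitivity and specificity, which is the comparison the paper's framework formalizes.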

7.
In monitoring clinical trials, the question of futility, or whether the data thus far suggest that the results at the final analysis are unlikely to be statistically successful, is regularly of interest over the course of a study. However, the opposite viewpoint of whether the study is sufficiently demonstrating proof of concept (POC) and should continue is a valuable consideration and ultimately should be addressed with high POC power so that a promising study is not prematurely terminated. Conditional power is often used to assess futility, and this article interconnects the ideas of assessing POC for the purpose of study continuation with conditional power, while highlighting the importance of the POC type I error and the POC type II error for study continuation or not at the interim analysis. Methods for analyzing subgroups motivate the interim analyses to maintain high POC power via an adjusted interim POC significance level criterion for study continuation or testing against an inferiority margin. Furthermore, two versions of conditional power based on the assumed effect size or the observed interim effect size are considered. Graphical displays illustrate the relationship of the POC type II error for premature study termination to the POC type I error for study continuation and the associated conditional power criteria.
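A sketch of the two versions of conditional power mentioned above, using the standard B-value decomposition for a one-sided z-test; the interim z-score, information fraction, and planned power are illustrative:

```python
from scipy.stats import norm

def conditional_power(z_interim, t, theta, alpha=0.025):
    """Conditional power of a one-sided z-test at information fraction t.

    theta is the drift (the expected final z-statistic under the assumed
    effect); the B-value B(t) = z_interim * sqrt(t) gains an independent
    N(theta*(1-t), 1-t) increment over the remainder of the trial."""
    z_crit = norm.ppf(1 - alpha)
    b = z_interim * t ** 0.5
    return 1 - norm.cdf((z_crit - b - theta * (1 - t)) / (1 - t) ** 0.5)

z_t, t = 1.0, 0.5                                     # interim z at half information
theta_design = norm.ppf(1 - 0.025) + norm.ppf(0.9)    # drift giving 90% planned power
theta_observed = z_t / t ** 0.5                       # drift extrapolated from interim data

print(f"CP (assumed design effect) : {conditional_power(z_t, t, theta_design):.3f}")
print(f"CP (observed interim trend): {conditional_power(z_t, t, theta_observed):.3f}")
```

The two drifts can give very different answers from the same interim data, which is why the choice between the assumed and observed effect size matters for continuation criteria.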

8.
In designing a study to compare two lifetime distributions, decisions are required about the study size, the proportion of observations in each group, and the length of the follow-up period. These aspects of study design are examined using a Bayesian approach in which the expected consequences of a particular choice of design are evaluated by the expected gain in information.

9.
With the increased costs of drug development, the need for efficient studies has become critical. A key decision point on the development pathway has become the proof of concept study. These studies must provide clear information to project teams to enable decision making about further developing a drug candidate, and also to gain evidence that any effect size is sufficient to warrant this development given the current market environment. Our case study outlines one such proof of concept trial, in which a new candidate therapy for neuropathic pain was investigated to assess dose-response and to evaluate the magnitude of its effect compared to placebo. A Normal Dynamic Linear Model was used to estimate the dose-response, enforcing some smoothness while allowing for the fact that the dose-response may be non-monotonic. A pragmatic, parallel-group study design was used, with interim analyses scheduled to allow the sponsor to drop ineffective doses or to stop the study. Simulations were performed to assess the operating characteristics of the study design. The study results are presented. Significant cost savings were made when it transpired that the new candidate drug did not show superior efficacy compared with placebo and the study was stopped.

10.
In a typical carcinogenicity study, animals, usually rats or mice, are divided into a control group and two to three dose groups of 50 or more by randomization. A chemical is administered at a constant daily dose rate for a major portion of the lifetime of the test animals, for example, two years. In general, such an experiment is expensive and time-consuming. In this paper, we propose an efficient design with reduced sample size and/or shortened study duration. An equal number of animals per dose group is considered in this study. A power study of the age-adjusted trend test for the tumor incidence rate for single-sacrifice experiments, proposed by Kodell et al. (Drug Information Journal, 1997), is conducted. A Monte Carlo simulation study is performed to compare the performance of the trend test for the standard design and various reduced designs. Based on the Kodell et al. test, the 21-month study duration with a sample size of 50 per group is recommended as the best reduced design over the traditional 2-year study design with the same sample size.
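A simulation in the same spirit, hedged in one respect: it uses a plain Cochran–Armitage trend test rather than the age-adjusted Kodell et al. test, and the dose–response incidences are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def ca_trend_z(counts, n, doses):
    """Cochran-Armitage trend z-statistic for tumor counts per dose group."""
    N, T = n.sum(), counts.sum()
    pbar = T / N
    num = np.sum(counts * doses) - pbar * np.sum(n * doses)
    dbar = np.sum(n * doses) / N
    var = pbar * (1 - pbar) * np.sum(n * (doses - dbar) ** 2)
    return num / var ** 0.5

doses = np.array([0.0, 1.0, 2.0, 3.0])        # control plus three dose groups
p_tumor = np.array([0.05, 0.08, 0.11, 0.15])  # illustrative incidence trend
n = np.full(4, 50)                            # 50 animals per group
reps, hits = 10_000, 0

for _ in range(reps):
    counts = rng.binomial(n, p_tumor)
    hits += ca_trend_z(counts, n, doses) > 1.645   # one-sided 5% trend test

print(f"simulated power: {hits / reps:.3f}")
```

Rerunning such a loop with incidences appropriate to a 21-month versus a 24-month sacrifice is the kind of design comparison the paper's Monte Carlo study performs.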

11.
In the area of diagnostics, it is common practice to leverage external data to augment a traditional study of diagnostic accuracy consisting of prospectively enrolled subjects to potentially reduce the time and/or cost needed for the performance evaluation of an investigational diagnostic device. However, the statistical methods currently being used for such leveraging may not clearly separate study design and outcome data analysis, and they may not adequately address possible bias due to differences in clinically relevant characteristics between the subjects constituting the traditional study and those constituting the external data. This paper is intended to draw attention in the field of diagnostics to the recently developed propensity score-integrated composite likelihood approach, which originally focused on therapeutic medical products. This approach applies the outcome-free principle to separate study design and outcome data analysis and can mitigate bias due to imbalance in covariates, thereby increasing the interpretability of study results. While this approach was conceived as a statistical tool for the design and analysis of clinical studies for therapeutic medical products, here, we will show how it can also be applied to the evaluation of sensitivity and specificity of an investigational diagnostic device leveraging external data. We consider two common scenarios for the design of a traditional diagnostic device study consisting of prospectively enrolled subjects, which is to be augmented by external data. The reader will be taken through the process of implementing this approach step-by-step following the outcome-free principle that preserves study integrity.

12.
For any decision-making study, there are two sorts of errors that can be made: declaring a positive result when the truth is negative, and declaring a negative result when the truth is positive. Traditionally, the primary analysis of a study is a two-sided hypothesis test: the type I error rate is set to 5% and the study is designed to give suitably low type II error – typically 10 or 20% – to detect a given effect size. These values are standard, arbitrary and, other than the choice between 10 and 20%, do not reflect the context of the study, such as the relative costs of making type I and II errors and the prior belief that the drug will be placebo-like. Several authors have challenged this paradigm, typically for the scenario where the planned analysis is frequentist. When resource is limited, there will always be a trade-off between the type I and II error rates, and this article explores optimising this trade-off for a study with a planned Bayesian statistical analysis. This work provides a scientific basis for a discussion between stakeholders as to what type I and II error rates may be appropriate, and some algebraic results for normally distributed data.
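A sketch of the trade-off for the simpler frequentist-analysis case the paper builds on, choosing the critical value that minimizes a prior-weighted expected error cost; the priors, costs, and effect size are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# One-sided z-test of H0: delta = 0 vs delta = delta1, fixed n per arm.
delta1, sigma, n = 0.4, 1.0, 50
se = sigma * (2 / n) ** 0.5
p_null = 0.7                   # prior belief the drug is placebo-like
cost_I, cost_II = 3.0, 1.0     # relative costs of type I vs type II errors

z_grid = np.linspace(0.0, 4.0, 2001)
alpha = 1 - norm.cdf(z_grid)                 # type I error rate at each cutoff
beta = norm.cdf(z_grid - delta1 / se)        # type II error rate at each cutoff
expected_cost = p_null * cost_I * alpha + (1 - p_null) * cost_II * beta

k = expected_cost.argmin()
print(f"optimal critical value z = {z_grid[k]:.2f}")
print(f"  -> alpha = {alpha[k]:.3f}, power = {1 - beta[k]:.3f}")
```

Changing the prior weight or the cost ratio shifts the optimal cutoff away from the conventional 1.96, which is the stakeholder discussion the paper aims to put on a quantitative footing.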

13.
Informative dropout is a vexing problem for any biomedical study. Most existing statistical methods attempt to correct estimation bias related to this phenomenon by specifying unverifiable assumptions about the dropout mechanism. We consider a cohort study in Africa that uses an outreach programme to ascertain the vital status of dropout subjects. These data can be used to identify a number of relevant distributions. However, as only a subset of dropout subjects was followed, vital status ascertainment was incomplete. We use semi-competing risks methods as our analysis framework to address this specific case, where the terminal event is incompletely ascertained, and consider various procedures for estimating the marginal distribution of dropout and the marginal and conditional distributions of survival. We also consider model selection and estimation efficiency in our setting. Performance of the proposed methods is demonstrated via simulations, an asymptotic study, and analysis of the study data.

14.
Most of the research effort concerning the development and statistical study of capability indices has been devoted to normal processes. In this paper, a statistical study of a capability index for non-normal processes proposed by Clements (1989) is developed. An approximate distribution for the natural estimator of the index is obtained from a distribution-free point of view, and a simulation study is used to compare it with its empirical distribution. An approximate conservative lower confidence limit for the index is also constructed.
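A sketch of the Clements-style index, hedged in one respect: Clements (1989) reads the extreme percentiles off fitted Pearson curves, while this illustration uses empirical percentiles as a distribution-free stand-in; the process data and specification limits are illustrative:

```python
import numpy as np

def percentile_capability(x, lsl, usl):
    """Clements-style capability indices: the specification spread divided
    by the 0.135th-to-99.865th percentile spread of the process (the
    non-normal analogue of the 6-sigma interval), with the median
    replacing the mean in the Cpk analogue."""
    lo, med, hi = np.percentile(x, [0.135, 50.0, 99.865])
    cp = (usl - lsl) / (hi - lo)
    cpk = min((usl - med) / (hi - med), (med - lsl) / (med - lo))
    return cp, cpk

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.25, size=5000)   # a skewed, non-normal process
cp, cpk = percentile_capability(x, lsl=0.4, usl=2.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

The sampling variability of these percentile estimates is exactly what makes the distribution of the natural estimator, and a conservative lower confidence limit for it, worth studying.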

15.
The causal assumptions, the study design and the data are the elements required for scientific inference in empirical research. The research is adequately communicated only if all of these elements and their relations are described precisely. Causal models with design describe the study design and the missing‐data mechanism together with the causal structure and allow the direct application of causal calculus in the estimation of the causal effects. The flow of the study is visualized by ordering the nodes of the causal diagram in two dimensions by their causal order and the time of the observation. Conclusions on whether a causal or observational relationship can be estimated from the collected incomplete data can be made directly from the graph. Causal models with design offer a systematic and unifying view to scientific inference and increase the clarity and speed of communication. Examples on the causal models for a case–control study, a nested case–control study, a clinical trial and a two‐stage case–cohort study are presented.

16.
The differential geometric framework of Amari (1982a, 1985) is applied to the study of some second order asymptotics related to the curvatures for exponential family nonlinear regression models, in which the observations are independent but not necessarily identically distributed. This paper presents a set of reasonable regularity conditions which are needed to study asymptotics from a geometric point of view in regression models. A new stochastic expansion of a first order efficient estimator is derived and used to study several asymptotic problems related to Fisher information in terms of curvatures. The bias and the covariance of the first order efficient estimator are also calculated according to the expansion.

17.
This article deals with the problem of Bayesian inference concerning the common scale parameter of several Pareto distributions. Bayesian hypothesis testing of, and Bayesian interval estimation for, the common scale parameter are given. Numerical studies, including a comparison study, a simulation study, and a practical application study, are presented in order to illustrate our procedures and to demonstrate the performance, advantages, and merits of the Bayesian procedures over the classical and generalized variable procedures.

18.
In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.
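A minimal sketch of the reweighting (inverse-probability) idea for a continuous test dichotomized at a threshold, assuming the verification probabilities are known by design; all numbers and the simulation setup are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Simulated screening scores: higher on average in diseased subjects.
disease = rng.random(n) < 0.15
score = rng.normal(loc=np.where(disease, 1.2, 0.0), scale=1.0)

# Verification is more likely when the test is suggestive of disease,
# with design-specified probabilities (illustrative values).
p_verify = np.where(score > 0.8, 0.9, 0.2)
v = rng.random(n) < p_verify

pos = score > 0.8                       # dichotomized test result
w = 1.0 / p_verify                      # inverse probability of verification

naive_tpr = (pos & disease & v).sum() / (disease & v).sum()
ipw_tpr = (w * (pos & disease & v)).sum() / (w * (disease & v)).sum()
true_tpr = (pos & disease).sum() / disease.sum()

print(f"TPR: naive {naive_tpr:.3f}, IPW-corrected {ipw_tpr:.3f}, truth {true_tpr:.3f}")
```

Restricting the calculation to verified subjects without the weights overstates the true positive rate, because suggestive results are verified more often; sweeping the threshold over a grid extends the same correction to the full ROC curve.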

19.
The bovine spongiform encephalopathy (BSE) maternal cohort study provided robust evidence of an enhanced risk of developing BSE for offspring of BSE-affected dams. We present for the first time, but in retrospect, an interim analysis of the BSE maternal cohort study and set it in historical context, some of which has only been revealed through the BSE inquiry. We also consider the implications for design of extending the BSE maternal cohort study once an enhanced risk to exposed calves had been established, to assess the risk to calves born further from the clinical onset of BSE in the dam than those in the original study. We demonstrate that, if a data monitoring committee had been established, conclusions similar to those based on the final results could have been drawn several years before the completion of the BSE maternal cohort study. Further, we conclude that an extension of the cohort study is unlikely to have been commissioned because of the substantial financial investment required, yet low power, and practical difficulties associated with implementation of any worthwhile extension.

20.
Dropout is a persistent problem in longitudinal studies. We exhibit the shortcomings of the last observation carried forward (LOCF) method: it produces biased estimates of the change in an outcome from baseline to study endpoint under informative dropout. We develop a theoretical quantification of the effect of such bias on type I and type II error rates. We present results for a setup in which a subject either completes the study or drops out during one particular interval, and also for a setup in which subjects can drop out at any time during the study. The type I error rate steadily increases as time to dropout decreases or the common sample size increases. The inflation in type I error rate can be substantial when the reasons for dropout differ between the two groups, when there is a large difference in dropout rates between the control and treatment groups, and when the common sample size is large, even when dropout subjects have only one or two fewer observations than the completers. Similar results are observed for type II error rates. A study can have very low power when early-recovered patients in the treatment group and worsening patients in the control group drop out, even near the end of the study.
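A small simulation, under an invented dropout mechanism, showing how LOCF can inflate the type I error when the reasons for dropout differ between arms; nothing here reproduces the paper's theoretical quantification:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(11)

def one_trial(n=100, visits=5):
    """Null trial (no treatment effect) with group-specific informative
    dropout; endpoint analysed by LOCF. Returns the two-sided p-value."""
    # Everyone improves over time by the same amount; no true difference.
    y = 1.0 * np.arange(visits) + rng.normal(0, 1, (2 * n, visits))
    treat = np.arange(2 * n) >= n
    last = np.full(2 * n, visits - 1)
    # Differing reasons for dropout: treated subjects who respond early
    # drop out; control subjects who worsen early drop out (illustrative).
    early_resp = y[:, 1] > np.percentile(y[:, 1], 70)
    worsening = y[:, 1] < np.percentile(y[:, 1], 30)
    last[treat & early_resp] = 1
    last[~treat & worsening] = 1
    locf = y[np.arange(2 * n), last]    # last observation carried forward
    return ttest_ind(locf[treat], locf[~treat]).pvalue

pvals = np.array([one_trial() for _ in range(2000)])
print(f"type I error at nominal 5%: {(pvals < 0.05).mean():.3f}")
```

Because the treated arm carries forward the high values of early responders while the control arm carries forward the low values of worsening patients, a spurious treatment difference emerges at the endpoint even though no effect exists.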
