Similar Documents
20 similar documents found.
1.
The aim of a phase II clinical trial is to decide whether or not to develop an experimental therapy further through phase III clinical evaluation. In this paper, we present a Bayesian approach to the phase II trial, although we assume that subsequent phase III clinical trials will have standard frequentist analyses. The decision whether to conduct the phase III trial is based on the posterior predictive probability of a significant result being obtained. This fusion of Bayesian and frequentist techniques accepts the current paradigm for expressing objective evidence of therapeutic value, while optimizing the form of the phase II investigation that leads to it. By using prior information, we can assess whether a phase II study is needed at all, and how much or what sort of evidence is required. The proposed approach is illustrated by the design of a phase II clinical trial of a multi-drug resistance modulator used in combination with standard chemotherapy in the treatment of metastatic breast cancer. Copyright © 2005 John Wiley & Sons, Ltd.
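A minimal sketch of the decision quantity described above — the posterior predictive probability that a frequentist phase III z-test will come out significant — assuming a normal posterior for the treatment effect and a known outcome standard deviation. All numerical values are illustrative and are not taken from the breast cancer trial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2005)

# Hypothetical inputs (illustrative only):
post_mean, post_sd = 0.25, 0.15   # assumed normal posterior for the treatment effect after phase II
sigma = 1.0                        # assumed known outcome standard deviation
n_per_arm = 200                    # planned phase III sample size per arm
alpha = 0.05                       # two-sided significance level for the phase III test

se_phase3 = sigma * np.sqrt(2.0 / n_per_arm)   # sd of the phase III effect estimate
z_crit = stats.norm.ppf(1 - alpha / 2)

# Monte Carlo: draw the effect from its posterior, simulate the phase III estimate given the
# effect, and count how often the frequentist z-test would be significant.
delta = rng.normal(post_mean, post_sd, size=100_000)
delta_hat = rng.normal(delta, se_phase3)
pred_prob = np.mean(delta_hat / se_phase3 > z_crit)

print(f"Posterior predictive probability of a significant phase III result: {pred_prob:.3f}")
```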

2.
The analysis of infectious disease data presents challenges arising from the dependence in the data and the fact that only part of the transmission process is observable. These difficulties are usually overcome by making simplifying assumptions. The paper explores the use of Markov chain Monte Carlo (MCMC) methods for the analysis of infectious disease data, with the hope that they will permit analyses to be made under more realistic assumptions. Two important kinds of data sets are considered, containing temporal and non-temporal information, from outbreaks of measles and influenza. Stochastic epidemic models are used to describe the processes that generate the data. MCMC methods are then employed to perform inference in a Bayesian context for the model parameters. The MCMC methods used include standard algorithms, such as the Metropolis–Hastings algorithm and the Gibbs sampler, as well as a new method that involves likelihood approximation. It is found that standard algorithms perform well in some situations but can exhibit serious convergence difficulties in others. The inferences that we obtain are in broad agreement with estimates obtained by other methods where they are available. However, we can also provide inferences for parameters which have not been reported in previous analyses.
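As an illustration of the kind of MCMC machinery mentioned above, here is a small random-walk Metropolis–Hastings sampler for the escape probability in a fully observed Reed–Frost chain-binomial model. This is a deliberately simplified stand-in for the stochastic epidemic models in the paper; the outbreak data and the flat prior are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical fully observed chain-binomial outbreak: S[t] susceptibles and I[t] infectives
# at generation t, with I_next[t] new cases in the following generation.
S = np.array([20, 16, 10, 7])
I = np.array([1,  4,  6, 3])
I_next = np.array([4, 6, 3, 0])

def log_posterior(q):
    """Log posterior for the per-generation escape probability q under a flat prior."""
    if not 0.0 < q < 1.0:
        return -np.inf
    p_inf = 1.0 - q ** I                       # infection probability for each susceptible
    return stats.binom.logpmf(I_next, S, p_inf).sum()

# Random-walk Metropolis-Hastings on q.
n_iter, step = 20_000, 0.05
q = 0.8
draws = np.empty(n_iter)
for i in range(n_iter):
    prop = q + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(q):
        q = prop
    draws[i] = q

posterior = draws[5_000:]                      # discard burn-in
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"Posterior mean escape probability q: {posterior.mean():.3f} (95% interval {lo:.3f}-{hi:.3f})")
```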

3.

Consider the logistic linear model, with some explanatory variables overlooked. Those explanatory variables may be quantitative or qualitative. In either case, the resulting true response variable is not a binomial or a beta-binomial but a sum of binomials. Hence, standard computer packages for logistic regression can be inappropriate even if an overdispersion factor is incorporated. Therefore, a discrete exponential family assumption is considered to broaden the class of sampling models. Likelihood and Bayesian analyses are discussed. Bayesian computation techniques such as Laplacian approximations and Markov chain simulations are used to compute posterior densities and moments. Approximate conditional distributions are derived and are shown to be accurate. The Markov chain simulations are performed effectively to calculate posterior moments by using the approximate conditional distributions. The methodology is applied to Keeler's hardness of winter wheat data for checking binomial assumptions and to Matsumura's Accounting exams data for detailed likelihood and Bayesian analyses.

4.
Comparison with a standard is a general multiple comparison problem, where each system is required to be compared with a single system, referred to as a ‘standard’, as well as with other alternative systems. Screening procedures specially designed to be used for comparison with a standard have been proposed to find a subset that includes all the systems better than the standard in terms of the expected performance. Selection procedures are derived to determine the best system among a number of systems that are better than the standard, or to select the standard when it is equal to or better than the other alternatives. We develop new procedures for screening and selection through the use of two variance reduction techniques, common random numbers and control variates, which are particularly useful in the context of simulation experiments. Empirical results and a realistic example are also provided to compare our procedures with the existing ones.
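The sketch below illustrates only the common-random-numbers idea underpinning the procedures described above: when two simulated systems are driven by the same uniform random numbers, the variance of the estimated performance difference drops sharply. The toy performance measure and parameter values are assumptions for illustration, not the paper's screening or selection procedures.

```python
import numpy as np

rng = np.random.default_rng(7)
n_rep, n_jobs = 1000, 50                # replications and jobs per replication

def sim_mean_service(uniforms, rate):
    """Mean service time of a batch of jobs, generated by inverse transform."""
    return np.mean(-np.log(1.0 - uniforms) / rate)

# Independent sampling: each system gets its own random numbers.
diff_indep = np.array([
    sim_mean_service(rng.uniform(size=n_jobs), rate=1.0)
    - sim_mean_service(rng.uniform(size=n_jobs), rate=1.2)
    for _ in range(n_rep)
])

# Common random numbers: both systems share the same uniforms within each replication.
diff_crn = np.empty(n_rep)
for r in range(n_rep):
    u = rng.uniform(size=n_jobs)
    diff_crn[r] = sim_mean_service(u, rate=1.0) - sim_mean_service(u, rate=1.2)

print(f"Variance of the difference, independent sampling:   {diff_indep.var(ddof=1):.5f}")
print(f"Variance of the difference, common random numbers:  {diff_crn.var(ddof=1):.5f}")
```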

5.
Restricted maximum likelihood (REML) methods are traditionally used for analyzing mixed models. Based on a multivariate normal likelihood, these analyses are sensitive to outliers. Recently developed robust rank-based procedures offer a complete analysis of the mixed model: estimation of fixed effects, standard errors, and estimation of variance components. The results of a large Monte Carlo study are presented, comparing these two analyses for many situations over multivariate normal and contaminated normal distributions. The rank-based analyses are much more powerful and efficient than the REML analyses over all non-normal situations, while losing little power for normal errors.

6.
The effect of a test compound on neurogenically induced vasodilation in marmosets was studied using a non-standard experimental design with overlapping dosage groups and repeated measurements. In this study, the assumption that the data were normally distributed seemed inappropriate, so no traditional data analyses could be used. As an alternative, a new permutation trend test was designed based on the Jonckheere–Terpstra test statistic. This test protects the type I error without any further assumptions. Statistically significant differences in trend between treatment groups were detected. The effect of the compound was then shown across doses using subsequent Wilcoxon rank-sum tests against ordered alternatives. In all, the permutation test proved quite useful in this context. This nonparametric approach to the analysis may easily be adapted to other applications. Copyright © 2005 John Wiley & Sons, Ltd.
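A minimal sketch of a permutation trend test built on the Jonckheere–Terpstra statistic, in the spirit of the test described above. It treats the groups as independent single measurements, so it ignores the overlapping dosage groups and repeated measurements handled in the paper; the dose-group data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def jt_statistic(groups):
    """Jonckheere-Terpstra statistic for a list of samples in increasing dose order."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            xi, xj = np.asarray(groups[i]), np.asarray(groups[j])
            # Mann-Whitney count of pairs where the higher-dose value is larger (ties count 1/2).
            stat += np.sum(xj[None, :] > xi[:, None]) + 0.5 * np.sum(xj[None, :] == xi[:, None])
    return stat

def jt_permutation_test(groups, n_perm=10_000):
    """One-sided permutation p-value for an increasing trend across ordered groups."""
    observed = jt_statistic(groups)
    pooled = np.concatenate([np.asarray(g) for g in groups])
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        shuffled = np.split(rng.permutation(pooled), np.cumsum(sizes)[:-1])
        if jt_statistic(shuffled) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical vasodilation responses at three increasing doses (illustrative only).
dose_groups = [[3.1, 2.8, 3.5, 2.9], [3.6, 4.0, 3.4, 3.9], [4.2, 4.8, 4.1, 4.5]]
stat, p = jt_permutation_test(dose_groups)
print(f"JT statistic = {stat:.1f}, one-sided permutation p = {p:.4f}")
```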

7.
The AUC (area under ROC curve) is a commonly used metric to assess discrimination of risk prediction rules; however, standard errors of AUC are usually based on the Mann–Whitney U test that assumes independence of sampling units. For ophthalmologic applications, it is desirable to assess risk prediction rules based on eye-specific outcome variables which are generally highly, but not perfectly correlated in fellow eyes [e.g. progression of individual eyes to age-related macular degeneration (AMD)]. In this article, we use the extended Mann–Whitney U test (Rosner and Glynn, Biometrics 65:188–197, 2009) for the case where subunits within a cluster may have different progression status and assess discrimination of different prediction rules in this setting. Both data analyses based on progression of AMD and simulation studies show reasonable accuracy of this extended Mann–Whitney U test to assess discrimination of eye-specific risk prediction rules.

8.
Model-dependent and robust test statistics constructed using a generalized estimating equations extension of logistic regression, applicable to the analysis of correlated binary outcome data, are shown to have relatively simple algebraic expressions in stratified analyses where all variables are measured at the cluster level. These expressions are used to demonstrate the close relationship to standard procedures which assume that subjects' responses are independent, to prove that the asymptotic validity of model-dependent test statistics is assured if the average correlation between cluster members is constant, and to show that this assumption can be relaxed when there is the same number of subjects in each cluster.
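A hedged illustration of the GEE extension of logistic regression referred to above, using the statsmodels implementation with an exchangeable working correlation and robust (sandwich) standard errors. The simulated cluster-level exposure and correlated binary outcomes are assumptions made purely to show the mechanics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: a cluster-level exposure with correlated binary outcomes (illustrative only).
n_clusters, m = 40, 8
treat = rng.integers(0, 2, n_clusters)            # exposure measured at the cluster level
cluster_effect = rng.normal(0, 0.8, n_clusters)   # shared cluster effect inducing correlation
rows = []
for c in range(n_clusters):
    eta = -0.5 + 0.7 * treat[c] + cluster_effect[c]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)), size=m)
    rows += [{"cluster": c, "treat": int(treat[c]), "y": int(yi)} for yi in y]
df = pd.DataFrame(rows)

# GEE logistic regression with an exchangeable working correlation; the reported standard
# errors are robust to within-cluster correlation.
model = smf.gee("y ~ treat", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```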

9.
This study proposes a simple way to perform a power analysis of Mantel's test applied to squared Euclidean distance matrices. The general statistical aspects of the simple Mantel's test are reviewed. The Monte Carlo method is used to generate bivariate Gaussian variables in order to create squared Euclidean distance matrices. The power of the parametric correlation t-test applied to the raw data is also evaluated and compared with that of Mantel's test. The standard procedure for calculating pointwise power levels is used for validation. The proposed procedure allows one to draw the power curve by running the test only once, dispensing with the time-demanding standard procedure of Monte Carlo simulations. Unlike the standard procedure, it does not depend on knowledge of the distribution of the raw data. The simulated power function has all the properties required by power analysis theory and is in agreement with the results of the standard procedure.
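For orientation, the sketch below implements the conventional route the paper improves on: a plain Mantel permutation test on squared Euclidean distance matrices, with power estimated by repeated Monte Carlo simulation of bivariate Gaussian data. The sample size, correlation, and replication counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sq_dist_matrix(x):
    """Squared Euclidean distance matrix of a 1-D sample."""
    return (x[:, None] - x[None, :]) ** 2

def mantel_test(d1, d2, n_perm=999):
    """Simple Mantel test: permutation p-value for the correlation of two distance matrices."""
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        d2p = d2[np.ix_(perm, perm)]
        if abs(np.corrcoef(d1[iu], d2p[iu])[0, 1]) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

def mantel_power(rho, n=30, n_sim=200, alpha=0.05):
    """Monte Carlo estimate of the power of Mantel's test for bivariate Gaussian data."""
    cov = [[1.0, rho], [rho, 1.0]]
    rejections = 0
    for _ in range(n_sim):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        p = mantel_test(sq_dist_matrix(xy[:, 0]), sq_dist_matrix(xy[:, 1]))
        rejections += p <= alpha
    return rejections / n_sim

print("Estimated power at rho = 0.5:", mantel_power(0.5))
```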

10.
Health technology assessment often requires the evaluation of interventions which are implemented at the level of the health service organization unit (e.g. GP practice) for clusters of individuals. In a cluster randomized controlled trial (cRCT), clusters of patients are randomized rather than individual patients.

The majority of statistical analyses in individually randomized trials assume that the outcomes on different patients are independent. In cRCTs there is doubt about the validity of this assumption, as the outcomes of patients in the same cluster may be correlated. Hence, the analysis of data from cRCTs presents a number of difficulties. The aim of this paper is to describe the statistical methods of adjusting for clustering in the context of cRCTs.

There are essentially four approaches to analysing cRCTs:
1. Cluster-level analysis using aggregate summary data.
2. Regression analysis with robust standard errors.
3. Random-effects/cluster-specific approach.
4. Marginal/population-averaged approach.

This paper will compare and contrast the four approaches using example data, with binary and continuous outcomes, from a cRCT designed to evaluate the effectiveness of training Health Visitors in psychological approaches to identify post-natal depressive symptoms and support post-natal women, compared with usual care. The PoNDER Trial randomized 101 clusters (GP practices) and collected data on 2659 new mothers with an 18-month follow-up.
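As a small illustration of the first approach listed above (and of why ignoring clustering is misleading), the sketch below contrasts a naive individual-level t-test with a cluster-level analysis on aggregate cluster means for a simulated continuous outcome. The simulated trial is hypothetical and is not the PoNDER data; approaches 2–4 would instead use robust-variance regression, mixed models, or GEE.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(101)

# Simulated cRCT with a continuous outcome (illustrative values).
n_clusters, m = 30, 25                        # clusters per arm, patients per cluster
between_sd, within_sd, effect = 0.5, 2.0, 0.4

def simulate_arm(mu):
    """One trial arm: each cluster gets its own mean, inducing within-cluster correlation."""
    cluster_means = rng.normal(mu, between_sd, n_clusters)
    return np.array([rng.normal(cm, within_sd, m) for cm in cluster_means])

control = simulate_arm(0.0)
intervention = simulate_arm(effect)

# Naive individual-level t-test: ignores clustering, so its p-value is typically too small.
t_naive, p_naive = stats.ttest_ind(intervention.ravel(), control.ravel())

# Approach 1 - cluster-level analysis: a t-test on the aggregate cluster means.
t_cl, p_cl = stats.ttest_ind(intervention.mean(axis=1), control.mean(axis=1))

print(f"Naive individual-level analysis: t = {t_naive:.2f}, p = {p_naive:.4f}")
print(f"Cluster-level analysis:          t = {t_cl:.2f}, p = {p_cl:.4f}")
```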

11.
The Monitoring Avian Productivity and Survivorship (MAPS) programme is a cooperative effort to provide annual regional indices of adult population size and post-fledging productivity and estimates of adult survival rates from data pooled from a network of constant-effort mist-netting stations across North America. This paper provides an overview of the field and analytical methods currently employed by MAPS, a discussion of the assumptions underlying the use of these techniques, and a discussion of the validity of some of these assumptions based on data gathered during the first 5 years (1989-1993) of the programme, during which time it grew from 17 to 227 stations. Age- and species-specific differences in dispersal characteristics are important factors affecting the usefulness of the indices of adult population size and productivity derived from MAPS data. The presence of transients, heterogeneous capture probabilities among stations, and the large sample sizes required by models to deal effectively with these two considerations are important factors affecting the accuracy and precision of survival rate estimates derived from MAPS data. Important results from the first 5 years of MAPS are: (1) indices of adult population size derived from MAPS mist-netting data correlated well with analogous indices derived from point-count data collected at MAPS stations; (2) annual changes in productivity indices generated by MAPS were similar to analogous changes documented by direct nest monitoring and were generally as expected when compared to annual changes in weather during the breeding season; and (3) a model using between-year recaptures in Cormack-Jolly-Seber (CJS) mark-recapture analyses to estimate the proportion of residents among unmarked birds was found, for most tropical-wintering species sampled, to provide a better fit with the available data and more realistic and precise estimates of annual survival rates of resident birds than did standard CJS mark-recapture analyses. A detailed review of the statistical characteristics of MAPS data and a thorough evaluation of the field and analytical methods used in the MAPS programme are currently under way.

12.
Systematic review and synthesis (meta-analysis) methods are now increasingly used in many areas of health care research. We investigate the potential usefulness of these methods for combining human and animal data in human health risk assessment of exposure to environmental chemicals. Currently, risk assessments are often based on narrative review and expert judgment, but systematic review and formal synthesis methods offer a more transparent and rigorous approach. The method is illustrated by using the example of trihalomethane exposure and its possible association with low birth weight. A systematic literature review identified 13 relevant studies (five epidemiological and eight toxicological). Study-specific dose–response slope estimates were obtained for each of the studies and synthesized by using Bayesian meta-analysis models. Sensitivity analyses of the results obtained to the assumptions made suggest that some assumptions are critical. It is concluded that systematic review methods should be used in the synthesis of evidence for environmental standard setting, that meta-analysis will often be a valuable approach in these contexts and that sensitivity analyses are an important component of the approach whether or not formal synthesis methods (such as systematic review and meta-analysis) are used.
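A possible sketch of the Bayesian synthesis step described above: a Gibbs sampler for a normal–normal random-effects model pooling study-specific dose–response slopes. The slope estimates, standard errors, and prior settings are invented for illustration and do not come from the trihalomethane review; the paper's actual models are richer.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical study-specific dose-response slope estimates and standard errors (illustrative).
y = np.array([0.12, 0.05, 0.20, -0.03, 0.09])    # slope estimates
s = np.array([0.06, 0.04, 0.10, 0.07, 0.05])     # standard errors, treated as known
k = len(y)

# Model: y_i ~ N(theta_i, s_i^2), theta_i ~ N(mu, tau^2), flat prior on mu,
# Inverse-Gamma(a, b) prior on tau^2. All full conditionals are conjugate.
a, b = 0.001, 0.001
n_iter, burn = 10_000, 2_000
mu, tau2 = y.mean(), y.var()
mu_draws, tau_draws = [], []

for it in range(n_iter):
    prec = 1.0 / s**2 + 1.0 / tau2
    theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
    # Draw tau^2 from its Inverse-Gamma full conditional via a Gamma draw on the precision.
    tau2 = 1.0 / rng.gamma(a + k / 2.0, 1.0 / (b + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= burn:
        mu_draws.append(mu)
        tau_draws.append(np.sqrt(tau2))

mu_draws = np.array(mu_draws)
lo, hi = np.percentile(mu_draws, [2.5, 97.5])
print(f"Pooled slope: posterior mean {mu_draws.mean():.3f}, 95% credible interval ({lo:.3f}, {hi:.3f})")
print(f"Between-study sd (tau): posterior median {np.median(tau_draws):.3f}")
```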

13.
The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this article is to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. Additionally, a comparison of the bootstrap and the conventional methodology is made. As it turns out, the conventional asymptotic formulae (for both the least squares and maximum likelihood estimates) for estimating standard errors appear to overestimate the true standard errors. But there are two problems: (i) the first two observations y1 and y2 have been fixed, and (ii) the residuals have not been inflated. After these two factors are considered in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to be performing quite well.
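A minimal residual-bootstrap sketch for standard errors of least-squares AR(2) coefficients, following the outline above: the first two observations are held fixed and centred residuals are resampled (the paper's additional step of inflating the residuals is omitted here). The simulated series and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1234)

# Simulate an AR(2) series with illustrative parameters.
phi1, phi2, n = 0.6, -0.3, 200
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + rng.standard_normal()

def fit_ar2(series):
    """Least-squares fit of an AR(2) model without intercept; returns coefficients and residuals."""
    X = np.column_stack([series[1:-1], series[:-2]])
    resp = series[2:]
    coef, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return coef, resp - X @ coef

coef_hat, resid = fit_ar2(y)
resid = resid - resid.mean()                     # centre the residuals

# Residual bootstrap: keep y1, y2 fixed, resample residuals, rebuild the series, refit.
boot = []
for _ in range(1000):
    e = rng.choice(resid, size=n, replace=True)
    yb = y.copy()
    for t in range(2, n):
        yb[t] = coef_hat[0] * yb[t - 1] + coef_hat[1] * yb[t - 2] + e[t]
    boot.append(fit_ar2(yb)[0])
boot = np.array(boot)

print(f"LS estimates: phi1 = {coef_hat[0]:.3f}, phi2 = {coef_hat[1]:.3f}")
print(f"Bootstrap standard errors: {boot.std(axis=0, ddof=1)}")
```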

14.
Competing risk methods are time-to-event analyses that account for fatal and/or nonfatal events that may potentially alter or prevent a subject from experiencing the primary endpoint. Competing risk methods may provide a more accurate and less biased estimate of the incidence of an outcome but are rarely applied in cardiology trials. APEX investigated the efficacy of extended-duration betrixaban versus standard-duration enoxaparin to prevent a composite of symptomatic deep-vein thrombosis (proximal or distal), nonfatal pulmonary embolism, or venous thromboembolism (VTE)–related death in acute medically ill patients (n = 7513). The aim of the current analysis was to determine the efficacy of betrixaban vs standard-duration enoxaparin accounting for non-VTE–related deaths using the Fine and Gray method for competing risks. The proportion of non-VTE–related deaths was similar in the betrixaban (133, 3.6%) and enoxaparin (136, 3.7%) arms, P = .85. Both the traditional Kaplan-Meier method and the Fine and Gray method accounting for non-VTE–related death as a competing risk showed an equal reduction of VTE events when comparing betrixaban to enoxaparin (HR/SHR = 0.65, 95% CI 0.42-0.99, P = 0.046). Due to the similar proportion of non-VTE–related deaths in both treatment arms and the use of a univariate model, the Fine and Gray method provided results identical to those of the traditional Cox model. Using the Fine and Gray method in addition to the traditional Cox proportional hazards method can indicate whether the presence of a competing risk, which is dependent on the outcome, altered the risk estimate.
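The sketch below illustrates the underlying nonparametric contrast rather than the Fine and Gray regression itself: 1 minus Kaplan–Meier (treating the competing event as censoring) versus the cumulative incidence function that properly accounts for competing risks. The simulated event times, rates, and time grid are assumptions for illustration, not APEX data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated competing-risks data: event = 1 (event of interest), 2 (competing death), 0 (censored).
n = 500
t1 = rng.exponential(10.0, n)        # latent time to the event of interest
t2 = rng.exponential(25.0, n)        # latent time to the competing event
c = rng.uniform(0, 12.0, n)          # censoring time
time = np.minimum.reduce([t1, t2, c])
event = np.select([c <= np.minimum(t1, t2), t1 <= t2], [0, 1], default=2)

def one_minus_km(time, event, cause, t_grid):
    """1 - Kaplan-Meier treating competing events as censoring (known to overstate risk)."""
    surv, times, vals = 1.0, [], []
    for u in np.unique(time[event == cause]):
        at_risk = np.sum(time >= u)
        d = np.sum((time == u) & (event == cause))
        surv *= 1.0 - d / at_risk
        times.append(u)
        vals.append(1.0 - surv)
    return np.interp(t_grid, times, vals, left=0.0)

def cumulative_incidence(time, event, cause, t_grid):
    """Nonparametric cumulative incidence function accounting for competing risks."""
    km, cum, times, vals = 1.0, 0.0, [], []
    for u in np.unique(time[event > 0]):
        at_risk = np.sum(time >= u)
        d_cause = np.sum((time == u) & (event == cause))
        d_all = np.sum((time == u) & (event > 0))
        cum += km * d_cause / at_risk        # add S(u-) * cause-specific hazard increment
        km *= 1.0 - d_all / at_risk          # update all-cause survival
        times.append(u)
        vals.append(cum)
    return np.interp(t_grid, times, vals, left=0.0)

grid = np.array([3.0, 6.0, 9.0, 12.0])
print("1 - KM (naive):       ", np.round(one_minus_km(time, event, 1, grid), 3))
print("Cumulative incidence: ", np.round(cumulative_incidence(time, event, 1, grid), 3))
```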

15.
Zhang Shanyu, Statistical Research (《统计研究》), 2002, 3(10): 9-12
I. Statement of the problem. As with many other socioeconomic phenomena, the difference between urban and rural populations is relatively easy to understand qualitatively, and consensus on it is easily reached, yet it is difficult to draw a clear quantitative boundary between the two categories. The reason is that there is constant and extensive permeation, transition, and movement between them; moreover, both are historical categories, in continual development and flux, and subject to many influences of the natural and human environment. United Nations population experts have pointed out that the definition of an "urban" locality differs from country to country and, within the same country, from one period to another; in addition, in some countries two or more definitions exist side by side. Urbanization is a process that is both quantitative and qualitative, and so, as time passes, …

16.
This article provides some views on the statistical design and analysis of weather modification experiments. Perspectives were developed from experience with analyses of the Santa Barbara Phase I experiment summarized in Section 2. Randomization analyses are reported and compared with previously published parametric analyses. The parametric significance levels of tests for a cloud seeding effect agree well with the significance levels of the new corresponding randomization tests. These results, along with similar results of others, suggest that parametric analyses may be used as approximations to randomization analyses in exploratory analyses or reanalyses of weather modification experimental data.
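A small sketch of the kind of randomization analysis described above: the seeded/unseeded labels are permuted to obtain a reference distribution for the difference in mean response, and the result is set alongside a parametric two-sample t-test. The rainfall figures are hypothetical, not the Santa Barbara data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1963)

# Hypothetical target-area rainfall for seeded and unseeded occasions (illustrative only).
seeded   = np.array([1.2, 0.8, 2.5, 1.9, 3.1, 0.6, 1.4])
unseeded = np.array([0.9, 0.4, 1.1, 1.6, 0.7, 1.0, 0.5])

def randomization_test(x, y, n_perm=20_000):
    """Two-sided randomization test for a difference in mean response."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

diff, p_rand = randomization_test(seeded, unseeded)
print(f"Observed seeded - unseeded difference: {diff:.2f}")
print(f"Randomization p-value:                 {p_rand:.4f}")
print(f"Parametric two-sample t-test p-value:  {stats.ttest_ind(seeded, unseeded).pvalue:.4f}")
```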

17.
Consider a random sample X1, X2, …, Xn from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded, and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is made from a Bayesian perspective by using a Markov chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refer to the sugar content (°Brix level) of orange juice produced in different countries.

18.
Assessing dose response from flexible-dose clinical trials is problematic. The true dose effect may be obscured and even reversed in observed data because dose is related to both previous and subsequent outcomes. To remove selection bias, we propose a marginal structural model (MSM) with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated-measures analysis (generalized estimating equations with robust estimates of standard errors), with the dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) and determined as products of (i) the inverse probability of receiving the dose assignments that were actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulation showed that the IPTW MSM methodology is highly sensitive to model misspecification even when weights are known. Practitioners applying MSMs should be cautious about the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
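A simplified, single-visit sketch of the IPTW idea described above: dose-receipt probabilities are estimated by logistic regression, their inverses are used as weights, and a weighted outcome regression recovers the dose effect that the naive comparison masks. The data-generating values, variable names, and the reduction to one visit (no dose history or dropout weighting) are all assumptions for illustration, not the paper's full MSM.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated flexible-dose data: sicker patients (worse prior outcome) tend to be escalated
# to the high dose, which masks the true dose benefit in a naive comparison.
n = 2000
prior = rng.normal(0, 1, n)                                   # higher = worse symptoms
p_high = 1 / (1 + np.exp(-(0.2 + 1.0 * prior)))               # escalation depends on severity
dose = rng.binomial(1, p_high)
y = prior - 0.5 * dose + rng.normal(0, 1, n)                  # true dose effect = -0.5
df = pd.DataFrame({"prior": prior, "dose": dose, "y": y})

# Naive comparison, confounded by dose selection.
naive = df.loc[df.dose == 1, "y"].mean() - df.loc[df.dose == 0, "y"].mean()

# Step 1: model the probability of the dose actually received, given the prior outcome.
ps = smf.logit("dose ~ prior", data=df).fit(disp=0).predict(df)
df["iptw"] = np.where(df.dose == 1, 1 / ps, 1 / (1 - ps))

# Step 2: weighted outcome regression (a one-visit marginal structural model) with robust SEs.
msm = smf.wls("y ~ dose", data=df, weights=df["iptw"]).fit(cov_type="HC1")

print(f"Naive dose effect:      {naive:.2f}")
print(f"IPTW-weighted estimate: {msm.params['dose']:.2f} (true effect -0.5)")
```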

19.
The multinomial selection problem is considered under the formulation of comparison with a standard, where each system is required to be compared to a single system, referred to as a "standard," as well as to other alternative systems. The goal is to identify systems that are better than the standard, or to retain the standard when it is equal to or better than the other alternatives in terms of the probability of generating the largest or smallest performance measure. We derive new multinomial selection procedures for comparison with a standard to be applied in different scenarios, including an exact small-sample procedure and an approximate large-sample procedure. Empirical results and proofs are presented to demonstrate the statistical validity of our procedures. Tables of the procedure parameters and the corresponding exact probability of correct selection are also provided.

20.
Samples from k normal populations having a common variance are used to construct two-sided and one-sided simultaneous prediction intervals for the differences between the future means of independent random samples from each of these populations and a standard. These prediction intervals are particularly useful if one has sampled the performance of several products and wishes to simultaneously predict the differences between future sample mean performance of these products and a standard with a predetermined joint probability. Methods for sample size determination are also given. The procedures are illustrated with a numerical example. Received: February 25, 2000; revised version: February 6, 2001
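A rough sketch of simultaneous prediction intervals in this spirit, using a pooled variance estimate and a conservative Bonferroni critical value in place of the exact multivariate-t constants the paper would use. The data, the future sample size, and the prediction-error variance formula sigma^2 * (1/n_i + 1/n_0 + 2/m) are stated assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Current samples from a standard and k treatment populations, assumed normal with
# common variance (all numbers illustrative).
samples = {
    "standard":  np.array([10.1, 9.8, 10.4, 10.0, 9.7, 10.3]),
    "product A": np.array([10.9, 11.2, 10.6, 11.0, 10.8, 11.4]),
    "product B": np.array([10.2, 10.0, 10.5, 10.7, 9.9, 10.3]),
}
m_future = 5            # size of each future sample whose mean is to be predicted
alpha = 0.05

std = samples["standard"]
others = {name: x for name, x in samples.items() if name != "standard"}
k = len(others)

# Pooled variance across all current samples (common-variance assumption).
groups = list(samples.values())
df_pooled = sum(len(g) - 1 for g in groups)
s2 = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_pooled

# Bonferroni-adjusted two-sided simultaneous prediction intervals for the differences
# between each future sample mean and the future standard mean.
t_crit = stats.t.ppf(1 - alpha / (2 * k), df_pooled)
for name, x in others.items():
    centre = x.mean() - std.mean()
    pred_se = np.sqrt(s2 * (1 / len(x) + 1 / len(std) + 2 / m_future))
    lo, hi = centre - t_crit * pred_se, centre + t_crit * pred_se
    print(f"{name} - standard: {centre:.2f}  simultaneous PI ({lo:.2f}, {hi:.2f})")
```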
