Similar Articles
20 similar articles found (search time: 31 ms)
1.
In a randomized controlled trial (RCT), it is possible to improve precision and power and reduce sample size by appropriately adjusting for baseline covariates. There are multiple statistical methods to adjust for prognostic baseline covariates, such as ANCOVA. In this paper, we propose a clustering-based stratification method for adjusting for prognostic baseline covariates. Clusters (strata) are formed based only on prognostic baseline covariates, not on outcome data or treatment assignment. Therefore, the clustering procedure can be completed before outcome data become available. The treatment effect is estimated in each cluster, and the overall treatment effect is derived by combining all cluster-specific treatment effect estimates. The proposed implementation of the procedure is described. Simulation studies and an example are presented.
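A minimal sketch of the stratification step described above, assuming simulated data; the choice of k-means, the number of clusters, and the sample-size weighting of the cluster-specific estimates are illustrative assumptions, not the article's specification:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))                 # prognostic baseline covariates
trt = rng.integers(0, 2, size=n)            # randomized treatment assignment
y = X @ np.array([1.0, 0.5, -0.5]) + 0.8 * trt + rng.normal(size=n)

# Step 1: form strata from baseline covariates only (no outcome, no treatment).
strata = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: estimate the treatment effect within each stratum, then combine the
# stratum-specific estimates with sample-size weights.
effects, weights = [], []
for s in np.unique(strata):
    m = strata == s
    effects.append(y[m & (trt == 1)].mean() - y[m & (trt == 0)].mean())
    weights.append(m.sum())
overall = np.average(effects, weights=weights)
print(f"stratified treatment effect estimate: {overall:.3f}")
```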

2.
Various methods to control the influence of a covariate on a response variable are compared. These methods are ANOVA with or without homogeneity of variances (HOV) of errors and Kruskal–Wallis (K–W) tests on (covariate-adjusted) residuals, and analysis of covariance (ANCOVA). Covariate-adjusted residuals are obtained from the overall regression line fit to the entire data set, ignoring the treatment levels or factors. It is demonstrated that the methods on covariate-adjusted residuals are appropriate only when the regression lines are parallel and covariate means are equal for all treatments. Empirical size and power performance of the methods are compared by extensive Monte Carlo simulations. We manipulated conditions such as the assumptions of normality and HOV, sample size, and clustering of the covariates. The parametric methods on residuals and ANCOVA exhibited similar size and power when error terms have symmetric distributions with variances having the same functional form for each treatment, and covariates have uniform distributions within the same interval for each treatment. In such cases, the parametric tests have higher power than the K–W test on residuals. When error terms have asymmetric distributions, or have heterogeneous variances with different functional forms for each treatment, the tests are liberal, with the K–W test having higher power than the others. The methods on covariate-adjusted residuals are severely affected by clustering of the covariates relative to the treatment factors when covariate means are very different across treatments. For such clustered data, the ANCOVA method exhibits the appropriate level; however, such clustering may suggest dependence between the covariates and the treatment factors, which makes ANCOVA less reliable as well.
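A minimal sketch contrasting the covariate-adjusted-residuals approach with ANCOVA, assuming simulated one-way data; the effect sizes, sample sizes, and variable names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "trt": np.repeat(["A", "B", "C"], 50),
    "x": rng.uniform(0, 10, 150),
})
df["y"] = 2.0 + 0.5 * df["x"] + (df["trt"] == "B") * 1.0 + rng.normal(0, 1, 150)

# Covariate-adjusted residuals: fit ONE regression to the pooled data, ignoring
# treatment, then test the residuals across treatment groups.
pooled = smf.ols("y ~ x", data=df).fit()
df["resid"] = pooled.resid
groups = [g["resid"].values for _, g in df.groupby("trt")]
print("ANOVA on residuals p-value:", f_oneway(*groups).pvalue)
print("Kruskal-Wallis on residuals p-value:", kruskal(*groups).pvalue)

# ANCOVA: include both the covariate and the treatment factor in one model.
ancova = smf.ols("y ~ x + C(trt)", data=df).fit()
print(ancova.f_test("C(trt)[T.B] = 0, C(trt)[T.C] = 0"))
```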

3.
Following several recent inquiries in the UK into medical malpractice and failures to deliver appropriate standards of health care, there is pressure to introduce formal monitoring of performance outcomes routinely throughout the National Health Service. Statistical process control (SPC) charts have been widely used to monitor medical outcomes in a variety of contexts and have been specifically advocated for use in clinical governance. However, previous applications of SPC charts in medical monitoring have focused on surveillance of a single process over time. We consider some of the methodological and practical aspects that surround the routine surveillance of health outcomes and, in particular, we focus on two important methodological issues that arise when attempting to extend SPC charts to monitor outcomes at more than one unit simultaneously (where a unit could be, for example, a surgeon, general practitioner or hospital): the need to acknowledge the inevitable between-unit variation in 'acceptable' performance outcomes due to the net effect of many small unmeasured sources of variation (e.g. unmeasured case mix and data errors), and the problem of multiple testing over units as well as time. We address the former by using quasi-likelihood estimates of overdispersion, and the latter by using recently developed methods based on estimation of false discovery rates. We present an application of our approach to annual monitoring of 'all-cause' mortality data between 1995 and 2000 from 169 National Health Service hospital trusts in England and Wales.
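A minimal sketch of the two adjustments described above (an overdispersion allowance and false-discovery-rate control over units), assuming simulated unit-level mortality counts and a normal approximation; the overdispersion estimator, thresholds, and data are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_units = 169
expected = rng.uniform(50, 500, n_units)               # case-mix-adjusted expected deaths
observed = rng.poisson(expected * rng.lognormal(0, 0.08, n_units))

# Quasi-likelihood-style overdispersion: mean Pearson statistic across units.
z = (observed - expected) / np.sqrt(expected)
phi = max(1.0, np.mean(z ** 2))

# One-sided p-value per unit for "worse than expected", with variance inflated by phi.
p = stats.norm.sf(z / np.sqrt(phi))

# Benjamini-Hochberg step-up procedure to control the false discovery rate over units.
alpha = 0.05
order = np.argsort(p)
thresh = alpha * np.arange(1, n_units + 1) / n_units
passed = p[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
flagged = order[:k]
print(f"overdispersion estimate: {phi:.2f}, units flagged: {len(flagged)}")
```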

4.
This article is concerned with the effect of methods for handling missing values in multivariate control charts. We discuss the complete-case, mean substitution, regression, stochastic regression, and expectation–maximization algorithm methods for handling missing values. Estimates of the mean vector and variance–covariance matrix from the treated data set are used to build the multivariate exponentially weighted moving average (MEWMA) control chart. Based on a Monte Carlo simulation study, the performance of each of the five methods is investigated in terms of its ability to attain the nominal in-control and out-of-control average run length (ARL). We consider three sample sizes, five levels of the percentage of missing values, and three numbers of variables. Our simulation results show that imputation methods perform better than case deletion, and that the regression-based imputation methods have the best overall performance among all the competing methods.
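A minimal sketch of one of the treatments above (mean substitution) followed by a MEWMA statistic computed from the imputed data; the smoothing constant, control limit, and in-control parameters are illustrative, not the article's settings:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 3, 200
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
X[rng.random(X.shape) < 0.10] = np.nan          # roughly 10% missing values

# Mean substitution: replace each missing value with its column mean.
col_means = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_means, X)

# Phase I estimates of the mean vector and variance-covariance matrix.
mu_hat = X_imp.mean(axis=0)
S = np.cov(X_imp, rowvar=False)

# MEWMA statistic: Z_i = lam*x_i + (1-lam)*Z_{i-1}, T2_i = Z_i' Sigma_Z^{-1} Z_i.
lam, h = 0.1, 10.0
Sigma_Z_inv = np.linalg.inv(S * lam / (2 - lam))
Z = np.zeros(p)
for i, x in enumerate(X_imp):
    Z = lam * (x - mu_hat) + (1 - lam) * Z
    T2 = Z @ Sigma_Z_inv @ Z
    if T2 > h:
        print(f"signal at observation {i}: T2 = {T2:.2f}")
```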

5.
This study examines statistical process control charts used to detect a parameter shift in Poisson integer-valued GARCH (INGARCH) models and zero-inflated Poisson INGARCH models. INGARCH models have a conditional mean structure similar to GARCH models and are well known to be appropriate for analyzing count data that feature overdispersion. Special attention is paid to conditional and general likelihood ratio-based (CLR and GLR) CUSUM charts and the score function-based CUSUM (SFCUSUM) chart. The performance of each of the proposed methods is evaluated through a simulation study by calculating the average run length. Our findings show that the proposed methods perform adequately, and that the CLR chart outperforms the GLR chart as the parameter shift increases. Moreover, the SFCUSUM chart in particular is found to lead to a lower false alarm rate than the CLR chart.
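A minimal sketch of a conditional likelihood ratio (CLR) CUSUM for a mean shift in a Poisson INGARCH(1,1) model, assuming known in-control parameters; the shifted alternative, control limit, and simulated shift point are illustrative, not the article's design:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
d0, a0, b0 = 1.0, 0.3, 0.4          # in-control INGARCH(1,1) parameters
d1 = 1.5                            # shifted intercept under the alternative

def simulate(T, d, a, b):
    """Simulate a Poisson INGARCH(1,1) series of length T."""
    y, lam = np.zeros(T, dtype=int), np.zeros(T)
    lam[0] = d / (1 - a - b)
    y[0] = rng.poisson(lam[0])
    for t in range(1, T):
        lam[t] = d + a * lam[t - 1] + b * y[t - 1]
        y[t] = rng.poisson(lam[t])
    return y

# In-control segment followed by a segment with a shifted intercept.
y = np.concatenate([simulate(150, d0, a0, b0), simulate(150, d1, a0, b0)])

# Track the conditional means under both hypotheses and accumulate the CUSUM of
# the conditional log likelihood ratio.
lam0 = lam1 = d0 / (1 - a0 - b0)
cusum, h = 0.0, 5.0
for t in range(1, len(y)):
    lam0 = d0 + a0 * lam0 + b0 * y[t - 1]
    lam1 = d1 + a0 * lam1 + b0 * y[t - 1]
    llr = poisson.logpmf(y[t], lam1) - poisson.logpmf(y[t], lam0)
    cusum = max(0.0, cusum + llr)
    if cusum > h:
        print(f"CLR CUSUM signal at t = {t}")
        break
```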

6.
Monte Carlo methods are used to examine the small-sample properties of 11 test statistics that can be used for comparing several treatments with respect to their mortality experiences while adjusting for covariables. The test statistics come from three distinct models: the parametric, semiparametric and rank analysis of covariance (Quade, 1967) models. Four tests (likelihood ratio, Wald, conditional and unconditional score tests) from each of the first two models and three tests (based on rank scores) from the last model are discussed. The empirical size and power of the tests are investigated under a proportional hazards model in three situations: (1) the baseline hazard is correctly assumed to be exponential, (2) the baseline hazard is incorrectly assumed to be exponential, and (3) a treatment-covariate interaction is omitted from the analysis.

7.
Research on college students' trust in strangers is important for guiding students toward a sound view of trusting strangers. This study empirically examined the structure and current state of college students' trust in strangers using literature analysis, questionnaire surveys, and statistical analysis. The results show that: (1) college students' trust in strangers comprises five factors: risk control, helping tendency, optimism about human nature, social approval, and acceptance of loss; (2) overall trust in strangers and each of its factors are at a moderate level, ranked from highest to lowest as helping tendency, optimism about human nature, risk control, acceptance of loss, and social approval; (3) male and female students do not differ significantly in trust in strangers (p > 0.05), whereas students of different grades, majors, and places of origin, and only children versus non-only children, differ significantly on the total questionnaire or on individual dimensions (p < 0.05). In light of these findings, countermeasures are proposed in three areas: building a social credibility system, reforming the education system, and shaping college students' personalities.

8.
The analysis of a set of data consisting of N short (≤20 observations each) multivariate time series, where the observations are irregularly spaced and where observations for the different components of each multivariate series are observed at different times, is discussed. With the increased use of automatic recording devices in many fields, data such as these, which are of course samples from smooth response curves, are becoming more common. In this application, a clinical trial comparing two cements for use in hip replacement surgery, the key to the analysis was recognizing that interest lay in the degree to which the five curves representing a patient's vital signs deviated from baseline (i.e., normal for that patient) during surgery. This enabled the statisticians to define appropriate response variables. The analysis included Rousseeuw's (1984) technique for the identification of multivariate outliers and logistic regressions to identify any effects on the process producing the outliers due to treatment or covariates.

9.
The mystery of the lost star: A statistical detective story
In July 2005 the Healthcare Commission released its annual "star ratings" for English National Health Service (NHS) trusts, in which acute or specialist hospitals, mental health services, ambulance services and primary care trusts were each given 0, 1, 2 or 3 stars. There was some surprise that the Cambridge University Hospitals NHS Foundation Trust (better known as Addenbrooke's Hospital) dropped from the 3 stars obtained in 2004 to 2 stars. David Spiegelhalter investigated.

10.
Recurrent events in clinical trials have typically been analysed using either a multiple time-to-event method or a direct approach based on the distribution of the number of events. An area of application for these methods is exacerbation data from respiratory clinical trials. The different approaches to the analysis and the issues involved are illustrated for a large trial (n = 1465) in chronic obstructive pulmonary disease (COPD). For exacerbation rates, clinical interest centres on a direct comparison of rates for each treatment, which favours the distribution-based analysis rather than a time-to-event approach. Poisson regression has often been employed and has recently been recommended as the appropriate method of analysis for COPD exacerbations, but its key assumptions often appear unreasonable for this analysis. By contrast, use of a negative binomial model, which corresponds to assuming a separate Poisson parameter for each subject, offers a more appealing approach. Non-parametric methods avoid some of the assumptions required by these models, but do not provide appropriate estimates of treatment effects because of the discrete and bounded nature of the data.
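A minimal sketch comparing Poisson and negative binomial regression for exacerbation counts with a log follow-up-time offset, assuming simulated data in which subject-specific Poisson rates (a gamma frailty) create overdispersion; the effect size and dispersion are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({"trt": rng.integers(0, 2, n), "years": rng.uniform(0.5, 1.0, n)})
# Subject-specific Poisson rates (a gamma frailty) induce a negative binomial margin.
subject_rate = rng.gamma(shape=2.0, scale=0.5, size=n) * np.exp(-0.3 * df["trt"])
df["events"] = rng.poisson(subject_rate * df["years"])

pois = smf.glm("events ~ trt", data=df, family=sm.families.Poisson(),
               offset=np.log(df["years"])).fit()
nb = smf.glm("events ~ trt", data=df, family=sm.families.NegativeBinomial(alpha=0.5),
             offset=np.log(df["years"])).fit()
print("Poisson rate ratio:", np.exp(pois.params["trt"]))
print("Negative binomial rate ratio:", np.exp(nb.params["trt"]))
print("deviance (Poisson vs NB):", pois.deviance, nb.deviance)
```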

11.
Frequently in the analysis of survival data, survival times within the same group are correlated due to unobserved covariates. One way these covariates can be included in the model is as frailties. These frailty random block effects generate dependency between the survival times of individuals that are conditionally independent given the frailty. Using a conditional proportional hazards model in conjunction with the frailty, a whole new family of models is introduced. When a gamma frailty model is considered, the issue is often to find an appropriate model for the baseline hazard function. In this paper a flexible baseline hazard model based on a correlated prior process is proposed and compared with a standard Weibull model. Several model diagnostic methods are developed, and model comparison is made using recently developed Bayesian model selection criteria. The above methodologies are applied to the McGilchrist and Aisbett (1991) kidney infection data, and the analysis is performed using Markov chain Monte Carlo methods.

12.
In many two‐period, two‐treatment (2 × 2) crossover trials, for each subject, a continuous response of interest is measured before and after administration of the assigned treatment within each period. The resulting data are typically used to test a null hypothesis involving the true difference in treatment response means. We show that the power achieved by different statistical approaches is greatly influenced by (i) the ‘structure’ of the variance–covariance matrix of the vector of within‐subject responses and (ii) how the baseline (i.e., pre‐treatment) responses are accounted for in the analysis. For (ii), we compare different approaches including ignoring one or both period baselines, using a common change from baseline analysis (which we advise against), using functions of one or both baselines as period‐specific or period‐invariant covariates, and doing joint modeling of the post‐baseline and baseline responses with corresponding mean constraints for the latter. Based on theoretical arguments and simulation‐based type I error rate and power properties, we recommend an analysis of covariance approach that uses the within‐subject difference in treatment responses as the dependent variable and the corresponding difference in baseline responses as a covariate. Data from three clinical trials are used to illustrate the main points.
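A minimal sketch of the recommended analysis of covariance for a 2 × 2 crossover, using the within-subject difference in post-treatment responses as the dependent variable and the difference in period baselines as a covariate; the simulated data, effect size, and sequence-contrast estimate of the treatment effect are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 60                                         # subjects per sequence

def simulate(seq):
    """Simulate one sequence (AB or BA) with baselines and post-treatment responses."""
    subj = rng.normal(0, 1, n)                 # subject random effect
    b1, b2 = subj + rng.normal(0, 0.5, n), subj + rng.normal(0, 0.5, n)
    eff = (0.5, 0.0) if seq == "AB" else (0.0, 0.5)   # treatment A adds 0.5
    y1 = b1 + eff[0] + rng.normal(0, 0.5, n)
    y2 = b2 + eff[1] + rng.normal(0, 0.5, n)
    return pd.DataFrame({"seq": seq, "d_y": y1 - y2, "d_b": b1 - b2})

df = pd.concat([simulate("AB"), simulate("BA")], ignore_index=True)

# Period difference in responses as outcome, period difference in baselines as
# covariate; half the sequence contrast estimates the treatment effect.
fit = smf.ols("d_y ~ C(seq) + d_b", data=df).fit()
trt_effect = -fit.params["C(seq)[T.BA]"] / 2
print(f"estimated treatment effect (A - B): {trt_effect:.3f}")
```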

13.
The sequential probability ratio test (SPRT) chart is a very effective tool for monitoring manufacturing processes. This paper proposes a rational SPRT chart to monitor both the process mean and variance. This SPRT chart determines the sampling interval d based on the rational subgroup concept, according to the process conditions and administrative considerations. Since rational subgrouping is widely adopted in the design and implementation of control charts, the study of the rational SPRT chart has practical significance. The rational SPRT chart is designed optimally to minimize the average extra quadratic loss index for the best overall performance, and a systematic performance study has been conducted. From an overall viewpoint, the rational SPRT chart is more effective than the cumulative sum chart by more than 63%. Furthermore, this article provides a design table containing the optimal parameter values of the rational SPRT charts for different specifications, which will greatly help potential users select an appropriate SPRT chart for their applications. Users can also justify the application of the rational SPRT chart according to the achievable enhancement in detection effectiveness.
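A minimal sketch of a basic SPRT for an upward shift in the mean of normal data within one rational subgroup; the shift size, error probabilities, and subgroup size are illustrative, and unlike the chart above this sketch monitors the mean only, not the variance:

```python
import numpy as np

rng = np.random.default_rng(11)
mu0, sigma, delta = 0.0, 1.0, 1.0          # in-control mean, sd, shift of interest
alpha, beta = 0.005, 0.10
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

def sprt_run(x):
    """Accumulate the log likelihood ratio of mu0 + delta versus mu0 over one subgroup."""
    llr = 0.0
    for xi in x:
        llr += delta * (xi - mu0 - delta / 2) / sigma ** 2
        if llr >= A:
            return "signal"
        if llr <= B:
            return "accept in-control"
    return "continue sampling"

subgroup = rng.normal(mu0 + 1.0, sigma, size=20)   # a shifted rational subgroup
print(sprt_run(subgroup))
```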

14.
Five sampling schemes (SS) for price index construction – one cut-off sampling technique and four probability-proportional-to-size (pps) methods – are evaluated by comparing their performance on a homescan market research data set across 21 months for each of the 13 Classification of Individual Consumption by Purpose (COICOP) food groups. Classifications are derived for each of the food groups, and the population index value is used as a reference to derive performance error measures, such as root mean squared error, bias and standard deviation, for each food type. Repeated samples are taken for each of the pps schemes, and the resulting performance error measures for three of the pps schemes are analysed using regression to assess the overall effect of SS and COICOP group whilst controlling for sample size, month and population index value. Cut-off sampling appears to perform less well than the pps methods, and multistage pps seems to have no advantage over its single-stage counterpart. The jackknife resampling technique is also explored as a means of estimating the standard error of the index and compared with the actual results from repeated sampling.
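A minimal sketch of the delete-one jackknife estimate of the standard error of a price index, assuming a simple expenditure-weighted index of price relatives; the data and the index formula are illustrative, not the COICOP methodology examined in the article:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
price_rel = rng.lognormal(mean=0.02, sigma=0.1, size=n)   # current/base price relatives
weight = rng.uniform(1, 10, size=n)                        # expenditure weights

def index(pr, w):
    """Weighted arithmetic index of price relatives."""
    return np.sum(w * pr) / np.sum(w)

full = index(price_rel, weight)

# Delete-one jackknife: recompute the index leaving out each item in turn.
loo = np.array([index(np.delete(price_rel, i), np.delete(weight, i)) for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
print(f"index = {full:.4f}, jackknife SE = {se_jack:.4f}")
```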

15.
The purpose of this paper is to revisit the response surface technique of ridge analysis within the context of the “trust region” problem in numerical analysis. It is found that these two approaches inherently solve the same problem. We show that the computational difficulty termed the “hard case”, which originates in trust region methods, also exists in ridge analysis but has never been formally discussed in response surface methodology (RSM). The dual response global optimization algorithm (DRSALG), which is based on the trust region method, is applied (with a certain modification) to solving the ridge analysis problem. Some numerical comparisons against a general-purpose nonlinear optimization algorithm are illustrated in terms of examples appearing in the literature.

16.
In recent decades, the number of variables explaining observations in practical applications has increased gradually. This has led to heavy computational tasks, despite the widespread use of provisional variable selection methods in data processing. Therefore, more methodological techniques have appeared to reduce the number of explanatory variables without losing much of the information. Among these techniques, two distinct approaches are apparent: 'shrinkage regression' and 'sufficient dimension reduction'. Surprisingly, there has not been any communication or comparison between these two methodological categories, and it is not clear when each of the two approaches is appropriate. In this paper, we fill some of this gap by first reviewing each category in brief, paying special attention to the most commonly used methods in each category. We then compare commonly used methods from both categories based on their accuracy, computation time, and ability to select effective variables. A simulation study on the performance of the methods in each category is presented as well. The selected methods are also tested on two real data sets, which allows us to recommend conditions under which one approach is more appropriate for high-dimensional data.

17.
The current guidelines, ICH E14, for the evaluation of non-antiarrhythmic compounds require a 'thorough' QT (TQT) study conducted during clinical development (ICH Guidance for Industry E14, 2005). Owing to the regulatory choice of margin (10 ms), TQT studies must be conducted to rigorous standards to ensure that variability is minimized. Some of the key sources of variation can be controlled by the use of randomization, a crossover design, standardization of electrocardiogram (ECG) recording conditions and collection of replicate ECGs at each time point. However, one of the key factors in these studies is the baseline measurement, which, if not controlled and consistent across studies, could lead to significant misinterpretation. In this article, we examine three types of baseline methods widely used in TQT studies to derive a change from baseline in QTc (time-matched, time-averaged and pre-dose-averaged baseline). We discuss the impact of the baseline values on the guidance-recommended 'largest time-matched' analyses. Using simulation we show the impact of these baseline approaches on the type I error and power for both crossover and parallel group designs. We also show that the power of a study decreases as the number of time points tested in the TQT study increases. A time-matched baseline method is recommended by several authors (Drug Saf. 2005; 28(2):115-125; Health Canada guidance document: guide for the analysis and review of QT/QTc interval data, 2006) because of the circadian rhythm in QT. However, the impact of the time-matched baseline method on statistical inference and sample size should be considered carefully during the design of a TQT study. The time-averaged baseline had the highest power in comparison with the other baseline approaches.
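A minimal sketch of two of the three baseline definitions discussed above (time-matched and time-averaged) applied to simulated QTc data with a shared circadian rhythm; the column names, drug effect, and sampling times are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
times = np.array([1, 2, 4, 8, 12, 24])                        # hours post-dose
rows = []
for s in range(40):
    circadian = 5 * np.sin(2 * np.pi * times / 24)             # shared diurnal rhythm
    base = 400 + rng.normal(0, 8) + circadian + rng.normal(0, 5, len(times))
    post = base + 3 + rng.normal(0, 5, len(times))              # 3 ms drug effect
    for t, b, p in zip(times, base, post):
        rows.append({"subj": s, "time": t, "qtc_base": b, "qtc_post": p})
df = pd.DataFrame(rows)

# Time-matched baseline: subtract the baseline-day value at the same clock time.
df["d_matched"] = df["qtc_post"] - df["qtc_base"]
# Time-averaged baseline: subtract each subject's mean over all baseline time points.
df["d_averaged"] = df["qtc_post"] - df.groupby("subj")["qtc_base"].transform("mean")

# The 'largest time-matched' mean change drives the ICH E14-style analysis.
print(df.groupby("time")[["d_matched", "d_averaged"]].mean())
```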

18.
In this study, a control chart, called the MP chart, is constructed to monitor multivariate Poisson count data. The control limits of the MP chart are developed by an exact probability method based on the sum of defects or nonconformities over the quality characteristics. Numerical examples are used to illustrate the MP chart, which is then evaluated by the average run length (ARL) in simulation. The results indicate that the MP chart is more appropriate than the Shewhart-type control chart when correlation between the variables exists.
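A minimal sketch of probability-based control limits for the total count summed over several Poisson quality characteristics; for simplicity the characteristics are treated as independent here, whereas the MP chart above accounts for correlated counts, and the means and false-alarm rate are illustrative:

```python
import numpy as np
from scipy.stats import poisson

means = np.array([2.0, 3.5, 1.2])           # in-control means of the characteristics
lam_sum = means.sum()                        # sum of independent Poissons is Poisson

alpha = 0.0027                               # Shewhart-like false-alarm probability
lcl = poisson.ppf(alpha / 2, lam_sum)
ucl = poisson.ppf(1 - alpha / 2, lam_sum)
print(f"LCL = {lcl:.0f}, UCL = {ucl:.0f}")

# Monitoring: compare the total count per sample with the probability limits.
rng = np.random.default_rng(9)
totals = rng.poisson(means, size=(100, len(means))).sum(axis=1)
signals = np.where((totals > ucl) | (totals < lcl))[0]
print("out-of-control samples:", signals)
```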

19.
The authors consider Bayesian methods for fitting three semiparametric survival models incorporating time-dependent covariates that are step functions. In particular, these are the models of Cox [Cox (1972) Journal of the Royal Statistical Society, Series B, 34, 187–208], Prentice & Kalbfleisch, and Cox & Oakes [Cox & Oakes (1984) Analysis of Survival Data, Chapman and Hall, London]. The model of Prentice & Kalbfleisch [Prentice & Kalbfleisch (1979) Biometrics, 35, 25–39], which has seen very limited use, is given particular consideration. The prior for the baseline distribution in each model is taken to be a mixture of Polya trees, and posterior inference is obtained through standard Markov chain Monte Carlo methods. The authors demonstrate the implementation and comparison of these three models on the celebrated Stanford heart transplant data and on a study of the timing of cerebral edema diagnosis during emergency room treatment of diabetic ketoacidosis in children. An important feature of their overall discussion is the comparison of semiparametric families, and the ultimate criterion-based selection of a family within the context of a given data set.

20.
We discuss the impact of tuning parameter selection uncertainty in the context of shrinkage estimation and propose a methodology to account for problems arising from this issue: transferring established concepts from model averaging to shrinkage estimation yields the concept of shrinkage averaging estimation (SAE), which reflects the idea of using weighted combinations of shrinkage estimators with different tuning parameters to improve the overall stability, predictive performance and standard errors of shrinkage estimators. Two distinct approaches for an appropriate weight choice, both inspired by concepts from the recent model averaging literature, are presented: the first relates to an optimal weight choice with regard to the predictive performance of the final weighted estimator, and its implementation can be realized via quadratic programming; the second has a fairly different motivation and considers the construction of weights via a resampling experiment. Focusing on Ridge, Lasso and Random Lasso estimators, the properties of the proposed shrinkage averaging estimators resulting from these strategies are explored by means of Monte Carlo studies and are compared to traditional approaches where the tuning parameter is simply selected via cross-validation criteria. The results show that the proposed SAE methodology can improve an estimator's overall performance and reveal and incorporate tuning parameter uncertainty. As an illustration, selected methods are applied to recent data from a study on leadership behavior in life science companies.
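A minimal sketch of the shrinkage-averaging idea using Ridge fits at several tuning parameters, with simplex-constrained weights chosen to minimize out-of-fold squared error (a small quadratic program); the tuning grid, optimizer, and data are illustrative, and the article's resampling-based weight construction is not shown:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from scipy.optimize import minimize

rng = np.random.default_rng(10)
n, p = 150, 20
X = rng.normal(size=(n, p))
beta = np.concatenate([np.array([2.0, -1.5, 1.0]), np.zeros(p - 3)])
y = X @ beta + rng.normal(0, 2, n)

alphas = [0.1, 1.0, 10.0, 100.0]
# Out-of-fold predictions for each candidate tuning parameter.
P = np.column_stack([cross_val_predict(Ridge(alpha=a), X, y, cv=5) for a in alphas])

# Weights on the probability simplex minimizing squared prediction error.
obj = lambda w: np.mean((y - P @ w) ** 2)
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(obj, np.full(len(alphas), 1 / len(alphas)),
               bounds=[(0, 1)] * len(alphas), constraints=cons, method="SLSQP")
print("averaging weights:", np.round(res.x, 3))

# Final shrinkage averaging estimator: the same weights applied to the full-data fits.
coef = sum(w * Ridge(alpha=a).fit(X, y).coef_ for w, a in zip(res.x, alphas))
print("averaged coefficients (first 5):", np.round(coef[:5], 3))
```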
