Related Articles
1.
Process monitoring in the presence of data correlation has been one of the most discussed issues in the statistical process control literature over the past decade. However, retrospective analysis in the presence of data correlation with various common-cause sigma estimators has received little attention. Maragah et al. (1992), in an early paper on retrospective analysis in the presence of data correlation, address only a single common-cause sigma estimator. This paper studies the effect of data correlation on the retrospective X-chart with various common-cause sigma estimates during the stable period of an AR(1) process, with the aim of identifying standard deviation statistics that are robust to data correlation. The paper also discusses the robustness of common-cause sigma estimates for monitoring data following other time-series models, namely ARMA(1,1) and AR(p), and the bias characteristics of the robust standard deviation estimates for those models. It further studies the performance of the retrospective X-chart on forecast residuals from various forecasting methods for the AR(1) process. These studies were carried out by simulating the stable period of AR(1) and AR(2) processes and the stable and invertible period of the ARMA(1,1) process, with the average number of false alarms as the measure of performance. The results of the simulation studies are discussed.
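The effect this abstract studies is easy to reproduce in a short simulation. The sketch below is a minimal illustration, not the paper's actual study design: it assumes two specific common-cause sigma estimates (the average moving range divided by d2, and the overall sample standard deviation) and counts 3-sigma violations on a stable AR(1) series, so every alarm is false.

```python
import numpy as np

def simulate_ar1(n, phi, rng):
    """Simulate n observations of a stable AR(1) process: x_t = phi * x_{t-1} + e_t, e_t ~ N(0, 1)."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def n_false_alarms(x, sigma_hat):
    """Count points outside the retrospective 3-sigma X-chart limits; the process is stable, so all alarms are false."""
    return int(np.sum(np.abs(x - x.mean()) > 3 * sigma_hat))

rng = np.random.default_rng(42)
reps, n, phi = 500, 100, 0.8
alarms_mr, alarms_sd = [], []
for _ in range(reps):
    x = simulate_ar1(n, phi, rng)
    sigma_mr = np.mean(np.abs(np.diff(x))) / 1.128        # average moving range / d2 (d2 = 1.128 for spans of 2)
    alarms_mr.append(n_false_alarms(x, sigma_mr))
    alarms_sd.append(n_false_alarms(x, x.std(ddof=1)))    # overall sample standard deviation

# Positive autocorrelation shrinks successive differences, so the moving-range
# estimate understates the common-cause sigma and the chart alarms far more often.
print(np.mean(alarms_mr), np.mean(alarms_sd))
```

With phi = 0.8 the moving-range estimator produces an average of well over five false alarms per 100 points, while the overall standard deviation produces almost none — the kind of sensitivity to the choice of sigma estimator that the paper investigates systematically.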

2.
Standard serial correlation tests are derived assuming that the disturbances are homoscedastic, but this study shows that asymptotic critical values are not accurate when this assumption is violated. Asymptotic critical values for the ARCH(2)-corrected LM, BP and BL tests are valid only when the underlying ARCH process is strictly stationary, whereas Wooldridge's robust LM test has good properties overall. These tests exhibit similar behaviour even when the underlying process is GARCH(1,1). When the regressors include lagged dependent variables, the rejection frequencies under both the null and alternative hypotheses depend on the coefficients of the lagged dependent variables and the other model parameters. They appear to be robust across various disturbance distributions under the null hypothesis.
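The size distortion described above can be illustrated directly. The sketch below is not one of the corrected tests from the study; it applies a plain first-order LM-type statistic, n·r1², with its homoscedastic χ²(1) critical value to serially uncorrelated ARCH(1) disturbances, and shows the empirical rejection rate rising well above the nominal 5% level.

```python
import numpy as np

def simulate_arch1(n, omega, alpha, rng, burn=200):
    """ARCH(1) disturbances: e_t = z_t * sqrt(omega + alpha * e_{t-1}^2), z_t ~ N(0, 1)."""
    e = np.zeros(n + burn)
    z = rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        e[t] = z[t] * np.sqrt(omega + alpha * e[t - 1] ** 2)
    return e[burn:]                     # drop the burn-in to approximate stationarity

def lm_stat(e):
    """First-order LM-type serial correlation statistic n * r1^2."""
    e = e - e.mean()
    r1 = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)
    return len(e) * r1 ** 2

rng = np.random.default_rng(7)
reps, n, crit = 1000, 500, 3.841        # 3.841 = chi2(1) critical value at the 5% level
rejections = np.mean(
    [lm_stat(simulate_arch1(n, 1.0, 0.9, rng)) > crit for _ in range(reps)]
)

# The disturbances are serially uncorrelated, so a correctly sized test would
# reject about 5% of the time; the ARCH effects inflate the rejection rate.
print(rejections)
```

The inflation occurs because the homoscedastic variance 1/n for the sample autocorrelation understates its true variance under conditional heteroscedasticity — exactly the failure of asymptotic critical values that motivates the corrected and robust tests in the abstract.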

4.
5.
Compound optimal designs are considered where one component of the design criterion is a traditional optimality criterion, such as the D-optimality criterion, and the other component accounts for higher efficacy with low toxicity. With reference to the dose-finding problem, we suggest a technique for choosing the weights of the two components that makes the optimization problem simpler than the traditional penalized design. We allow general bivariate responses for efficacy and toxicity. We then extend the procedure to the presence of nondesignable covariates such as age, sex, or other health conditions. A new breast cancer treatment is considered to illustrate the procedures. Copyright © 2013 John Wiley & Sons, Ltd.
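As a toy illustration of the compound idea — not the paper's actual criterion, weighting technique, or cancer application — the sketch below mixes a normalised log-det D-criterion for a quadratic dose-response model with an expected efficacy-minus-toxicity term, using assumed logistic efficacy and toxicity curves, and searches a coarse grid of candidate designs.

```python
import numpy as np
from itertools import product

doses = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
F = np.vstack([np.ones_like(doses), doses, doses ** 2]).T   # basis of E[y] = b0 + b1*d + b2*d^2

def p_eff(d):  # assumed efficacy curve (purely illustrative)
    return 1.0 / (1.0 + np.exp(-8.0 * (d - 0.4)))

def p_tox(d):  # assumed toxicity curve (purely illustrative)
    return 1.0 / (1.0 + np.exp(-8.0 * (d - 0.9)))

def compound_criterion(w, lam):
    """(1 - lam) * normalised log-det D-part + lam * expected (efficacy - toxicity)."""
    sign, logdet = np.linalg.slogdet(F.T @ (w[:, None] * F))
    d_part = logdet / 3.0 if sign > 0 else -np.inf          # singular designs are inadmissible
    et_part = float(np.sum(w * (p_eff(doses) - p_tox(doses))))
    return (1.0 - lam) * d_part + lam * et_part

# Coarse design space: weights on the dose grid in steps of 0.25, summing to one.
designs = [np.array(w) for w in product(np.arange(0.0, 1.01, 0.25), repeat=5) if sum(w) == 1.0]

best_d = max(designs, key=lambda w: compound_criterion(w, 0.0))    # pure D-optimality
best_c = max(designs, key=lambda w: compound_criterion(w, 0.6))    # compound design
```

Varying `lam` traces the trade-off between estimation precision and clinical desirability; the paper's contribution is a principled way of choosing that weight so the resulting optimization stays simpler than the penalized formulation.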

6.
7.
8.
Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers – equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations of efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.
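The contrast between the allocation approaches can be sketched abstractly. The fragment below is a simplified illustration — the three-biomarker setup, function names, and weights are invented for the example: it implements equal randomisation, a pre-specified biomarker hierarchy, and a generic weighted randomisation whose arm probabilities could, for instance, be supplied by a Bayesian adaptive (BAR) rule.

```python
import numpy as np

def allocate(eligible, method, rng, hierarchy=None, weights=None):
    """Choose one treatment arm for a patient positive for the biomarkers in `eligible`."""
    if method == "equal":
        # Equal randomisation across all eligible arms.
        return str(rng.choice(eligible))
    if method == "hierarchy":
        # Deterministic: the first eligible biomarker in the pre-specified ordering.
        return next(b for b in hierarchy if b in eligible)
    if method == "weighted":
        # Randomisation weights supplied externally, e.g. posterior probabilities
        # of benefit from a Bayesian adaptive randomisation rule.
        p = np.array([weights[b] for b in eligible], dtype=float)
        return str(rng.choice(eligible, p=p / p.sum()))
    raise ValueError(f"unknown method: {method}")

rng = np.random.default_rng(1)
hierarchy = ["B1", "B2", "B3"]                           # hypothetical biomarkers, in priority order
posterior_weights = {"B1": 0.2, "B2": 0.5, "B3": 0.3}    # hypothetical current estimates of benefit

# A patient positive for B2 and B3 under two of the approaches:
hier_arms = {allocate(["B2", "B3"], "hierarchy", rng, hierarchy=hierarchy) for _ in range(10)}
equal_arms = {allocate(["B2", "B3"], "equal", rng) for _ in range(200)}
```

The hierarchy rule always sends this patient to the B2 arm, which is why it is powerful when the pre-specified pairings are right and fragile when they are wrong, while equal randomisation spreads such patients across both eligible arms regardless of any efficacy signal.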

9.
Elementary inductive proofs are presented for the binomial approximation to the hypergeometric distribution, the density of an order statistic, and the distribution of when X1, ···, Xn are a sample from N(μ, 1).
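The first of these results is the classical limit: if X is hypergeometric, counting successes in a sample of n drawn without replacement from a population of N containing K successes, then as N grows with the success fraction held fixed, the pmf converges to the binomial pmf.

```latex
P(X = k) \;=\; \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}
\;\longrightarrow\;
\binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad N \to \infty,\; K/N \to p,\; k = 0, 1, \dots, n.
```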

10.
Decision making is often supported by decision models. This study suggests that the negative impact of poor data quality (DQ) on decision making is often mediated by biased model estimation. To highlight this perspective, we develop an analytical framework that links three quality levels – data, model, and decision. The general framework is first developed at a high level, and then extended toward understanding the effect of incomplete datasets on Linear Discriminant Analysis (LDA) classifiers. The interplay between the three quality levels is evaluated analytically – initially for a one-dimensional case, and then for multiple dimensions. The impact is then further analyzed through several simulation experiments with artificial and real-world datasets. The experimental results support the analytical development and reveal a nearly exponential decline in the decision error as the completeness level increases. To conclude, we discuss the framework and the empirical findings, and elaborate on the implications of our model for data quality management and for the use of data in decision-model estimation.
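A one-dimensional toy version of this data–model–decision chain can be sketched as follows. It relies on assumptions the paper does not make (two univariate Gaussian classes, values missing completely at random, handled by simply dropping them): the LDA rule with equal priors and pooled variance reduces to thresholding at the midpoint of the class means, and the average decision error falls as training-data completeness rises.

```python
import numpy as np

def lda_threshold(x0, x1):
    """Univariate LDA with equal priors and pooled variance: threshold at the midpoint of the class means."""
    return (x0.mean() + x1.mean()) / 2.0

def mean_error(completeness, rng, n=50, delta=1.0, reps=300):
    """Average test error when a (1 - completeness) fraction of training values is missing (MCAR, dropped)."""
    errs = []
    for _ in range(reps):
        x0 = rng.normal(0.0, 1.0, n)            # class 0 training data
        x1 = rng.normal(delta, 1.0, n)          # class 1 training data
        keep0 = rng.random(n) < completeness    # MCAR missingness masks
        keep1 = rng.random(n) < completeness
        keep0[0] = keep1[0] = True              # guard: never lose an entire class
        thr = lda_threshold(x0[keep0], x1[keep1])
        t0 = rng.normal(0.0, 1.0, 1000)         # fresh, complete test data
        t1 = rng.normal(delta, 1.0, 1000)
        errs.append((np.mean(t0 > thr) + np.mean(t1 <= thr)) / 2.0)
    return float(np.mean(errs))

rng = np.random.default_rng(3)
err_low = mean_error(0.1, rng)    # 90% of training values missing
err_high = mean_error(1.0, rng)   # complete training data

# Missingness does not bias the MCAR threshold here; it inflates its variance,
# and the extra decision error shrinks rapidly as completeness increases.
print(err_low, err_high)
```

Even this minimal setup reproduces the qualitative pattern the paper reports: the decision error approaches its floor (the Bayes error, about 0.31 for a unit mean separation) quickly as the completeness level rises.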

