Similar Articles
1.
A new allocation proportion for response-adaptive designs is derived using differential equation methods. The new allocation is compared with the balanced allocation, the Neyman allocation, and the optimal allocation proposed by Rosenberger, Stallard, Ivanova, Harper and Ricks (RSIHR), both from an ethical point of view and in terms of statistical power. The new allocation has the ethical advantage of assigning more than 50% of patients to the better treatment, and it assigns a higher proportion of patients to the better treatment than the RSIHR optimal allocation when the success probabilities exceed 0.5. The statistical power under the proposed allocation is compared with that under the balanced, Neyman and RSIHR allocations through simulation; the results indicate that the power under the proposed allocation proportion is similar to that under the other three.
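For reference, the two benchmark allocations named above have simple closed forms. The Python sketch below computes the Neyman and RSIHR proportions assigned to treatment A for hypothetical success probabilities; the new allocation derived in the article is not reproduced here.

```python
import math

def neyman_allocation(p_a, p_b):
    # Neyman allocation for binary outcomes: minimizes the variance of the
    # estimated difference in success probabilities, but can assign fewer
    # than half of the patients to the better arm.
    s_a, s_b = math.sqrt(p_a * (1 - p_a)), math.sqrt(p_b * (1 - p_b))
    return s_a / (s_a + s_b)

def rsihr_allocation(p_a, p_b):
    # RSIHR optimal allocation: minimizes the expected number of failures
    # subject to a fixed variance of the estimated treatment difference.
    return math.sqrt(p_a) / (math.sqrt(p_a) + math.sqrt(p_b))

# Hypothetical success probabilities with treatment A better than B.
p_a, p_b = 0.7, 0.5
print(f"Neyman proportion to A: {neyman_allocation(p_a, p_b):.3f}")  # about 0.48
print(f"RSIHR  proportion to A: {rsihr_allocation(p_a, p_b):.3f}")   # about 0.54
```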

2.
Various bootstrap methods for variance estimation and confidence intervals in complex survey data, where sampling is done without replacement, have been proposed in the literature. The oldest, and perhaps the most intuitively appealing, is the without-replacement bootstrap (BWO) method proposed by Gross (1980). Unfortunately, the BWO method is applicable only to very simple sampling situations. We first introduce extensions of the BWO method to more complex sampling designs. The performances of the BWO method and two other bootstrap methods, the rescaling bootstrap (Rao and Wu 1988) and the mirror-match bootstrap (Sitter 1992), are then compared through a simulation study. Together these three methods encompass the various bootstrap proposals.
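As a point of reference for one of the compared schemes, the sketch below implements the rescaling bootstrap of Rao and Wu (1988) for single-stage simple random sampling without replacement; the BWO and mirror-match methods, and the extensions to more complex designs, are not shown. The function name and defaults are illustrative.

```python
import numpy as np

def rao_wu_bootstrap_var(y, N, B=1000, seed=0):
    # Rescaling bootstrap (Rao and Wu 1988) variance estimate of the sample
    # mean under single-stage simple random sampling without replacement.
    # y: observed sample values; N: population size; B: bootstrap replicates.
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    f = n / N                     # sampling fraction
    m = n - 1                     # conventional resample size
    ybar = y.mean()
    scale = np.sqrt(m * (1.0 - f) / (n - 1))
    means = np.empty(B)
    for b in range(B):
        yb = rng.choice(y, size=m, replace=True)   # with-replacement resample
        ystar = ybar + scale * (yb - ybar)         # rescaling step
        means[b] = ystar.mean()
    return np.var(means, ddof=1)
```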

3.
The identification of active effects in supersaturated designs (SSDs) is a problem of considerable interest to both scientists and engineers. The complex structure of the design matrix makes the analysis of such designs difficult, and although several methods have been proposed, existing solutions appear inadequate when more than one or two factors are active. This article presents a heuristic approach for analyzing SSDs that applies the cumulative sum (CUSUM) control chart within a sure independence screening framework. Simulations are used to investigate the performance of the method, comparing it with other well-known methods from the literature. The results establish the power of the proposed methodology.
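The building block of that heuristic is the ordinary tabular CUSUM; a generic upper one-sided version is sketched below. How the chart is applied to the screened effect estimates follows the article and is not reproduced here.

```python
import numpy as np

def upper_cusum(x, target, k):
    # Upper one-sided tabular CUSUM: accumulates deviations of the
    # observations above target + k, resetting at zero.
    s = 0.0
    path = []
    for xi in np.asarray(x, dtype=float):
        s = max(0.0, s + xi - (target + k))
        path.append(s)
    return np.array(path)
```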

4.
Despite tremendous effort on designs with cross-sectional data, little research has been conducted on sample size calculation and power analysis under repeated measures designs. In addition to the time-averaged difference, the change in mean response over time (CIMROT) is a primary interest in repeated measures analysis. We generalized the sample size and power equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures, evaluated the performance of the proposed methods through simulation, and compared our approach with a two-stage model formulation. We also created a software procedure to implement the proposed methods.
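For orientation, the classical per-group sample size formula for the time-averaged difference under a compound-symmetry correlation structure is sketched below; the article's CIMROT equations generalize this and are not reproduced.

```python
import math
from scipy.stats import norm

def n_per_group_time_averaged(delta, sigma, m, rho, alpha=0.05, power=0.80, ratio=1.0):
    # Per-group sample size (group 1) to detect a time-averaged difference
    # delta between two groups with m repeated measures per subject,
    # compound-symmetry correlation rho and allocation ratio n2/n1 = ratio.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_factor = sigma**2 * (1 + (m - 1) * rho) / m
    n1 = (1 + 1 / ratio) * z**2 * var_factor / delta**2
    return math.ceil(n1)

# Example: 4 visits, within-subject correlation 0.5, effect size 0.5 SD.
print(n_per_group_time_averaged(delta=0.5, sigma=1.0, m=4, rho=0.5))  # -> 40
```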

5.
The analysis of unreplicated factorial designs attracts much attention because there are no degrees of freedom left to estimate the error variance. In this article, we propose clustering the factorial estimates into two groups, one containing the active effects and one containing the inactive effects. The power of the proposed method is demonstrated via a comparative simulation study.
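One simple way to cast the active/inactive split as a clustering problem is two-means clustering of the absolute contrast estimates, sketched below with hypothetical effect values; this illustrates the general idea rather than the article's specific clustering algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical contrast estimates from an unreplicated 2^4 design (15 effects).
effects = np.array([11.2, -0.4, 0.9, 8.7, -0.6, 0.3, -1.1, 0.5,
                    0.2, -0.8, 1.0, -0.3, 0.7, -0.5, 0.1])

# Cluster the absolute estimates into two groups; the cluster with the
# larger centre is declared "active".
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.abs(effects).reshape(-1, 1))
active_cluster = np.argmax(km.cluster_centers_)
active = np.where(km.labels_ == active_cluster)[0]
print("Indices flagged as active:", active)
```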

6.
Four approximate methods are proposed for constructing confidence intervals for variance components in unbalanced mixed models. The first three methods are modifications of the Wald, arithmetic mean and harmonic mean procedures (see Harville and Fenech, 1985), while the fourth is an adaptive approach combining the arithmetic and harmonic mean procedures. The performance of the proposed methods was assessed by a Monte Carlo simulation study. It was found that the intervals based on Wald's method maintained the nominal confidence levels across all designs and parameter values under study. On the other hand, the arithmetic (harmonic) mean method performed well for small (large) values of the variance component relative to the error variance component. The adaptive procedure performed rather well except for extremely unbalanced designs. Further, compared with equal-tails intervals, the intervals that use special tables, e.g., Table 678 of Tate and Klett (1959), provided adequate coverage while being much shorter and are thus recommended for use in practice.

7.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs has become more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. A sensitivity analysis assesses the effects of varying the parameters of the prior distribution and the maximum total sample size on the critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new futility stopping rule yields greater power and can stop earlier with a smaller actual sample size, compared with the traditional futility stopping rule when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
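For context, the two most widely used Lan-DeMets spending functions are easy to compute; a sketch follows. The article's algorithms for Bayesian critical values and power are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    # Lan-DeMets O'Brien-Fleming-type spending function (two-sided):
    # cumulative type I error spent by information fraction t.
    t = np.asarray(t, dtype=float)
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def pocock_spending(t, alpha=0.05):
    # Lan-DeMets Pocock-type spending function.
    t = np.asarray(t, dtype=float)
    return alpha * np.log(1.0 + (np.e - 1.0) * t)

# Cumulative type I error available at three equally spaced interim looks.
looks = np.array([1/3, 2/3, 1.0])
print("OBF   :", np.round(obf_spending(looks), 4))
print("Pocock:", np.round(pocock_spending(looks), 4))
```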

8.
Crossover designs have some advantages over standard clinical trial designs and are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. Previous research has proposed new designs, such as re-randomization designs, and analysis methods, including the logistic mixture model and the beta-binomial mixture model, to deal with this problem. Although the performance of these designs and methods has been evaluated in large-scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for such moderate-scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method for moderate-scale clinical trials with irreversible endpoints by evaluating the statistical power and the bias of the treatment effect estimates. The Mantel–Haenszel method had power and bias similar to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two-period, two-treatment clinical trials with irreversible endpoints.
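For reference, the recommended Mantel–Haenszel common odds ratio can be computed directly from the period-specific 2×2 tables; the counts in the sketch below are hypothetical.

```python
import numpy as np

def mantel_haenszel_or(tables):
    # Mantel-Haenszel common odds ratio across strata.
    # tables: list of 2x2 arrays [[a, b], [c, d]] with rows = treatment arm
    # and columns = (pregnant, not pregnant); each stratum corresponds to
    # one treatment period of the two-period crossover.
    num = den = 0.0
    for tab in tables:
        (a, b), (c, d) = np.asarray(tab, dtype=float)
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

period1 = [[30, 70], [20, 80]]   # hypothetical counts, period 1
period2 = [[18, 52], [12, 58]]   # hypothetical counts, period 2 (pregnant women removed)
print(round(mantel_haenszel_or([period1, period2]), 3))
```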

9.
For the two-way fixed effects ANOVA under assumption violations, the present study employs trimmed means and Hall's transformation to correct for asymmetry, and an approximate test, such as the Alexander-Govern or Welch-James test, to correct for heterogeneity. Both unweighted and weighted means analyses of omnibus effects in unbalanced designs are considered. A simulated data set is presented, and computer simulations are performed to investigate the small-sample properties of the methods. The simulation results show that the proposed technique is valid and powerful compared with the conventional methods.
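The robust estimates these tests are built on are straightforward to compute; a minimal sketch of the trimmed mean and Winsorized variance follows (Hall's transformation and the Alexander-Govern/Welch-James statistics themselves are not reproduced).

```python
import numpy as np

def trimmed_summary(x, prop=0.20):
    # 20% trimmed mean and Winsorized variance: the robust building blocks
    # of heteroscedastic ANOVA procedures for skewed data.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(np.floor(prop * n))              # observations trimmed per tail
    tmean = x[g:n - g].mean()                # trimmed mean
    xw = x.copy()
    xw[:g] = x[g]                            # Winsorize the lower tail
    xw[n - g:] = x[n - g - 1]                # Winsorize the upper tail
    wvar = xw.var(ddof=1)                    # Winsorized variance
    return tmean, wvar

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=30)   # skewed sample
print(trimmed_summary(sample))
```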

10.
In analyzing data from unreplicated factorial designs, the half-normal probability plot is commonly used to screen for the 'vital few' effects. Recently, many formal methods have been proposed to overcome the subjectivity of this plot. Lawson (1998) (hereafter denoted as LGB) suggested a hybrid method based on the half-normal probability plot, which is a blend of the Lenth (1989) and Loh (1992) methods. The method consists of fitting a simple least squares line to the inliers, which are determined by the Lenth method; the effects exceeding the prediction limits based on the fitted line are candidates for the vital few effects. To improve the accuracy of partitioning the effects into inliers and outliers, we propose a modified LGB method (hereafter denoted as the Mod_LGB method), in which more outliers can be classified by using both Carling's modification of the box plot (Carling, 2000) and the Lenth method. If no outlier exists, or if there is a wide range in the inliers as determined by the Lenth method, more outliers can be found by the Carling method. A simulation study is conducted in unreplicated 2^4 designs with the number of active effects ranging from 1 to 6 to compare the efficiency of the Lenth method, the original LGB method, and the proposed modified version of the LGB method.
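The inlier/outlier split that both the LGB and Mod_LGB procedures start from is Lenth's pseudo-standard-error rule; that first step is sketched below (the fitted-line and Carling box-plot refinements are not reproduced).

```python
import numpy as np
from scipy.stats import t

def lenth_screen(contrasts, alpha=0.05):
    # Lenth (1989) pseudo standard error and margin of error for an
    # unreplicated two-level design; effects exceeding the margin are
    # flagged as potentially active (outliers), the rest as inliers.
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)                    # initial scale estimate
    pse = 1.5 * np.median(c[c < 2.5 * s0])     # pseudo standard error
    d = len(c) / 3.0                           # approximate degrees of freedom
    me = t.ppf(1 - alpha / 2, d) * pse         # margin of error
    return np.flatnonzero(c > me), pse, me
```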

11.
Supersaturated designs (SSDs) are useful for examining many factors with a restricted number of experimental units. Many analysis methods have been proposed to analyse data from SSDs, with some methods performing better than others when data are normally distributed. Data sets may violate the assumptions of the standard methods used to analyse SSDs, and to date the performance of these analysis methods has not been evaluated with nonnormally distributed data. We conducted a simulation study with normally and nonnormally distributed data sets to compare the identification rates, power and coverage of the true models using a permutation test, the stepwise procedure and the smoothly clipped absolute deviation (SCAD) method. Results showed that at the significance level α=0.01, the identification rates of the true models were comparable across the three methods; however, at α=0.05, both the permutation test and the stepwise procedure had considerably lower identification rates than SCAD. For most cases, the three methods produced high power and coverage. The experimentwise error rates (EER) were close to the nominal level (11.36%) for the stepwise method, while they were somewhat higher for the permutation test. The EER for the SCAD method were extremely high (84–87%) for the normal and t-distributions, as well as for data with outliers.
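Of the three penalties compared, SCAD has a closed-form thresholding rule in the orthonormal case; the sketch below shows that rule (Fan and Li, 2001) rather than the full SSD fitting procedure.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    # SCAD thresholding rule applied elementwise to a vector z of
    # least-squares-type coefficients, with penalty parameter lam and
    # the conventional shape parameter a = 3.7.
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    small = np.abs(z) <= 2 * lam
    mid = (np.abs(z) > 2 * lam) & (np.abs(z) <= a * lam)
    big = np.abs(z) > a * lam
    out[small] = np.sign(z[small]) * np.maximum(np.abs(z[small]) - lam, 0.0)  # soft threshold
    out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)       # linear interpolation
    out[big] = z[big]                                                         # no shrinkage
    return out
```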

12.
The analysis of designs based on saturated orthogonal arrays poses a difficult challenge because there are no degrees of freedom left to estimate the error variance. In this paper we propose a heuristic approach that uses the cumulative sum control chart for screening active effects in orthogonal-saturated experiments. A comparative simulation study establishes the power of the proposed method.

13.
Supersaturated designs (SSDs) constitute a large class of fractional factorial designs that can be used to screen out the important factors from a large set of potentially active ones. A major advantage of these designs is that they reduce the experimental cost dramatically, but their crucial disadvantage is the confounding involved in the statistical analysis. Identification of active effects in SSDs has been the subject of much recent study. In this article we present a two-stage procedure for analyzing two-level SSDs assuming a main-effects-only model, without including any interaction terms. The method combines sure independence screening (SIS) with different penalty functions, such as the smoothly clipped absolute deviation (SCAD), lasso and MC penalties, achieving both the down-selection and the estimation of the significant effects simultaneously. Insights on using the proposed methodology are provided through various simulation scenarios, and several comparisons with existing approaches, such as stepwise selection in combination with SCAD and the Dantzig selector (DS), are presented as well. Results of the numerical study and a real data analysis reveal that the proposed procedure can be considered an advantageous tool due to its very good performance in identifying active factors.
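A minimal two-stage sketch in the same spirit is shown below: marginal-correlation screening (SIS) followed by a penalized fit on the retained columns. The lasso stands in for the SCAD and MC penalties, which scikit-learn does not provide, and the screening size d = n/log(n) is the usual SIS default rather than a value from the article.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def sis_then_lasso(X, y, d=None):
    # Stage 1: keep the d columns with the largest absolute marginal
    # correlation with y (sure independence screening).
    # Stage 2: penalized fit (lasso) on the screened columns.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    if d is None:
        d = max(1, int(n / np.log(n)))          # common SIS screening size
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # assumes no constant columns
    yc = (y - y.mean()) / y.std()
    corr = np.abs(Xc.T @ yc) / n                # marginal correlations
    keep = np.argsort(corr)[::-1][:d]
    fit = LassoCV(cv=5).fit(Xc[:, keep], yc)
    active = keep[np.abs(fit.coef_) > 1e-8]
    return np.sort(active)
```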

14.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a smaller sample size than existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.

15.
The problem of testing for equivalence in clinical trials is restated here in terms of the proper clinical hypotheses, and a simple classical frequentist significance test based on the central t distribution is derived. This method is then shown to be more powerful than the methods based on usual (shortest) and symmetric confidence intervals.

We begin by considering a noncentral t statistic and then consider three approximations to it. A simulation is used to compare actual test sizes to the nominal values in crossover and completely randomized designs; a central t approximation is the best. The power calculation is then shown to be based on a central t distribution, and a method is developed for obtaining the sample size required to achieve a specified power. For the approximations, a simulation compares actual powers to those obtained from the t distribution and confirms that the theoretical results are close to the actual powers.
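The standard frequentist benchmark here is the two one-sided tests (TOST) procedure based on the central t distribution; a sketch is given below. It is related to, but not necessarily identical with, the test derived in the article.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, low, high, alpha=0.05):
    # Two one-sided tests (TOST) for equivalence of two independent means
    # within the margin (low, high), using the pooled-variance central t.
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    t_low = (diff - low) / se           # H0: diff <= low  vs  Ha: diff > low
    t_high = (diff - high) / se         # H0: diff >= high vs  Ha: diff < high
    p = max(1 - stats.t.cdf(t_low, df), stats.t.cdf(t_high, df))
    return diff, p, p < alpha           # equivalence declared if both tests reject
```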

16.
Supersaturated designs are factorial designs in which the number of potential effects is greater than the run size. They are commonly used in screening experiments with the aim of identifying the dominant active factors at low cost. However, an important and poorly developed research field is the analysis of such designs with a non-normal response. In this article, we develop a variable selection strategy by modifying the PageRank algorithm, which is commonly used by the Google search engine for ranking web pages. The proposed method incorporates an appropriate information-theoretic measure into this algorithm and, as a result, can be used efficiently for factor screening. A noteworthy advantage of this procedure is that it allows the use of supersaturated designs for analyzing discrete data, and therefore a generalized linear model is assumed. As shown in a thorough simulation study, in which the Type I and Type II error rates are computed for a wide range of underlying models and designs, the presented approach is quite advantageous and effective.
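The base algorithm being modified is ordinary PageRank; a generic power-iteration sketch is given below, in which the weight matrix would encode pairwise factor associations. The article's information-theoretic weighting is not reproduced, and the matrix is left to the user.

```python
import numpy as np

def pagerank(W, damping=0.85, tol=1e-10, max_iter=1000):
    # Power-iteration PageRank on a nonnegative weight matrix W, where
    # W[i, j] is the weight of the link from node i to node j.
    W = np.asarray(W, dtype=float)
    p = W.shape[0]
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    P = W / row_sums                          # row-stochastic transition matrix
    r = np.full(p, 1.0 / p)                   # uniform starting ranks
    for _ in range(max_iter):
        r_new = (1 - damping) / p + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```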

17.
Classes of distribution-free tests are proposed for testing homogeneity against order-restricted as well as unrestricted alternatives in randomized block designs with multiple observations per cell. Allowing for different interblock scoring schemes, these tests are constructed using the method of within-block rankings. The asymptotic distributions of these tests (as cell sizes tend to infinity) are derived under the assumption of homogeneity, and the Pitman asymptotic relative efficiencies relative to the least squares statistics are studied. It is shown that when blocks are governed by different distributions, an adaptive choice of scores within each block results in asymptotically more efficient tests compared with methods that ignore such information. Monte Carlo simulations of selected designs indicate that the method of within-block rankings is more power-robust with respect to differing block distributions.
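In its simplest form (one observation per cell and rank scores), the within-block-ranking approach reduces to the Friedman test; a sketch with simulated data follows. The article's tests allow multiple observations per cell and adaptive, block-specific scores, which are not shown.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Simulated randomized block design: 8 blocks, 4 treatments, one observation
# per cell, with a small increasing treatment effect.
rng = np.random.default_rng(0)
blocks, treatments = 8, 4
data = rng.normal(size=(blocks, treatments)) + np.array([0.0, 0.2, 0.4, 0.6])

# Friedman test: ranks are computed within each block, then aggregated.
stat, p = friedmanchisquare(*[data[:, j] for j in range(treatments)])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```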

18.
Judging scholarly posters poses the challenge of assigning the judges efficiently. If there are many posters and few reviews per judge, the commonly used balanced incomplete block design is not a feasible option; an additional challenge is that the number of judges is unknown before the event. We propose two connected near-balanced incomplete block designs that both satisfy the requirements of our setting: one that generates a connected assignment and balances the treatments, and another that further balances pairs of treatments. We describe both fixed and random effects models for estimating the population marginal means of the poster scores and rationalize the use of the random effects model. Via simulation studies, we evaluate the estimation accuracy and efficiency of the two designs in comparison with a random assignment, especially the chance that the truly best posters win. Both proposed designs demonstrate accuracy and efficiency gains over the random assignment.

19.
Nonregular designs are popular in planning industrial experiments because of their run-size economy. These designs often produce partially aliased effects, where the effects of different factors cannot be completely separated from each other. In this article, we propose applying adaptive lasso regression as an analytical tool for designs with complex aliasing. Its utility compared with traditional methods is demonstrated through the analysis of real-life experimental data and simulation studies.
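A compact way to fit the adaptive lasso with off-the-shelf tools is by column reweighting, sketched below; the initial estimator, weights and tuning shown are illustrative choices rather than those of the article.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

def adaptive_lasso(X, y, gamma=1.0):
    # Adaptive lasso via column rescaling: weight each column by an initial
    # (here OLS) coefficient estimate, run an ordinary lasso on the rescaled
    # columns, then transform the coefficients back to the original scale.
    # A ridge initial estimate is a common alternative for near-singular designs.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta0 = LinearRegression().fit(X, y).coef_
    w = 1.0 / np.maximum(np.abs(beta0), 1e-8) ** gamma   # adaptive weights
    fit = LassoCV(cv=5).fit(X / w, y)                    # lasso on reweighted columns
    return fit.coef_ / w                                 # back to original scale
```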

20.
In this paper, we focus on the problem of factor screening in nonregular two-level designs by gradually reducing the number of possible sets of active factors. We are particularly concerned with situations in which three or four factors are active. Our proposed method works by examining fits of projection models, where variable selection techniques are used to reduce the number of terms. To examine the reliability of the method in combination with such techniques, we use a panel of models consisting of three or four active factors, with data generated from the 12-run and the 20-run Plackett–Burman (PB) designs. The dependence of the procedure on the amount of noise, the number of active factors and the number of experimental factors is also investigated. For designs with few runs, such as the 12-run PB design, variable selection should be done with care, and default procedures in computer software may not be reliable; we suggest improvements to them. A real example is included to show how we propose factor screening can be done in practice.
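The projection idea can be sketched compactly: score every k-factor projection by the fit of its main-effects-plus-two-factor-interaction model and keep the best candidates for closer study. The function below is an illustrative sketch, not the article's full procedure with variable selection inside each projection.

```python
import numpy as np
from itertools import combinations

def best_projections(X, y, k=3, top=5):
    # Score every k-factor projection of a two-level design matrix X (columns
    # coded -1/+1) by the R^2 of the model with an intercept, the projected
    # main effects and all their two-factor interactions.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    results = []
    for cols in combinations(range(p), k):
        Z = [np.ones(n)] + [X[:, j] for j in cols]
        Z += [X[:, a] * X[:, b] for a, b in combinations(cols, 2)]
        Z = np.column_stack(Z)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        results.append((r2, cols))
    return sorted(results, reverse=True)[:top]   # best-fitting projections first
```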
