Similar Articles
20 similar articles were retrieved.
1.
For comparing treatments in clinical trials, Atkinson (1982) introduced optimal biased coins that balance patients across treatment assignments by using D-optimality under the assumption of homoscedastic responses across treatments. However, this assumption can be violated in many real applications. In this paper, we relax the homoscedasticity assumption in the setting of k treatments with k>2. A general family of optimal response-adaptive biased coin designs is proposed following Atkinson's procedure. Asymptotic properties of the proposed designs are obtained, and some advantages of the proposed designs are discussed.
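As a rough illustration of the idea only (not Atkinson's D-optimal rule itself), the sketch below skews allocation probabilities toward under-represented arms while targeting Neyman-type proportions that grow with the assumed response standard deviations; the function name, the bias parameter gamma, and all numerical values are hypothetical.

```python
import numpy as np

def biased_coin_probs(n, sigma, gamma=2.0):
    """Illustrative allocation probabilities for k heteroscedastic arms.

    n     : current sample sizes per arm
    sigma : assumed response standard deviations per arm
    gamma : bias parameter; larger values push harder toward the target

    The target proportions follow Neyman-type allocation (proportional to
    sigma); arms currently below their target get inflated probabilities.
    This is a simple biased-coin analogue, NOT Atkinson's D-optimal rule.
    """
    n = np.asarray(n, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    target = sigma / sigma.sum()
    current = n / n.sum() if n.sum() > 0 else target
    w = target * (target / np.maximum(current, 1e-12)) ** gamma
    return w / w.sum()

rng = np.random.default_rng(0)
sigma = [1.0, 2.0, 3.0]                 # three arms with unequal variability
counts = np.zeros(3, dtype=int)
for _ in range(300):
    counts[rng.choice(3, p=biased_coin_probs(counts, sigma))] += 1
print(counts / counts.sum())            # roughly proportional to 1 : 2 : 3
```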

2.
This article focuses on designs involving two distinct groups of factors. In particular, we assume that between-group interactions are more important than within-group interactions. Under this assumption, a new word-length pattern is proposed to characterize the aliasing severity of a design, and the concepts of resolution and aberration are defined accordingly. Furthermore, we have obtained various bounds on the maximum number of factors that a design with given resolution can accommodate.

3.
In stratified otolaryngologic (or ophthalmologic) studies, misleading results may be obtained when the confounding effect and the correlation between responses from the two ears are ignored. A score statistic and a Wald-type statistic are presented for testing equality in a stratified bilateral-sample design, and their corresponding sample size formulae are given. A score statistic for testing homogeneity of the difference between two proportions, and a score confidence interval for the common difference of two proportions in a stratified bilateral-sample design, are also derived. Empirical results show that (1) the score statistic and the Wald-type statistic based on the dependence model assumption outperform other statistics in terms of type I error rates; (2) the score confidence interval demonstrates reasonably good coverage properties; (3) the sample size formula via the Wald-type statistic under the dependence model assumption is rather accurate. A real example is used to illustrate the proposed methodologies.

4.
Noninferiority testing in clinical trials is commonly understood in a Neyman-Pearson framework, and has been discussed in a Bayesian framework as well. In this paper, we discuss noninferiority testing in a Fisherian framework, in which the only assumption necessary for inference is the randomization of treatments to study subjects. Randomization plays an important role in not only the design but also the analysis of clinical trials, whatever the underlying inferential framework. The ability to use permutation tests depends on assumptions of exchangeability, and we discuss the possible uses of permutation tests in active control noninferiority analyses. The other practical implications of this paper are admittedly minor, but they lead to a better understanding of the historical and philosophical development of active control noninferiority testing. The conclusion may also frame discussion of other complicated issues in noninferiority testing, such as the role of an intention-to-treat analysis.
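A minimal sketch of a randomization-based noninferiority test, assuming a location-shift formulation in which the margin is absorbed into the treatment responses and exchangeability then justifies an ordinary one-sided permutation test; the function name, margin, and simulated data are illustrative and are not taken from the paper.

```python
import numpy as np

def ni_permutation_pvalue(y_trt, y_ctl, margin, n_perm=10000, seed=1):
    """One-sided permutation p-value for noninferiority (higher = better).

    H0: mean(trt) - mean(ctl) <= -margin.  The margin is absorbed into the
    treatment responses, after which an ordinary one-sided permutation test
    of superiority is applied.  Exchangeability of the shifted responses is
    the key (untestable) assumption.
    """
    rng = np.random.default_rng(seed)
    y_trt = np.asarray(y_trt, float) + margin
    y_ctl = np.asarray(y_ctl, float)
    obs = y_trt.mean() - y_ctl.mean()
    pooled = np.concatenate([y_trt, y_ctl])
    n_t = len(y_trt)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if perm[:n_t].mean() - perm[n_t:].mean() >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
trt = rng.normal(9.7, 2.0, 60)    # active arm slightly worse than control ...
ctl = rng.normal(10.0, 2.0, 60)   # ... but within a noninferiority margin of 1
print(ni_permutation_pvalue(trt, ctl, margin=1.0))
```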

5.
A RANDOMIZED LONGITUDINAL PLAY-THE-WINNER DESIGN FOR REPEATED BINARY DATA
In some clinical trials with two treatment arms, the patients enter the study at different times and are then allocated to one of two treatment groups. It is important for ethical reasons that there is greater probability of allocating a patient to the group that has displayed more favourable responses up to the patient's entry time. There are many adaptive designs in the literature to meet this ethical constraint, but most have a single binary response. Often the binary response is longitudinal in nature, being observed repeatedly over different monitoring times. This paper develops a randomized longitudinal play‐the‐winner design for such binary responses which meets the ethical constraint. Some performance characteristics of this design have been studied. It has been implemented in a trial of pulsed electro‐magnetic field therapy with rheumatoid arthritis patients.  相似文献   
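For reference, the classic single-response randomized play-the-winner urn RPW(α, β) (not the longitudinal extension developed in the paper) can be simulated in a few lines; the success probabilities and urn parameters below are illustrative.

```python
import numpy as np

def rpw_trial(p_success, n_patients, alpha=1, beta=1, seed=0):
    """Simulate a two-arm randomized play-the-winner urn RPW(alpha, beta).

    p_success : (pA, pB) true success probabilities of the two arms.
    The urn starts with `alpha` balls of each type; a success on an arm adds
    `beta` balls of that arm, a failure adds `beta` balls of the other arm,
    so allocation drifts toward the better-performing arm as patients accrue.
    """
    rng = np.random.default_rng(seed)
    urn = np.array([alpha, alpha], dtype=float)
    assigned = np.zeros(2, dtype=int)
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())   # draw a ball, with replacement
        assigned[arm] += 1
        success = rng.random() < p_success[arm]
        urn[arm if success else 1 - arm] += beta
    return assigned

# arm 0 is better; roughly two-thirds of patients end up on it
print(rpw_trial(p_success=(0.7, 0.4), n_patients=200))
```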

6.
Papers dealing with measures of predictive power in survival analysis have regarded independence of censoring, or unbiasedness of the estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that we cannot have unbiased estimates if all values greater than a given time τ are censored. This is because censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring has different effects on the estimation of the measure for the Cox model than for parametric models, and we discuss them separately. We also give some warnings about the usage of the measure, especially when it comes to comparing essentially different models.

7.
The repeated confidence interval (RCI) is an important tool for the design and monitoring of group sequential trials in which the trial need not be stopped according to pre-planned statistical stopping rules. In this article, we derive RCIs when data from the different stages of the trial are not independent, so that the underlying process is no longer Brownian motion (BM). Under this assumption, a larger class of stochastic processes, fractional Brownian motion (FBM), is considered. Comparisons of RCI width and sample size requirements are made with those under Brownian motion for different analysis times, type I error rates and numbers of interim analyses. Power-family spending functions, including the Pocock and O'Brien-Fleming design types, are considered in these simulations. Interim data from BHAT and oncology trials are used to illustrate how to derive RCIs under FBM for efficacy and futility monitoring.

8.
The randomized response technique is an effective survey method designed to elicit sensitive information while ensuring the privacy of the respondents. In this article, we present some new results on the randomized response model in situations wherein one or two response variables are assumed to follow a multinomial distribution. For a single sensitive question, we use the well-known Hopkins randomization device to derive estimates, both under the assumption of truthful and untruthful responses, and present a technique for making pairwise comparisons. When there are two sensitive questions of interest, we derive a Pearson product-moment correlation estimator based on the multinomial model assumption. This estimator may be used to quantify the linear relationship between two variables when multinomial response data are observed according to a randomized-response protocol.
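As a simpler stand-in for the Hopkins device used in the article, the sketch below implements the classic Warner-type randomized-response estimator for a single binary sensitive question, assuming truthful responses; the device probability p and the simulated prevalence are illustrative.

```python
import numpy as np

def warner_estimate(yes_count, n, p):
    """Warner-type randomized-response estimator of a sensitive proportion.

    Each respondent answers the sensitive statement with probability p and its
    negation with probability 1 - p (p != 0.5).  With lam the observed 'yes'
    rate, lam = p*pi + (1 - p)*(1 - pi), which inverts to the estimator below.
    """
    lam = yes_count / n
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    se_hat = np.sqrt(lam * (1 - lam) / (n * (2 * p - 1) ** 2))
    return pi_hat, se_hat

# simulate truthful respondents with true sensitive prevalence 0.20
rng = np.random.default_rng(0)
n, p, pi_true = 2000, 0.7, 0.20
ask_sensitive = rng.random(n) < p
is_sensitive = rng.random(n) < pi_true
yes = np.where(ask_sensitive, is_sensitive, ~is_sensitive).sum()
print(warner_estimate(yes, n, p))   # estimate close to 0.20
```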

9.
To increase the efficiency of comparisons between treatments in clinical trials, we may consider the use of a multiple matching design, in which each patient receiving the experimental treatment is matched with more than one patient receiving the standard treatment. To assess the efficacy of the experimental treatment, the risk ratio (RR) of patient responses between the two treatments is one of the most commonly used measures. Because the probability of patient response in clinical trials is often not small, the odds ratio (OR), whose practical interpretation is less easily understood, cannot approximate the RR well. Thus, all sample size formulae in terms of the OR for case-control studies with multiple matched controls per case are of limited use here. In this paper, we develop three sample size formulae based on the RR for randomized trials with multiple matching. We propose a test statistic for testing the equality of RRs under multiple matching. On the basis of Monte Carlo simulation, we evaluate the performance of the proposed test statistic with respect to type I error. To evaluate the accuracy and usefulness of the three sample size formulae developed in this paper, we further calculate their simulated powers and compare them with those of the sample size formula that ignores matching and the sample size formula based on the OR for multiple matching published elsewhere. Finally, we include an example on the use of supplemental ascorbate in the supportive treatment of terminal cancer patients, which employed a multiple matching study design, to illustrate the use of these formulae.
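For orientation, the textbook per-group sample size for detecting a risk ratio on the log scale with two independent, unmatched groups can be computed as below; this baseline formula is not one of the three matched-design formulae derived in the paper, and the response rates used are illustrative.

```python
from math import ceil, log
from scipy.stats import norm

def n_per_group_log_rr(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for testing RR = p1/p2 = 1 on the log scale,
    assuming two independent (unmatched) groups of equal size -- a textbook
    baseline, not the matched-design formulae derived in the paper."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var = (1 - p1) / p1 + (1 - p2) / p2      # per-subject variance of log RR
    return ceil((z_a + z_b) ** 2 * var / log(p1 / p2) ** 2)

print(n_per_group_log_rr(p1=0.5, p2=0.35))   # about 177 per group
```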

10.
11.
Response-adaptive designs in clinical trials incorporate information from prior patient responses in order to assign better performing treatments to the future patients of a clinical study. An example of a response-adaptive design that has received much attention in recent years is the randomized play-the-winner design (RPWD). Beran [1977. Minimum Hellinger distance estimates for parametric models. Ann. Statist. 5, 445–463] investigated the minimum Hellinger distance procedure (MHDP) for continuous data and showed that the minimum Hellinger distance estimator (MHDE) of a finite-dimensional parameter is as efficient as the MLE (maximum likelihood estimator) under a true model assumption. This paper develops minimum Hellinger distance methodology for data generated using the RPWD. A new algorithm using a Monte Carlo approximation to the estimating equation is proposed. Consistency and asymptotic normality of the estimators are established, and the robustness and small-sample performance of the estimators are illustrated using simulations. When applied to data from a clinical trial conducted by Eli Lilly and Company, the methodology brings out the treatment effect in one of the strata using frequentist techniques, complementing the Bayesian argument of Tamura et al. [1994. A case study of an adaptive clinical trial in the treatment of out-patients with depressive disorder. J. Amer. Statist. Assoc. 89, 768–776].

12.
Linear mixed models (LMM) are frequently used to analyze repeated measures data, because they flexibly model the within-subject correlation often present in this type of data. The most popular LMM for continuous responses assumes that both the random effects and the within-subject errors are normally distributed, which can be an unrealistic assumption that obscures important features of the variation present within and among the units (or groups). This work presents skew-normal linear mixed models (SNLMM) that relax the normality assumption by using a multivariate skew-normal distribution, which includes the normal distribution as a special case and provides robust estimation in mixed models. An MCMC scheme is derived, and the results of a simulation study demonstrate that standard information criteria may be used to detect departures from normality. The procedures are illustrated using a real data set from a cholesterol study.

13.
Missing data and, more generally, imperfections in implementing a study design are an endemic problem in large scale studies involving human subjects. We present an analysis of an experiment in the interaction between general practitioners and their patients, in which the issue of missing data is addressed by a sensitivity analysis using multiple imputation. Instead of specifying a model for missingness we explore certain extreme ways of departing from the assumption of data missing at random and establish the largest extent of such departures which would still fail to supplant the evidence about the studied effect. An important advantage of the approach is that the algorithm intended for the complete data, to fit generalized linear models with random effects, is used without any alteration.
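A generic sketch of this kind of sensitivity analysis, assuming a normal regression imputation model and a fixed delta shift applied to the imputed values to represent departures from missing at random; the estimand (a simple mean), the working model, and the delta values are illustrative and do not reproduce the paper's generalized linear mixed model analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: outcome y depends on covariate x; about 30% of y is missing
n = 300
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)

def delta_adjusted_mi(y_obs, x, delta, m=20, rng=rng):
    """Multiple imputation of the mean of y with a fixed delta adjustment.

    Imputations are drawn from a normal regression of y on x fitted to the
    complete cases (an MAR working model); `delta` is added to every imputed
    value to represent a departure from MAR.  Estimates are combined with
    Rubin's rules.  For simplicity the regression parameters are held fixed
    across imputations (improper MI); this is a generic tipping-point sketch.
    """
    obs = ~np.isnan(y_obs)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[obs], y_obs[obs], rcond=None)
    sigma = np.sqrt(np.sum((y_obs[obs] - X[obs] @ beta) ** 2) / (obs.sum() - 2))
    est, var = [], []
    for _ in range(m):
        y_imp = y_obs.copy()
        y_imp[~obs] = X[~obs] @ beta + rng.normal(scale=sigma, size=(~obs).sum()) + delta
        est.append(y_imp.mean())
        var.append(y_imp.var(ddof=1) / len(y_imp))
    est, var = np.array(est), np.array(var)
    total_var = var.mean() + (1 + 1 / m) * est.var(ddof=1)   # Rubin's rules
    return est.mean(), np.sqrt(total_var)

for delta in (0.0, -0.5, -1.0):   # increasingly pessimistic missing-data scenarios
    print(delta, delta_adjusted_mi(y_obs, x, delta))
```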

14.
Statistical methods of risk assessment for continuous variables
Adverse health effects for continuous responses are not as easily defined as adverse health effects for binary responses. Kodell and West (1993) developed methods for defining adverse effects for continuous responses and the associated risk. Procedures were developed for finding point estimates and upper confidence limits for additional risk under the assumption of a normal distribution and quadratic mean response curve with equal variances at each dose level. In this paper, methods are developed for point estimates and upper confidence limits for additional risk at experimental doses when the equal variance assumption is relaxed. An interpolation procedure is discussed for obtaining information at doses other than the experimental doses. A small simulation study is presented to test the performance of the methods discussed.
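A minimal sketch of the additional-risk calculation in this style of analysis, assuming a normal response with a quadratic mean in dose, an adverse cutoff set k control standard deviations below the control mean, and a dose-group variance allowed to differ from the control variance; all parameter values are made up for illustration.

```python
from scipy.stats import norm

def additional_risk(dose, beta, sigma0, sigma_d, k=2.0):
    """Additional risk of an adverse continuous response at a given dose.

    An adverse response is defined as a value more than k control standard
    deviations below the control mean.  The mean is quadratic in dose,
    mu(d) = b0 + b1*d + b2*d**2, and the dose-group standard deviation
    sigma_d may differ from the control value sigma0 (i.e. the equal-variance
    assumption is relaxed).  All parameter values below are illustrative.
    """
    b0, b1, b2 = beta
    cutoff = b0 - k * sigma0                       # adverse threshold from controls
    mu_d = b0 + b1 * dose + b2 * dose ** 2
    risk_d = norm.cdf((cutoff - mu_d) / sigma_d)   # P(adverse | dose)
    risk_0 = norm.cdf(-k)                          # P(adverse | control)
    return risk_d - risk_0

for d in (1.0, 2.0, 4.0):
    print(d, additional_risk(d, beta=(10.0, -0.8, -0.05), sigma0=1.0, sigma_d=1.2))
```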

15.
In this article, we suggest simple moment-based estimators to deal with unobserved heterogeneity in a special class of nonlinear regression models that includes as main particular cases exponential models for nonnegative responses and logit and complementary loglog models for fractional responses. The proposed estimators: (i) treat observed and omitted covariates in a similar manner; (ii) can deal with boundary outcomes; (iii) accommodate endogenous explanatory variables without requiring knowledge on the reduced form model, although such information may be easily incorporated in the estimation process; (iv) do not require distributional assumptions on the unobservables, a conditional mean assumption being enough for consistent estimation of the structural parameters; and (v) under the additional assumption that the dependence between observables and unobservables is restricted to the conditional mean, produce consistent estimators of partial effects conditional only on observables.
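One standard moment condition for an exponential conditional mean with multiplicative unobserved heterogeneity (not necessarily the exact estimators of the article) can be solved as below; only E[u | x] = 1 is assumed, and the data-generating values are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)

# nonnegative outcome with multiplicative unobserved heterogeneity:
# y = exp(x'b) * u with E[u | x] = 1, but u otherwise unrestricted
n = 2000
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.2, 0.5])
u = rng.lognormal(mean=-0.125, sigma=0.5, size=n)   # E[u] = 1
y = np.exp(x @ beta_true) * u

def moment(beta):
    """Sample moment conditions E[(y * exp(-x'b) - 1) * x] = 0; only the
    conditional-mean restriction is used, no distribution for u."""
    resid = y * np.exp(-(x @ beta)) - 1.0
    return x.T @ resid / n

print(fsolve(moment, x0=np.zeros(2)))   # roughly (0.2, 0.5)
```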

16.
The underlying assumption in the design of control charts is that the measurements within a sample are independently distributed. However, there are many situations in which this independence assumption is untenable in practice. In this paper, the economic design of a cumulative sum (CUSUM) control chart for data that are correlated within a sample is developed. A genetic algorithm is applied to find the optimal design parameters of the CUSUM control chart by minimizing a cost function. An illustrative example is given. A sensitivity analysis is then conducted to evaluate the effects of the cost parameters, process parameters, and correlation coefficient on the economic design.
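For reference, the tabular CUSUM statistics being designed can be computed with the standard recursion below; this sketch only monitors the chart and does not reproduce the economic (cost-based) optimization of the design parameters via a genetic algorithm, and the reference value k, decision interval h, and simulated data are illustrative.

```python
import numpy as np

def tabular_cusum(x, mu0, k, h):
    """Two-sided tabular CUSUM; returns the index of the first signal.

    x   : observations (or sample means)
    mu0 : in-control target mean
    k   : reference value, often half the shift to be detected
    h   : decision interval; a signal occurs when C+ or C- exceeds h
    """
    c_plus = c_minus = 0.0
    for i, xi in enumerate(x):
        c_plus = max(0.0, c_plus + xi - mu0 - k)
        c_minus = max(0.0, c_minus + mu0 - xi - k)
        if c_plus > h or c_minus > h:
            return i, c_plus, c_minus
    return None, c_plus, c_minus

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 50),    # in control
                       rng.normal(1.0, 1.0, 50)])   # mean shifts by one sigma
print(tabular_cusum(data, mu0=0.0, k=0.5, h=5.0))   # signals soon after obs 50
```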

17.
In many toxicological assays, interactions between primary and secondary effects may cause a downturn in mean responses at high doses. In this situation, the typical monotonicity assumption is invalid and may be quite misleading. Prior literature addresses the analysis of response functions with a downturn, but so far as we know, this paper initiates the study of experimental design for this situation. A growth model is combined with a death model to allow for the downturn in mean responses at high doses. Several different objective functions are studied. When the number of treatments equals the number of parameters, the Fisher information is found to be independent of the model for the treatment means and of the magnitudes of the treatments. In general, A- and DA-optimal weights for estimating adjacent mean differences are found analytically for a simple model and numerically for a biologically motivated model. Results on c-optimality are also obtained for estimating the peak dose and the EC50 (the treatment with response half way between the control and the peak response on the increasing portion of the response function). Finally, when interest lies only in the increasing portion of the response function, we propose composite D-optimal designs.
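A small numerical sketch of the quantities of interest, assuming an illustrative downturn model in which a saturating growth term is multiplied by an exponential death term; the functional form and parameter values are made up and are not the paper's biologically motivated model.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def mean_response(d, e0=1.0, emax=4.0, ed50=2.0, lam=0.15):
    """Illustrative downturn model: a saturating growth term multiplied by an
    exponential death term.  All parameter values are made up."""
    return e0 + emax * d / (d + ed50) * np.exp(-lam * d)

# peak dose: maximise the mean response over a plausible dose range
opt = minimize_scalar(lambda d: -mean_response(d), bounds=(0.0, 50.0),
                      method="bounded")
peak_dose, peak_resp = opt.x, mean_response(opt.x)

# EC50 on the increasing portion: dose whose response is halfway between
# the control response and the peak response
target = 0.5 * (mean_response(0.0) + peak_resp)
ec50 = brentq(lambda d: mean_response(d) - target, 1e-9, peak_dose)

print(round(peak_dose, 3), round(ec50, 3))
```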

18.
This article proposes a new fractional age assumption (FAA) based on cubic polynomial interpolation (CPI) and applies it to estimate mortality rates and related actuarial quantities. The validity of the method under CPI is proved theoretically, and the advantages of the CPI assumption are discussed from three different perspectives: the death information utilized, the properties of the force of mortality, and an optimality criterion. The results show that the CPI assumption has clear advantages over other FAAs in the literature. Finally, under the CPI assumption we study the calculation of some important actuarial quantities in life contingencies.
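One plausible reading of a cubic-polynomial fractional age assumption (the article's exact construction may differ) interpolates the life-table function l_x with a cubic through the four surrounding integer ages, from which fractional-age death probabilities and the force of mortality follow; the life-table values below are illustrative.

```python
import numpy as np

# fragment of an illustrative life table: l_x at integer ages 60..65
ages = np.arange(60, 66)
l = np.array([89000.0, 88100.0, 87050.0, 85830.0, 84420.0, 82800.0])

def l_cubic(age):
    """Interpolate l at a fractional age with a cubic through the four
    surrounding integer ages (one plausible CPI construction)."""
    i = int(np.clip(np.searchsorted(ages, int(np.floor(age))) - 1,
                    0, len(ages) - 4))
    t = ages[i:i + 4] - ages[i]                 # centre for numerical stability
    coeffs = np.polyfit(t, l[i:i + 4], deg=3)   # exact cubic through 4 points
    return np.polyval(coeffs, age - ages[i])

def frac_q(x, t):
    """t q_x, the probability of dying within t years, under the interpolant."""
    return 1.0 - l_cubic(x + t) / l_cubic(x)

def force_of_mortality(age, eps=1e-5):
    """mu(age) = -l'(age) / l(age), via a central finite difference."""
    dl = (l_cubic(age + eps) - l_cubic(age - eps)) / (2 * eps)
    return -dl / l_cubic(age)

print(frac_q(62, 0.5), force_of_mortality(62.5))
```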

19.
In Computer Experiments (CE), a careful selection of the design points is essential for predicting the system response at untried points, based on the values observed at tried points. In physical experiments, the protocol is based on Design of Experiments, a methodology whose basic principles are questioned in CE. When the responses of a CE are modeled as jointly Gaussian random variables with their covariance depending on the distance between points, the use of so-called space-filling designs (random designs, stratified designs and Latin Hypercube designs) is a common choice, because it is expected that the nearer the untried point is to the design points, the better the prediction. In this paper we focus on the class of Latin Hypercube (LH) designs. The behavior of various LH designs is examined under the Gaussian assumption with exponential correlation, with the aim of minimizing the total prediction error at the points of a regular lattice. In this special case, the problem reduces to an algebraic statistical model, which is solved using both symbolic algebraic software and statistical software. We provide a closed-form computation of the variance of the Gaussian linear predictor as a function of the design, in order to compare LH designs. In principle, the method applies to any number of factors and any number of levels, and also to classes of designs other than LHs. In our current implementation, the applicability is limited by the high computational complexity of the algorithms involved.
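A small numerical sketch of this comparison, assuming a zero-mean, unit-variance Gaussian process with a separable exponential correlation and simple kriging; the article's closed-form algebraic treatment via symbolic software is not reproduced here, and the correlation parameter, design size, and lattice are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """Random Latin hypercube design: n points in [0, 1]^d, one point in each
    of the n equal strata of every coordinate."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.random((n, d))) / n

def exp_corr(a, b, theta=3.0):
    """Separable exponential correlation: exp(-theta * L1 distance)."""
    return np.exp(-theta * np.abs(a[:, None, :] - b[None, :, :]).sum(axis=2))

def mean_pred_variance(design, grid, theta=3.0, nugget=1e-10):
    """Average simple-kriging prediction variance over the grid, for a
    zero-mean, unit-variance Gaussian process with exponential correlation.
    Smaller is better; used here only to compare candidate designs."""
    R = exp_corr(design, design, theta) + nugget * np.eye(len(design))
    r = exp_corr(grid, design, theta)
    var = 1.0 - np.einsum('ij,ij->i', r, np.linalg.solve(R, r.T).T)
    return var.mean()

# 5 x 5 regular lattice of prediction points in [0, 1]^2
g = (np.arange(5) + 0.5) / 5
grid = np.array([(a, b) for a in g for b in g])

lhd = latin_hypercube(10, 2, rng)
srs = rng.random((10, 2))                    # plain random design, for contrast
print(mean_pred_variance(lhd, grid), mean_pred_variance(srs, grid))
```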

20.
When comparing two experimental treatments with a placebo, we focus our attention on interval estimation of the proportion ratio (PR) of patient responses under a three-period crossover design. We propose a random effects exponential multiplicative risk model and derive asymptotic interval estimators in closed form for the PR between treatments and placebo. Using Monte Carlo simulations, we compare the performance of these interval estimators in a variety of situations. We use the data comparing two different doses of an analgesic with placebo for the relief of primary dysmenorrhea to illustrate the use of these interval estimators and the difference in estimates of the PR and odds ratio (OR) when the underlying relief rates are not small.
