Similar Documents
20 similar documents found.
1.
In many experiments, not all explanatory variables can be controlled. When the units arise sequentially, different approaches may be used. The authors study a natural sequential procedure for “marginally restricted” D‐optimal designs. They assume that one set of explanatory variables (x1) is observed sequentially, and that the experimenter responds by choosing an appropriate value of the explanatory variable x2. In order to solve the sequential problem a priori, the authors consider the problem of constructing optimal designs with a prior marginal distribution for x1. This eliminates the influence of units already observed on the next unit to be designed. They give explicit designs for various cases in which the mean response follows a linear regression model; they also consider a case study with a nonlinear logistic response. They find that the optimal strategy often consists of randomizing the assignment of the values of x2.
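For background, the D-optimality criterion these designs target can be sketched as follows (the notation is assumed here, not taken from the abstract):

```latex
% D-optimality: choose the design measure \xi maximizing the log-determinant
% of the information matrix built from the regression functions f(x):
M(\xi) = \int f(x)\, f(x)^{\top}\, \xi(dx), \qquad
\xi^{*} = \arg\max_{\xi}\ \log\det M(\xi).
% Marginally restricted variant (sketch): fix the marginal \mu for x_1 and
% optimize only the conditional design, so that
% \xi(dx_1, dx_2) = \mu(dx_1)\, \xi(dx_2 \mid x_1).
```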

2.
In a clinical trial, sometimes it is desirable to allocate as many patients as possible to the best treatment, in particular, when a trial for a rare disease may contain a considerable portion of the whole target population. The Gittins index rule is a powerful tool for sequentially allocating patients to the best treatment based on the responses of patients already treated. However, its application in clinical trials is limited due to technical complexity and lack of randomness. Thompson sampling is an appealing approach, since it strikes a compromise between optimal treatment allocation and randomness, with some desirable optimal properties in the machine learning context. However, in clinical trial settings, multiple simulation studies have shown disappointing results with Thompson samplers. We consider how to improve the short-run performance of Thompson sampling and propose a novel acceleration approach. This approach can also be applied to situations in which patients can only be allocated by batch, and it is very easy to implement without using complex algorithms. A simulation study showed that this approach could improve the performance of Thompson sampling in terms of average total response rate. An application to a redesign of a preference trial to maximize patient satisfaction is also presented.
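For orientation, here is a minimal sketch of standard Beta-Bernoulli Thompson sampling for a two-arm trial. The authors' acceleration approach is not reproduced here; the uniform Beta(1, 1) prior, the true response rates 0.3/0.5, and the 200-patient horizon are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_allocate(successes, failures, rng):
    """Draw one posterior sample per arm (Beta(1+s, 1+f) under a uniform
    prior) and allocate the next patient to the arm with the largest draw."""
    draws = rng.beta(1 + successes, 1 + failures)
    return int(np.argmax(draws))

# Hypothetical two-arm trial with true response rates 0.3 and 0.5.
true_p = np.array([0.3, 0.5])
s = np.zeros(2)
f = np.zeros(2)
for _ in range(200):
    arm = thompson_allocate(s, f, rng)
    if rng.random() < true_p[arm]:
        s[arm] += 1
    else:
        f[arm] += 1

print("allocations per arm:", s + f)           # most patients go to arm 1
print("average total response rate:", s.sum() / (s + f).sum())
```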

3.
This paper describes the author's research connecting the empirical analysis of treatment response with the normative analysis of treatment choice under ambiguity. Imagine a planner who must choose a treatment rule assigning a treatment to each member of a heterogeneous population of interest. The planner observes certain covariates for each person. Each member of the population has a response function mapping treatments into a real-valued outcome of interest. Suppose that the planner wants to choose a treatment rule that maximizes the population mean outcome. An optimal rule assigns to each member of the population a treatment that maximizes mean outcome conditional on the person's observed covariates. However, identification problems in the empirical analysis of treatment response commonly prevent planners from knowing the conditional mean outcomes associated with alternative treatments; hence planners commonly face problems of treatment choice under ambiguity. The research surveyed here characterizes this ambiguity in practical settings where the planner may be able to bound but not identify the relevant conditional mean outcomes. The statistical problem of treatment choice using finite-sample data is discussed as well.

4.
The procedure of steepest ascent consists of performing a sequence of sets of trials. Each set of trials is obtained as a result of proceeding sequentially along the path of maximum increase in response. Until now there has been no formal stopping rule. When response values are subject to random error, the decision to stop can be premature due to a “false” drop in the observed response.

A new stopping rule procedure for steepest ascent is introduced that takes into account the random error variation in response values. The new procedure protects against taking too many observations when the true mean response is decreasing; it also protects against stopping prematurely when the true mean response is increasing. A numerical example is given which illustrates the method.
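The following is an illustrative sketch of a noise-aware stopping decision along the ascent path. This is not the authors' rule: the known error SD, the quadratic true response, and the 2-sigma guard are assumptions made only for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_mean(t):
    # Hypothetical true response along the steepest-ascent path:
    # rises to a maximum at t = 5 and then declines.
    return 10.0 - 0.4 * (t - 5.0) ** 2

sigma = 1.0   # known error SD (an assumption for this sketch)
k = 2.0       # guard factor: a naive rule (k = 0) stops at the first drop
best, best_t = -np.inf, None
for t in range(16):
    y = true_mean(t) + rng.normal(0.0, sigma)
    if y > best:
        best, best_t = y, t
    elif best - y > k * sigma * np.sqrt(2.0):
        # The drop exceeds what noise alone plausibly explains: stop.
        print(f"stop at step {t}; best observed response was at step {best_t}")
        break
```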

5.
In this article, we consider the problem of sequentially estimating the mean of a Poisson distribution under LINEX (linear exponential) loss function and fixed cost per observation within a Bayesian framework. An asymptotically pointwise optimal rule with a prior distribution is proposed and shown to be asymptotically optimal for arbitrary priors. The proposed asymptotically pointwise optimal rule is illustrated using a real data set.
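For reference, the LINEX loss referred to here can be written as below, together with its standard Bayes estimator; the shape parameter a is generic, not a value from the paper:

```latex
% LINEX (linear exponential) loss for an estimate \hat\theta of \theta,
% with shape parameter a \neq 0 (a > 0 penalizes overestimation more):
L(\theta, \hat\theta) = e^{a(\hat\theta - \theta)} - a(\hat\theta - \theta) - 1.
% Under this loss the Bayes estimator is
% \hat\theta_B = -\tfrac{1}{a} \log E\!\left[ e^{-a\theta} \mid \text{data} \right].
```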

6.
In the case where non-experimental data are available from an industrial process and a directed graph describing how various factors affect a response variable is known from a substantive understanding of the process, we consider the problem of conducting a control plan involving multiple treatment variables in order to bring a response variable close to a target value while reducing its variation. Using statistical causal analysis with linear (recursive and non-recursive) structural equation models, we configure an optimal control plan involving multiple treatment variables through causal parameters. Based on this formulation, we clarify the causal mechanism by which the variance of a response variable changes when the control plan is conducted. The results enable us to evaluate the effect of a control plan on the variance of a response variable from non-experimental data and provide a new application of linear structural equation models to engineering science.
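A minimal numerical sketch of the variance-reduction idea in a hypothetical two-variable linear structural equation model; the coefficients and the control law are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical linear structural equations (not the paper's example):
#   x1 ~ N(0, 1)                (uncontrollable upstream factor)
#   y  = b1*x1 + b2*x2 + e,     e ~ N(0, 0.5^2)
b1, b2 = 1.5, 2.0
x1 = rng.normal(0.0, 1.0, n)
e = rng.normal(0.0, 0.5, n)

# No control plan: the treatment variable x2 is held at a constant.
y_fixed = b1 * x1 + b2 * 0.0 + e

# Control plan: set x2 = -(b1/b2) * x1 to cancel x1's effect on y.
x2 = -(b1 / b2) * x1
y_controlled = b1 * x1 + b2 * x2 + e

print("Var(y), x2 fixed:      ", round(y_fixed.var(), 3))       # ~ b1^2 + 0.25
print("Var(y), x2 controlled: ", round(y_controlled.var(), 3))  # ~ 0.25
```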

7.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient's response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.

8.
We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables, we consider the thresholded independence rule. An upper bound on the classification error is established that is tailored to a mean value of interest in biological applications.
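A minimal sketch of a thresholded independence (diagonal discriminant) rule in the 'large p small n' setting follows; the threshold value and the data-generating model are assumptions, and the details differ from the paper:

```python
import numpy as np

def thresholded_independence_rule(X1, X2, x_new, thresh=2.0):
    """Diagonal (independence) discriminant rule with feature thresholding:
    keep only coordinates whose two-sample t-statistic exceeds `thresh`,
    then classify by the sign of the diagonal LDA score. A sketch of the
    general idea; details differ from the paper."""
    m1, m2 = X1.mean(0), X2.mean(0)
    n1, n2 = len(X1), len(X2)
    # Pooled per-coordinate variances (the rule ignores correlations).
    s2 = ((n1 - 1) * X1.var(0, ddof=1) + (n2 - 1) * X2.var(0, ddof=1)) / (n1 + n2 - 2)
    t = np.abs(m1 - m2) / np.sqrt(s2 * (1 / n1 + 1 / n2))
    keep = t > thresh
    score = ((x_new - (m1 + m2) / 2) * (m1 - m2) / s2)[keep].sum()
    return 1 if score > 0 else 2

rng = np.random.default_rng(3)
p, n = 500, 20                    # 'large p, small n'
mu = np.zeros(p); mu[:10] = 1.0   # only 10 informative coordinates
X1 = rng.normal(mu, 1.0, (n, p))
X2 = rng.normal(0.0, 1.0, (n, p))
print(thresholded_independence_rule(X1, X2, rng.normal(mu, 1.0, p)))  # expect 1
```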

9.
In real problems, many cases involve correlated quality characteristics, so multiple-response optimization is more realistic when the correlation structure of the responses can be taken into account. In this study we propose a new method that uses the multivariate normal probability to find the optimal treatment in an experimental design. Moreover, a heuristic method is used to search for better factor levels among all possible combinations in designs with a large number of controllable factors and levels. Simulated numerical examples and a real case are studied with the proposed approach, and comparison of the results with previous methods shows the efficiency of the proposed method.
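One way to make the idea concrete: score a candidate treatment by the multivariate normal probability that all correlated responses fall within their specification limits, here estimated by Monte Carlo under a hypothetical fitted mean and covariance (the numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical setting: at a given treatment, the two responses are modeled
# as bivariate normal with the fitted mean and covariance below; the
# treatment is scored by the probability that both responses land inside
# their specification limits.
mean = np.array([50.0, 30.0])
cov = np.array([[4.0, 2.4],
                [2.4, 9.0]])        # correlated quality characteristics
lower = np.array([47.0, 25.0])
upper = np.array([53.0, 35.0])

rng = np.random.default_rng(4)
draws = multivariate_normal(mean, cov).rvs(size=200_000, random_state=rng)
inside = np.all((draws >= lower) & (draws <= upper), axis=1)
print("P(all responses within spec):", inside.mean())
```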

10.
In many practical situations, a statistical practitioner often faces a problem of classifying an object from one of the segmented (or screened) populations where the segmentation was conducted by a set of screening variables. This paper addresses this problem, proposing and studying yet another optimal rule for classification with segmented populations. A class of q-dimensional rectangle-screened elliptically contoured (RSEC) distributions is considered for flexibly modeling the segmented populations. Based on the properties of the RSEC distributions, a parametric procedure for the segmented classification analysis (SCA) is proposed. This includes motivation for the SCA as well as some theoretical propositions regarding its optimal rule and properties. These properties allow us to establish other important results, which include an efficient estimation of the rule by the Monte Carlo expectation–conditional maximization algorithm and an optimal variable selection procedure. Two numerical examples, a simulation study and a real-data application, are also provided to advocate the SCA procedure.

11.
Whitening, or sphering, is a common preprocessing step in statistical analysis to transform random variables to orthogonality. However, due to rotational freedom there are infinitely many possible whitening procedures. Consequently, there is a diverse range of sphering methods in use, for example, based on principal component analysis (PCA), Cholesky matrix decomposition, and zero-phase component analysis (ZCA), among others. Here, we provide an overview of the underlying theory and discuss five natural whitening procedures. Subsequently, we demonstrate that investigating the cross-covariance and the cross-correlation matrix between sphered and original variables allows us to break the rotational invariance and to identify optimal whitening transformations. As a result we recommend two particular approaches: ZCA-cor whitening to produce sphered variables that are maximally similar to the original variables, and PCA-cor whitening to obtain sphered variables that maximally compress the original variables.
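A compact sketch of covariance-based PCA and ZCA whitening; the paper's recommended ZCA-cor and PCA-cor variants, which operate on the correlation matrix instead, are not shown here:

```python
import numpy as np

def whiten(X, method="zca"):
    """Whiten rows of X (n samples x p variables). PCA whitening uses
    W = Lambda^{-1/2} U^T; ZCA ('zero-phase') uses W = U Lambda^{-1/2} U^T,
    the symmetric inverse square root of the covariance, which keeps the
    sphered variables maximally similar to the originals."""
    Xc = X - X.mean(0)
    cov = np.cov(Xc, rowvar=False)
    vals, U = np.linalg.eigh(cov)
    if method == "zca":
        W = U @ np.diag(vals ** -0.5) @ U.T   # symmetric inverse sqrt
    else:                                     # "pca"
        W = np.diag(vals ** -0.5) @ U.T
    return Xc @ W.T

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=5_000)
Z = whiten(X, "zca")
print(np.cov(Z, rowvar=False).round(3))       # ~ identity matrix
```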

12.
In recent years, adaptive designs have become increasingly popular in the context of clinical trials. The purpose of the present work is to provide a sequential two-treatment allocation rule for use when the response variables are continuous. The rule is ethical and, depending upon the nature of the distribution of the study variables, sometimes optimal. We examine the various properties of the rule.

13.
In a response-adaptive design, we review and update the trial on the basis of outcomes in order to achieve a specific goal. Response-adaptive designs for clinical trials are usually constructed to achieve a single objective. In this paper, we develop a new adaptive allocation rule that improves current strategies for building response-adaptive designs in order to construct multiple-objective repeated measurement designs. The new rule is designed to increase estimation precision and treatment benefit by assigning more patients to the better treatment sequence. We demonstrate that designs constructed under the proposed allocation rule can be nearly as efficient as fixed optimal designs in terms of mean squared error, while leading to improved patient care.

14.
Heterogeneity is an enormously complex problem because there are so many dimensions and variables that can be considered when assessing which ones may influence an efficacy or safety outcome for an individual patient. This is difficult in randomized controlled trials and even more so in observational settings. An alternative approach is presented in which the individual patient becomes the “subgroup,” and similar patients are identified in the clinical trial database or electronic medical record that can be used to predict how that individual patient may respond to treatment.

15.
We consider response-adaptive designs when the binary response may be misclassified, and extend relevant results in the literature. We derive the optimal allocations under various objectives and examine the relationship between the power of the statistical test and the variability of treatment allocation. Asymptotically best response-adaptive randomization procedures and the effects of misclassification on the optimal allocations are investigated. A real-life clinical trial is also discussed to illustrate the proposed approach.

16.
In longitudinal studies, missing data are the rule, not the exception. We consider the analysis of longitudinal binary data with non-monotone missingness that is thought to be non-ignorable. In this setting a full likelihood approach is complicated algebraically and can be computationally prohibitive when there are many measurement occasions. We propose a 'protective' estimator that assumes that the probability that a response is missing at any occasion depends, in a completely unspecified way, on the value of that variable alone. Relying on this 'protectiveness' assumption, we describe a pseudolikelihood estimator of the regression parameters under non-ignorable missingness, without having to model the missing data mechanism directly. The method proposed is applied to CD4 cell count data from two longitudinal clinical trials of patients infected with the human immunodeficiency virus.

17.
Chia-Chen Yang, Statistics, 2015, 49(3): 549–563.
In this paper, the problem of sequentially estimating the mean of the exponential distribution with relative linear exponential loss and fixed cost for each observation is considered within the Bayesian framework. An optimal procedure with a deterministic stopping rule is derived. Since the corresponding value of the optimal deterministic stopping rule cannot be obtained directly, an approximate optimal deterministic stopping rule and an asymptotically pointwise optimal rule are proposed. In addition, we propose a robust procedure with a deterministic stopping rule, which does not depend on the parameters of the prior distribution. All of the proposed procedures are shown to be asymptotically optimal. Some numerical studies are conducted to investigate the performances of the proposed procedures. A real data set is provided to illustrate the use of the proposed procedures.

18.
In this article, we consider the problem of testing hypotheses on mean vectors in the multiple-sample problem when the number of observations is smaller than the number of variables. First, we propose an independence rule test (IRT) to deal with high-dimensional effects. The asymptotic distributions of the IRT under the null hypothesis as well as under the alternative are established when both the dimension and the sample size go to infinity. Next, using the derived asymptotic power of the IRT, we propose an adaptive independence rule test (AIRT) that is particularly designed for testing against sparse alternatives. Our AIRT is novel in that it can effectively pick out a few relevant features and reduce the effect of noise accumulation. Real data analysis and Monte Carlo simulations are used to illustrate the proposed methods.

19.
Optimal design theory deals with the assessment of the optimal joint distribution of all independent variables prior to data collection. In many practical situations, however, covariates are involved for which the distribution is not previously determined. The optimal design problem may then be reformulated in terms of finding the optimal marginal distribution for a specific set of variables. In general, the optimal solution may depend on the unknown (conditional) distribution of the covariates. This article discusses the D_A-maximin procedure to account for the uncertain distribution of the covariates. Sufficient conditions are given under which the uniform design of a subset of independent discrete variables is D_A-maximin. The sufficient conditions are formulated for Generalized Linear Mixed Models with an arbitrary number of quantitative and qualitative independent variables and random effects.

20.
In this paper, we consider the problem of empirical choice of optimal block sizes for block bootstrap estimation of population parameters. We suggest a nonparametric plug-in principle that can be used for estimating ‘mean squared error’-optimal smoothing parameters in general curve estimation problems, and establish its validity for estimating optimal block sizes in various block bootstrap estimation problems. A key feature of the proposed plug-in rule is that it can be applied without explicit analytical expressions for the constants that appear in the leading terms of the optimal block lengths. Furthermore, we also discuss the computational efficiency of the method and explore its finite-sample properties through a simulation study.
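A minimal moving-block bootstrap sketch showing how the estimate depends on the block length; the paper's nonparametric plug-in selector for the optimal block size is not reproduced, and the AR(1) toy series and the block lengths tried are assumptions:

```python
import numpy as np

def moving_block_bootstrap_var(x, block_len, n_boot=2_000, rng=None):
    """Moving-block bootstrap estimate of Var(sample mean) for a weakly
    dependent series x, at a given block length."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)   # all overlapping block starts
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(starts, size=n_blocks)
        series = np.concatenate([x[s:s + block_len] for s in idx])[:n]
        means[b] = series.mean()
    return means.var(ddof=1)

# AR(1) toy series; vary block_len to see the sensitivity the plug-in
# rule is designed to resolve.
rng = np.random.default_rng(6)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()
for L in (1, 5, 20):
    print(f"block length {L:2d}: Var estimate "
          f"{moving_block_bootstrap_var(x, L, rng=rng):.5f}")
```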
