Similar Documents
20 similar documents found.
1.
We consider simulation-based methods for exploration and maximization of expected utility in sequential decision problems, focusing on problems that require backward induction with analytically intractable expected utility integrals at each stage. We propose to use forward simulation to approximate the integral expressions, together with a reduction of the allowable action space that avoids the rapid growth in the number of possible trajectories in the backward induction. The artificially reduced action space allows strategies to depend on the full history of earlier observations and decisions only indirectly, through a low-dimensional summary statistic. The proposed rule provides a finite-dimensional approximation to the unrestricted infinite-dimensional optimal decision rule. We illustrate the proposed approach with an application to an optimal stopping problem in a clinical trial.
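A minimal toy sketch of the general idea (not the authors' implementation or example): backward induction is carried out on a low-dimensional summary statistic, with the continuation value at each summary state approximated by forward simulation. The Beta-Bernoulli stopping problem, batch size, sampling cost and terminal utility below are all assumptions introduced only for illustration.

# Toy optimal stopping: observe batches of n Bernoulli outcomes for up to T stages,
# summary statistic = (stage, number of successes), utility of stopping = posterior mean,
# cost c per additional batch.  Continuation values are estimated by forward simulation.
import numpy as np

rng = np.random.default_rng(0)
T, n, c, n_sim = 3, 10, 0.02, 2000       # stages, batch size, cost per batch, forward draws

def stop_utility(s, m):
    """Utility of stopping after s successes in m observations (Beta(1,1) prior)."""
    return (1.0 + s) / (2.0 + m)

value = [dict() for _ in range(T + 1)]   # value[t][s] = estimated value of summary state (t, s)

# Terminal stage: stopping is forced.
for s in range(T * n + 1):
    value[T][s] = stop_utility(s, T * n)

# Backward induction over stages; forward simulation for the value of continuing.
for t in range(T - 1, -1, -1):
    for s in range(t * n + 1):                      # all reachable summary states after t batches
        a, b = 1 + s, 1 + t * n - s                 # posterior Beta parameters
        p_draw = rng.beta(a, b, size=n_sim)         # simulate the unknown success probability
        new_succ = rng.binomial(n, p_draw)          # simulate the next batch of outcomes
        cont = np.mean([value[t + 1][s + k] for k in new_succ]) - c
        value[t][s] = max(stop_utility(s, t * n), cont)

print("estimated value of the restricted optimal policy:", value[0][0])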

2.
Let X1,…,Xr−1, Xr, Xr+1,…,Xn be independent, continuous random variables such that Xi, i = 1,…,r, has distribution function F(x), and Xi, i = r+1,…,n, has distribution function F(x − Δ), with −∞ < Δ < ∞. When the integer r is unknown, this is referred to as a change-point problem with at most one change. The unknown parameter Δ represents the magnitude of the change and r is called the change point. In this paper we present a general review of several nonparametric approaches for making inferences about r and Δ.
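As one illustration of the nonparametric approach (a standard rank-based method, not a result quoted from this review), the change point can be estimated by comparing the observations before and after each candidate split with a Mann–Whitney-type statistic:

\[
U_{t,n} \;=\; \sum_{i=1}^{t}\sum_{j=t+1}^{n} \operatorname{sgn}(X_j - X_i),
\qquad
\hat r \;=\; \arg\max_{1 \le t < n} |U_{t,n}|,
\]

and a shift estimate \(\hat\Delta\) can then be taken, for example, as the median of the pairwise differences \(X_j - X_i\) with \(i \le \hat r < j\) (a Hodges–Lehmann-type estimate).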

3.
4.
The D-optimal minimax criterion is proposed to construct fractional factorial designs. The resulting designs are very efficient and robust against misspecification of the effects in the linear model. The criterion was first proposed by Wilmut & Zhou (2011); however, their work is limited to two-level factorial designs. In this paper we extend this criterion to designs with factors having any number of levels (including mixed levels) and explore several important properties of the criterion. Theoretical results are obtained for the construction of fractional factorial designs in general. This minimax criterion is not only scale invariant, but also invariant under level permutations. Moreover, it can be applied to any run size, which is an advantage over some other existing criteria. The Canadian Journal of Statistics 41: 325–340; 2013 © 2013 Statistical Society of Canada

5.
The performance of nonparametric function estimates often depends on the choice of design points. Based on the mean integrated squared error criterion, we propose a sequential design procedure that updates the model knowledge and the optimal design density sequentially. The methodology is developed under a general framework covering a wide range of nonparametric inference problems, such as conditional mean and variance functions, the conditional distribution function, the conditional quantile function in quantile regression, functional coefficients in varying coefficient models, and semiparametric inferences. In our empirical studies, nonparametric inference based on the proposed sequential design is more efficient than that based on the uniform design, and its performance is close to that of the true but unknown optimal design. The Canadian Journal of Statistics 40: 362–377; 2012 © 2012 Statistical Society of Canada

6.
In this article, we consider the problem of seeking locally optimal designs for nonlinear dose-response models with binary outcomes. Applying the theory of Tchebycheff Systems and other algebraic tools, we show that the locally D-, A-, and c-optimal designs for three binary dose-response models are minimally supported in finite, closed design intervals. The methods to obtain such designs are presented along with examples. The efficiencies of these designs are also discussed. The Canadian Journal of Statistics 46: 336–354; 2018 © 2018 Statistical Society of Canada
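For background (a generic instance of the setting; the three specific models treated in the paper are not reproduced here), consider a logistic dose-response model with success probability \(p(x,\theta) = 1/\{1 + e^{-(\alpha + \beta x)}\}\). For a design \(\xi\) placing weights \(w_i\) at doses \(x_i\), the information matrix and the locally D-optimal design at a best guess \(\theta_0 = (\alpha_0, \beta_0)\) are

\[
M(\xi, \theta_0) \;=\; \sum_i w_i \, p(x_i,\theta_0)\{1 - p(x_i,\theta_0)\}
\begin{pmatrix} 1 & x_i \\ x_i & x_i^2 \end{pmatrix},
\qquad
\xi^*_D \;=\; \arg\max_{\xi} \det M(\xi, \theta_0),
\]

and "minimally supported" means that the optimal design places mass on only as many dose levels as there are model parameters (here two).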

7.
In this paper, a Bayesian two-stage D–D-optimal design for mixture experimental models under model uncertainty is developed. A Bayesian D-optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second-stage design is then generated according to an optimality procedure that incorporates the improved model obtained from the first-stage data. The results show that a Bayesian two-stage D–D-optimal design for mixture experiments under model uncertainty is more efficient than both the Bayesian one-stage D-optimal design and the non-Bayesian one-stage D-optimal design in most situations. Furthermore, simulations are used to obtain a reasonable ratio of the sample sizes between the two stages.
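One standard way to write such a criterion (a generic form, not necessarily the exact specification used in the paper): with a linear model \(y = X\beta + \varepsilon\), \(\varepsilon \sim N(0,\sigma^2 I)\), and conjugate prior \(\beta \sim N(\beta_0, \sigma^2 R^{-1})\), the posterior covariance of \(\beta\) is \(\sigma^2 (X'X + R)^{-1}\), so minimizing its determinant at the first stage amounts to choosing the first-stage design matrix \(X_1\) to maximize

\[
\det\!\left(X_1'X_1 + R\right),
\]

while a natural second-stage criterion maximizes \(\det(X_1'X_1 + X_2'X_2 + \tilde R)\), with \(\tilde R\) reflecting the model information updated from the first-stage data.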

8.
There are several levels of sophistication when specifying the bandwidth matrix H to be used in a multivariate kernel density estimator: H can be taken as a positive multiple of the identity matrix, as a diagonal matrix with positive elements or, in its most general form, as a symmetric positive-definite matrix. In this paper, the author proposes a data-based method for choosing the smoothing parametrization to be used in the kernel density estimator. The procedure is fully illustrated by a simulation study and some real data examples. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
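A brief illustrative sketch of the three parametrizations mentioned above for a bivariate Gaussian kernel density estimate; the data, the evaluation point and the bandwidth values are assumptions made only for the example, and the author's data-based selector is not implemented here.

# Gaussian-kernel density estimate with a bandwidth matrix H of increasing generality.
import numpy as np
from scipy.stats import multivariate_normal

def kde(x, data, H):
    """Kernel density estimate at point x with bandwidth matrix H (d x d, positive definite)."""
    return np.mean([multivariate_normal.pdf(x, mean=xi, cov=H) for xi in data])

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=200)

H_scalar   = 0.3 ** 2 * np.eye(2)                  # positive multiple of the identity
H_diagonal = np.diag([0.25 ** 2, 0.40 ** 2])       # diagonal with positive elements
H_full     = np.array([[0.09, 0.05],               # general symmetric positive-definite matrix
                       [0.05, 0.16]])

x0 = np.array([0.5, 0.5])
for name, H in [("scalar", H_scalar), ("diagonal", H_diagonal), ("full", H_full)]:
    print(name, kde(x0, data, H))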

9.
It is shown that within the class of connected binary designs with arbitrary block sizes and arbitrary replications, only a symmetric balanced incomplete block design produces a completely symmetric information matrix for the treatment effects whenever the number of blocks is equal to the number of treatments and the number of experimental units is an integer multiple of the number of treatments. Such a design is known to be universally optimal.
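To make "completely symmetric" concrete (a standard calculation consistent with the claim above): for a balanced incomplete block design with parameters \((v, b, r, k, \lambda)\) and incidence matrix \(N\), the information (C-)matrix for treatment effects is

\[
C \;=\; rI - \tfrac{1}{k} N N'
\;=\; rI - \tfrac{1}{k}\{(r-\lambda)I + \lambda J\}
\;=\; \tfrac{\lambda v}{k}\left(I - \tfrac{1}{v} J\right),
\]

using \(\lambda(v-1) = r(k-1)\). That is, \(C = aI + bJ\) for scalars \(a, b\), which is the completely symmetric form; in the symmetric case \(b = v\) and \(r = k\).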

10.
The use of covariates in block designs is necessary when the covariates cannot be controlled like the blocking factor in the experiment. In this paper, we consider the situation where there is some flexibility in the selection of the values of the covariates. The choice of covariate values attaining minimum variance for the estimation of each of the parameters in a given block design has attracted attention in recent times. Optimum covariate designs in simple set-ups such as the completely randomised design (CRD), the randomised block design (RBD) and some series of balanced incomplete block designs (BIBD) have already been considered. In this paper, optimum covariate designs are considered for the more complex set-ups of different partially balanced incomplete block (PBIB) designs, which are popular among practitioners. The optimum covariate designs depend heavily on the methods of construction of the basic PBIB designs. Different combinatorial arrangements and tools, such as orthogonal arrays, Hadamard matrices and various matrix products (the Khatri–Rao and Kronecker products), have been conveniently used to construct optimum covariate designs with as many covariates as possible.

11.
Two sufficient conditions are given for an incomplete block design to be (M,S)-optimal. For binary designs the conditions are (i) that the elements in each row, excluding the diagonal element, of the association matrix differ by at most one, and (ii) that the off-diagonal elements of the block characteristic matrix differ by at most one. It is also shown how the conditions can be utilized for nonbinary designs and that for blocks of size two the sufficient condition in terms of the association matrix can be attained.
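For reference, the criterion in its standard two-step form (background, not a condition stated in the abstract): a design \(d^*\) with information matrix \(C_{d^*}\) is (M,S)-optimal in a class \(\mathcal{D}\) if it maximizes \(\operatorname{tr}(C_d)\) over \(d \in \mathcal{D}\) and, among the designs attaining that maximum, minimizes \(\operatorname{tr}(C_d^2)\):

\[
d^* \in \arg\min_{d \in \mathcal{M}} \operatorname{tr}\!\left(C_d^{2}\right),
\qquad
\mathcal{M} \;=\; \arg\max_{d \in \mathcal{D}} \operatorname{tr}(C_d).
\]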

12.
In this paper, we present a test of independence between the response variable, which can be discrete or continuous, and a continuous covariate after adjusting for heteroscedastic treatment effects. The method first augments each pair of the data for all treatments with a fixed number of nearest neighbours as pseudo-replicates. A test statistic is then constructed by taking the difference of two quadratic forms. The statistic is equivalent to the average lagged correlation between the response and nearest-neighbour local estimates of the conditional mean of the response given the covariate for each treatment group. This approach effectively eliminates the need to estimate the nonlinear regression function. The asymptotic distribution of the proposed test statistic is obtained under the null and local alternatives. Although using a fixed number of nearest neighbours poses significant difficulty for the inference compared with letting the number of nearest neighbours go to infinity, the parametric standardizing rate is obtained for our test statistic. Numerical studies show that the new test procedure has robust power to detect nonlinear dependency in the presence of outliers that might result from highly skewed distributions. The Canadian Journal of Statistics 38: 408–433; 2010 © 2010 Statistical Society of Canada

13.
In the usual two-way layout of ANOVA (interactions are admitted), let nij ≥ 1 be the number of observations for the factor-level combination (i, j). For testing the hypothesis that all main effects of the first factor vanish, numbers n1ij are given such that the power function of the F-test is uniformly maximized (U-optimality), if one considers only designs (nij) for which the row sums ni are prescribed. Furthermore, in the (larger) set of all designs for which the total number of observations is given, all D-optimum designs are constructed.

14.
The authors derive closed-form expressions for the full, profile, conditional and modified profile likelihood functions for a class of random growth parameter models they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional and modified profile likelihood functions are maximized over few parameters to yield a complete set of parameter estimates. In developing their random growth parameter models, the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way that gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

15.
When Shannon entropy is used as a criterion in the optimal design of experiments, advantage can be taken of the classical identity representing the joint entropy of parameters and observations as the sum of the marginal entropy of the observations and the preposterior conditional entropy of the parameters. Following previous work in which this idea was used in spatial sampling, the method is applied to standard parameterized Bayesian optimal experimental design. Under suitable conditions, which include non-linear as well as linear regression models, it is shown in a few steps that maximizing the marginal entropy of the sample is equivalent to minimizing the preposterior entropy, the usual Bayesian criterion, thus avoiding the use of conditional distributions. Using this marginal formulation, it is shown that under normality assumptions every standard model with a two-point prior distribution on the parameters gives an optimal design supported on a single point. Other results include a new asymptotic formula that applies as the error variance becomes large, and bounds on the support size.
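The identity underlying the argument can be written out as follows (a standard decomposition, consistent with the description above): with parameters \(\theta\), data \(y\) generated under design \(\xi\), and entropies taken with respect to the joint distribution of \((\theta, y)\),

\[
H(\theta, y \mid \xi)
\;=\; H(\theta) + \mathrm{E}_{\theta}\!\left[H(y \mid \theta, \xi)\right]
\;=\; H(y \mid \xi) + \mathrm{E}_{y}\!\left[H(\theta \mid y, \xi)\right].
\]

When the prior entropy \(H(\theta)\) and the expected error entropy \(\mathrm{E}_{\theta}[H(y \mid \theta, \xi)]\) do not depend on the design (as in regression with additive errors of fixed variance), the left-hand side is constant in \(\xi\), so maximizing the marginal entropy \(H(y \mid \xi)\) is equivalent to minimizing the preposterior expected posterior entropy \(\mathrm{E}_{y}[H(\theta \mid y, \xi)]\).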

16.
The evaluation of new processor designs is an important issue in electrical and computer engineering. Architects use simulations to evaluate designs and to understand trade-offs and interactions among design parameters. However, due to the lengthy simulation time and limited resources, it is often practically impossible to simulate a full factorial design space. Effective sampling methods and predictive models are required. In this paper, the authors propose an automated performance predictive approach which employs an adaptive sampling scheme that interactively works with the predictive model to select samples for simulation. These samples are then used to build Bayesian additive regression trees, which in turn are used to predict the whole design space. Both real data analysis and simulation studies show that the method is effective in that, though sampling at very few design points, it generates highly accurate predictions on the unsampled points. Furthermore, the proposed model provides quantitative interpretation tools with which investigators can efficiently tune design parameters in order to improve processor performance. The Canadian Journal of Statistics 38: 136–152; 2010 © 2010 Statistical Society of Canada
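An illustrative sketch of the sample-then-model loop described above, with a random-forest surrogate standing in for Bayesian additive regression trees (the authors' model) and between-tree spread used as a rough uncertainty score for choosing the next simulation; the toy simulator f(), the grid and all settings below are assumptions made only for the example.

# Adaptive sampling: fit a surrogate on the simulated points, then simulate next where
# the surrogate is least certain, and refit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

def f(x):
    """Stand-in for a slow architecture simulator; x is a vector of design parameters."""
    return np.sin(3 * x[0]) + x[1] ** 2 + 0.5 * x[0] * x[1]

# Full (enumerable) design space and a small initial sample.
grid = np.array([[a, b] for a in np.linspace(0, 1, 25) for b in np.linspace(0, 1, 25)])
idx = list(rng.choice(len(grid), size=10, replace=False))
y = [f(grid[i]) for i in idx]

for _ in range(5):                                    # adaptive rounds
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(grid[idx], y)
    per_tree = np.stack([t.predict(grid) for t in model.estimators_])
    score = per_tree.std(axis=0)                      # predictive spread as an uncertainty proxy
    score[idx] = -np.inf                              # never re-simulate a sampled point
    new = int(np.argmax(score))
    idx.append(new)
    y.append(f(grid[new]))

final = RandomForestRegressor(n_estimators=200, random_state=0).fit(grid[idx], y)
pred = final.predict(grid)                            # predictions over the whole design space
truth = np.array([f(g) for g in grid])
print(len(idx), "simulated points; max |error| on the grid:", np.max(np.abs(pred - truth)))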

17.
18.
19.
In this article we investigate the problem of ascertaining A- and D-optimal designs in a cubic regression model with random coefficients. Our interest lies in the estimation of either all the parameters or all of them except the intercept term. Assuming the variance ratios to be known, we tabulate D-optimal designs for various combinations of the variance ratios. A-optimality does not pose any new problem in the random-coefficients situation.

20.
The authors consider the problem of simultaneous transformation and variable selection for linear regression. They propose a fully Bayesian solution, which allows averaging over all models considered, including transformations of the response and predictors. The authors use the Box-Cox family of transformations to transform the response and each predictor. To deal with the change of scale induced by the transformations, the authors propose to focus on new quantities rather than the estimated regression coefficients. These quantities, referred to as generalized regression coefficients, have an interpretation similar to that of the usual regression coefficients on the original scale of the data, but do not depend on the transformations. This allows probabilistic statements about the size of the effect associated with each variable on the original scale of the data. In addition to variable and transformation selection, there is also uncertainty involved in the identification of outliers in regression. Thus, the authors also propose a more robust model to account for such outliers, based on a t-distribution with unknown degrees of freedom. Parameter estimation is carried out using an efficient Markov chain Monte Carlo algorithm, which permits moves around the space of all possible models. Using three real data sets and a simulation study, the authors show that there is considerable uncertainty about variable selection, choice of transformation, and outlier identification, and that there is an advantage in dealing with all three simultaneously. The Canadian Journal of Statistics 37: 361–380; 2009 © 2009 Statistical Society of Canada
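For reference, the Box-Cox family applied to the response and to each (positive) predictor is

\[
z^{(\lambda)} \;=\;
\begin{cases}
\dfrac{z^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[6pt]
\log z, & \lambda = 0,
\end{cases}
\]

so the model space averaged over includes the variable-inclusion indicators and the transformation parameters \(\lambda\) (and, in the robust version, the unknown t degrees of freedom); the generalized regression coefficients are then defined so that effect sizes are summarized on the original scale of \(z\) rather than on the \(\lambda\)-dependent transformed scale.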
