Similar Documents
20 similar documents found (search time: 62 ms)
1.
An attempt to combine several optimality criteria simultaneously by using the techniques of nonlinear programming is demonstrated. Four constrained D- and G-optimality criteria are introduced, namely, D-restricted, Ds-restricted, A-restricted and E-restricted D- and G-optimality. The emphasis is particularly on polynomial regression. Examples for quadratic polynomial regression are investigated to illustrate the applicability of these constrained optimality criteria.
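As a point of reference for the unconstrained case, the classical D-optimal design for quadratic regression on [-1, 1] places weight 1/3 on each of {-1, 0, 1}. The sketch below (the plain D-criterion only, not the paper's constrained criteria; the grid competitor is an illustrative assumption) compares determinants of information matrices:

```python
import numpy as np

def info_matrix(points, weights):
    # Information matrix M = sum_i w_i f(x_i) f(x_i)^T
    # for the quadratic model f(x) = (1, x, x^2).
    M = np.zeros((3, 3))
    for x, w in zip(points, weights):
        f = np.array([1.0, x, x * x])
        M += w * np.outer(f, f)
    return M

# Classical D-optimal design on [-1, 1]: weight 1/3 on {-1, 0, 1};
# its determinant is known to equal 4/27.
M_opt = info_matrix([-1.0, 0.0, 1.0], [1 / 3, 1 / 3, 1 / 3])

# A naive competitor: uniform weight on a 5-point grid (assumption).
grid = np.linspace(-1, 1, 5)
M_grid = info_matrix(grid, np.full(5, 1 / 5))

print(np.linalg.det(M_opt), np.linalg.det(M_grid))
```

The D-optimal design wins by determinant (4/27 ≈ 0.148 versus ≈ 0.0875 for the grid), which is what a D-criterion comparison measures.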

2.
We consider the problem of estimating a vector interesting parameter in the presence of nuisance parameters through vector unbiased statistical estimation functions (USEFs). An extension of the Cramér-Rao inequality relevant to the present problem is obtained. Three possible optimality criteria in the class of regular vector USEFs are those based on (i) the non-negative definiteness of the difference of dispersion matrices, (ii) the trace of the dispersion matrix and (iii) the determinant of the dispersion matrix. We refer to these three criteria as M-optimality, T-optimality and D-optimality respectively. The equivalence of these three optimality criteria is established. By restricting the class of regular USEFs considered by Ferreira (1982), we study some interesting properties of the standardized USEFs and establish the essential uniqueness of the standardized M-optimal USEF in this restricted class. Finally, some illustrative examples are included.

3.
This paper provides methods for obtaining Bayesian D-optimal Accelerated Life Test (ALT) plans for series systems with independent exponential component lives under the Type-I censoring scheme. Two different Bayesian D-optimality design criteria are considered. For both criteria, optimal designs for a given number of experimental points are first found by solving a finite-dimensional constrained optimization problem. Next, the global optimality of such an ALT plan is ensured by applying the General Equivalence Theorem. A detailed sensitivity analysis is also carried out to investigate the effect of different planning inputs on the resulting optimal ALT plans. Furthermore, these Bayesian optimal plans are compared with the corresponding (frequentist) locally D-optimal ALT plans.

4.
We consider the problem of making statistical inference on unknown parameters of a lognormal distribution under the assumption that samples are progressively censored. The maximum likelihood estimates (MLEs) are obtained by using the expectation-maximization algorithm. The observed and expected Fisher information matrices are provided as well. Approximate MLEs of unknown parameters are also obtained. Bayes and generalized estimates are derived under squared error loss function. We compute these estimates using Lindley's method as well as importance sampling method. Highest posterior density interval and asymptotic interval estimates are constructed for unknown parameters. A simulation study is conducted to compare proposed estimates. Further, a data set is analysed for illustrative purposes. Finally, optimal progressive censoring plans are discussed under different optimality criteria and results are presented.

5.
We formulate in a reasonable sense a class of optimality functionals for comparing feasible statistical designs available in a given setup. It is desired that the optimality functionals reflect symmetric measures of the lack of information contained in the designs being compared. In view of this, Kiefer's (1975) universal optimality criterion is seen to rest on stringent conditions, some of which can be relaxed while preserving optimality (in an extended sense) of the so-called balanced designs.

6.
Maximum likelihood estimation of a mean and a covariance matrix whose structure is constrained only to general positive semi-definiteness is treated in this paper. Necessary and sufficient conditions for the local optimality of mean and covariance matrix estimates are given. Observations are assumed to be independent. When the observations are also assumed to be identically distributed, the optimality conditions are used to obtain the mean and covariance matrix solutions in closed form. For the nonidentically distributed observation case, a general numerical technique which integrates scoring and Newton's iterations to solve the optimality condition equations is presented, and convergence performance is examined.
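For the i.i.d. Gaussian case, the closed-form ML solution is the sample mean together with the covariance using divisor n (not n − 1). A minimal numerical sketch (the simulated data and its scaling are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
# Simulated i.i.d. observations (assumed data for illustration).
X = rng.standard_normal((n, p)) @ np.diag([1.0, 2.0, 0.5]) + np.array([1.0, -1.0, 0.0])

# Closed-form ML solution: sample mean, and covariance with divisor n.
mu_hat = X.mean(axis=0)
Sigma_hat = (X - mu_hat).T @ (X - mu_hat) / n

# Sigma_hat is positive semi-definite by construction,
# so the unconstrained MLE already satisfies the PSD constraint.
eigvals = np.linalg.eigvalsh(Sigma_hat)
print(mu_hat, eigvals.min())
```

This matches `np.cov(X, rowvar=False, bias=True)`; the ML divisor n is what distinguishes it from the usual unbiased estimator.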

7.
Econometric Reviews, 2013, 32(3): 215-228

Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically. This makes the test unfeasible. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover power and optimality of Bierens' original test. Bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap Nonlinear Least Squares Estimator under local alternatives.

8.
An experimental design is said to be Schur optimal if it is optimal with respect to the class of all Schur isotonic criteria, which includes Kiefer's criteria of Φp-optimality, distance optimality criteria and many others. In the paper we formulate an easily verifiable necessary and sufficient condition for Schur optimality in the set of all approximate designs of a linear regression experiment with uncorrelated errors. We also show that several common models admit a Schur optimal design, for example the trigonometric model, the first-degree model on the Euclidean ball, and Berman's model.

9.
To improve the out-of-sample performance of the portfolio, Lasso regularization is incorporated into the Mean Absolute Deviation (MAD)-based portfolio selection method. It is shown that such a portfolio selection problem can be reformulated as a constrained Least Absolute Deviation problem with linear equality constraints. Moreover, we propose a new descent algorithm based on the ideas of 'nonsmooth optimality conditions' and 'basis descent direction set'. The resulting MAD-Lasso method enjoys at least two advantages. First, it does not involve estimation of the covariance matrix, which is particularly difficult in high-dimensional settings. Second, sparsity is encouraged: assets with weights close to zero in Markowitz's portfolio are driven to zero automatically, which reduces the management cost of the portfolio. Extensive simulation and real-data examples indicate that when Lasso regularization is incorporated, the MAD portfolio selection method is consistently improved in terms of out-of-sample performance, as measured by Sharpe ratio and sparsity. Moreover, simulation results suggest that the proposed descent algorithm is more time-efficient than the interior point method and the ADMM algorithm.
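The penalized criterion itself is easy to write down. The function below is a hypothetical sketch of a MAD objective with an l1 (Lasso) penalty, not the authors' descent algorithm; the simulated return matrix `R`, the weights, and the penalty level `lam` are all assumptions for illustration:

```python
import numpy as np

def mad_lasso_objective(R, w, lam):
    # R: T x p matrix of asset returns; w: portfolio weight vector.
    # MAD term: mean absolute deviation of the portfolio return
    # around its mean, plus an l1 penalty on the weights.
    port = R @ w
    mad = np.mean(np.abs(port - port.mean()))
    return mad + lam * np.sum(np.abs(w))

rng = np.random.default_rng(1)
R = rng.standard_normal((250, 4)) * 0.01   # assumed daily returns
w_dense = np.array([0.25, 0.25, 0.25, 0.25])
w_sparse = np.array([0.5, 0.5, 0.0, 0.0])
print(mad_lasso_objective(R, w_dense, lam=0.01),
      mad_lasso_objective(R, w_sparse, lam=0.01))
```

Note that the MAD term needs no covariance matrix, which is the first advantage the abstract cites; the l1 term only differentiates portfolios once short positions (negative weights) are allowed.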

10.
This paper proposes an adaptive model selection criterion with a data-driven penalty term. We treat model selection as an equality-constrained minimization problem and develop an adaptive model selection procedure based on the Lagrange optimization method. In contrast to Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and most other existing criteria, this new criterion minimizes the model size and takes a measure of lack-of-fit as an adaptive penalty. Both theoretical results and simulations illustrate the power of this criterion with respect to consistency and pointwise asymptotic loss efficiency in the parametric and nonparametric cases.

11.
Chandrasekar and Kale (1984) considered the problem of estimating a vector interesting parameter in the presence of nuisance parameters through vector unbiased statistical estimation functions (USEFs) and obtained an extension of the Cramér-Rao inequality. Based on this result, three optimality criteria were proposed and their equivalence was established. In this paper, motivated by the uniformly minimum risk criterion (Zacks, 1971, p. 102) for estimators, we propose a new optimality criterion for vector USEFs in the nuisance parameter case and show that it is equivalent to the three existing criteria.

12.
In this paper, we investigate the problem of determining block designs which are optimal under type 1 optimality criteria within various classes of designs having v treatments arranged in b blocks of size k. Solutions are given to two optimization problems which are related to a general result obtained by Cheng (1978) and which are useful in this investigation. As one application of the solutions obtained, the definition of a regular graph design given in Mitchell and John (1977) is extended to that of a semi-regular graph design, and some sufficient conditions are derived for the existence of a semi-regular graph design which is optimal under a given type 1 criterion. A result is also given which shows how the sufficient conditions derived can be used to establish the optimality, under a specific type 1 criterion, of some particular types of semi-regular graph designs having both equal and unequal numbers of replicates. Finally, some sufficient conditions are obtained for the dual of an A- or D-optimal design to be A- or D-optimal within an appropriate class of dual designs.

13.
In clinical trials, several competing treatments are often carried out in the same trial period. The goal is to assess the performance of these treatments according to some optimality criterion and to minimize risks to the patients over the entire course of the study. For this, each arriving patient is allocated sequentially to one of the treatments according to a mechanism defined by the optimality criterion. In practice, different optimality criteria, or the same criterion with different regimes, sometimes need to be considered to assess the treatments in the same study, so that each mechanism is also evaluated during the trial. In this case, the question is how to allocate the treatments to the incoming patients so that the criteria/mechanisms of interest are assessed during the trial and the overall performance of the trial is optimized under the combined criteria or regimes. In this paper, we consider this problem by investigating a compound adaptive generalized Pólya urn design. Basic asymptotic properties of this design are also studied.
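A minimal sketch of urn-based adaptive allocation, using a simple randomized play-the-winner rule as a stand-in (the paper's compound generalized Pólya urn design is more general; the success probabilities and update rule below are assumptions for illustration):

```python
import random

def urn_trial(n_patients, success_prob, seed=0):
    # Randomized play-the-winner urn: draw a ball to pick a treatment;
    # a success adds a ball of the same colour, a failure adds one
    # ball of each competing colour, so allocation adapts to outcomes.
    rng = random.Random(seed)
    k = len(success_prob)
    urn = [1] * k            # one ball per treatment to start
    counts = [0] * k         # patients allocated to each treatment
    for _ in range(n_patients):
        arm = rng.choices(range(k), weights=urn)[0]
        counts[arm] += 1
        if rng.random() < success_prob[arm]:
            urn[arm] += 1
        else:
            for j in range(k):
                if j != arm:
                    urn[j] += 1
    return counts

counts = urn_trial(2000, [0.8, 0.4])  # assumed success probabilities
print(counts)
```

The better treatment accumulates balls and hence receives most patients, which is the risk-minimization mechanism the abstract describes; a compound design would combine several such allocation mechanisms in one trial.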

14.
The purpose of this article is two-fold. First, we find it very interesting to explore a notion of optimality of the customary Jensen bound among all Jensen-type bounds. Without this result, the customary Jensen bound stood alone simply as just another bound. The proposed notion and the associated optimality are important given that in some situations Jensen's inequality does leave us empty-handed.

When it comes to highlighting Jensen's inequality, unfortunately only a handful of nearly routine applications is recycled time after time. Such encounters rarely produce any excitement. This article may change that outlook given its second underlying purpose, which is to introduce a variety of unusual applications of Jensen's inequality. The collection of our important and useful applications and their derivations is new.
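The inequality itself, E[φ(X)] ≥ φ(E[X]) for convex φ, is easy to check numerically. A small sketch with φ(x) = x², where the Jensen gap is exactly the (biased) sample variance (the exponential sample is an assumption for illustration):

```python
import numpy as np

# Jensen's inequality for convex phi: E[phi(X)] >= phi(E[X]).
# With phi(x) = x**2 this reads E[X^2] >= (E[X])^2, i.e. Var(X) >= 0.
rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=10_000)

lhs = np.mean(x ** 2)      # sample version of E[phi(X)]
rhs = np.mean(x) ** 2      # phi applied to the sample mean
gap = lhs - rhs            # equals the biased sample variance
print(lhs, rhs, gap)
```

The gap being the variance is the most routine of the "recycled" applications the abstract alludes to; the article's point is that far less obvious choices of φ yield useful bounds.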

15.
The authors construct locally optimal designs for the proportional odds model for ordinal data. While they investigate the standard D-optimal design, they also investigate optimality criteria for the simultaneous estimation of multiple quantiles, namely DA-optimality and the omnibus criterion. The design of experiments for the simultaneous estimation of multiple quantiles is important in both toxic and effective dose studies in medicine. As with c-optimality in the binary response problem, the authors find that there are distinct phase changes when exploring extreme quantiles that require additional design points. The authors also investigate relative efficiencies of the criteria.

16.
In past studies various criteria have been proposed for evaluating the performance of a confidence set. However, each of these criteria often produces unsatisfactory results even for standard models such as the location model, the scale model and the multinormal model. In this article, we propose a new criterion such that the confidence set estimation procedure based on it leads to a desirable confidence set at least for the above models. The approach is based on a two-step improvement of the Neyman shortness. The first step is a theoretical improvement, referring to a proposal of Pratt; as a result, we obtain a solution to Pratt's paradox. In the second step, we adopt a kind of robust or minimax procedure without insisting on uniform optimality. In conclusion, it is shown that the procedure based on our criterion produces a desirable and acceptable confidence set.

17.
Let D be a saturated fractional factorial design of the general K1 × K2 × … × Kt factorial such that it consists of m distinct treatment combinations and is capable of providing an unbiased estimator of a subvector of m factorial parameters under the assumption that the remaining k - m (k = ∏ Ki) factorial parameters are negligible. Such a design does not provide an unbiased estimator of the variance σ². Suppose that D is an optimal design with respect to some optimality criterion (e.g. D-optimality, A-optimality or E-optimality) and it is desirable to augment D with c treatment combinations with the aim of estimating σ² unbiasedly. The problem then is how to select the c treatment combinations such that the augmented design retains its optimality property. This problem, in all its generality, is extremely complex. The objective of this paper is to provide some insight into the problem by giving a partial answer in the case of the 2^t factorial, using the D-optimality criterion.

18.
J. Gladitz & J. Pilz, Statistics, 2013, 47(3): 371-385
We consider the problem of optimal experimental design in random coefficient regression models with respect to a quadratic loss function. By applying Whittle's general equivalence theorem we obtain the structure of optimal designs. An algorithm is given which allows, under certain assumptions, the construction of the information matrix of an optimal design. Moreover, we give conditions for the equivalence of optimal designs with respect to optimality criteria analogous to the usual A-, D- and E-optimality.

19.
Motivated by the Basel Capital Accord Requirement (CAR), we analyze a risk control portfolio selection problem under exponential utility when a banker faces both Brownian and jump risks. The banker's risk process and the dynamics of the risky asset process are modeled as jump-diffusion processes. Assuming that the set of admissible trading strategies is closed, we study the terminal utility optimization problem via the backward stochastic differential equation (BSDE) under a risk regulation paradigm. We construct the BSDE by means of the martingale optimality principle, giving conditions for the corresponding generator to be well defined in order to derive bounds on the candidate optimal strategy. We then construct an internal model for the bank under the Basel III CAR, which is formulated from the total risk-weighted assets (TRWAs) and bank capital. The results obtained from this model can be adopted within the banking sector when setting up asset investment strategies and advanced risk management models, as advocated by the Basel III Accord.

20.
We focus on the problem of selection of a subset of the variables so as to preserve the multivariate data structure that a principal-components analysis of the initial variables would reveal. We propose a new method based on some adapted Gaussian graphical models. This method is then compared with those developed by Bonifas et al. (1984) and Krzanowski (1987a, b). It appears that the criteria for all methods consider the same correlation submatrices and often lead to similar results. The proposed approach offers some guidance as to the number of variables to be selected. In particular, Akaike's information criterion is used.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号