Similar Articles
20 similar articles found (search time: 31 ms)
1.
A loss function proposed by Wasan (1970) is well suited as a measure of inaccuracy for an estimator of a scale parameter of a distribution defined on R+ = (0, ∞). We refer to this loss function as the K-loss function. A relationship between the K-loss and squared-error loss functions is discussed, and an optimal estimator for a scale parameter with known coefficient of variation under the K-loss function is presented.
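As a rough illustration of why a scale-invariant loss suits scale parameters, the sketch below contrasts squared error with a K-loss of the form L(d, θ) = (d − θ)²/(dθ). That functional form is an assumption made here for illustration only; consult Wasan (1970) for the exact definition.

```python
# Hypothetical sketch of a scale-invariant "K-loss" versus squared error.
# The form L(d, theta) = (d - theta)**2 / (d * theta) is an assumption for
# illustration, not taken from the abstract.

def k_loss(d, theta):
    """Assumed K-loss: penalizes relative, not absolute, deviation."""
    return (d - theta) ** 2 / (d * theta)

def squared_error(d, theta):
    return (d - theta) ** 2

# Squared error treats (d, theta) = (2, 1) and (101, 100) identically,
# while the assumed K-loss weights the relatively larger first error more.
print(squared_error(2, 1), squared_error(101, 100))  # 1 1
print(k_loss(2, 1) > k_loss(101, 100))               # True
```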

2.
The existence of values of the ridge parameter for which ridge regression is preferable to OLS by the Pitman nearness criterion, under both quadratic loss and Fisher's loss, is shown. Preference regions of the two estimators under these loss functions are found. An upper bound for the value of Pitman's measure of closeness, independent of a deterministic or stochastic choice of the ridge parameter, is given.
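Pitman nearness under quadratic loss can be estimated by simulation: the sketch below compares ridge and OLS on a near-collinear design. The design matrix, true coefficients, and ridge parameter are all hypothetical choices for illustration, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical collinear design; beta and the ridge parameter k are
# illustrative choices, not values from the article.
n, p, k = 50, 3, 1.0
beta = np.array([1.0, 2.0, -1.0])
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=n)  # near-collinearity

wins = 0
trials = 500
for _ in range(trials):
    y = X @ beta + rng.normal(size=n)
    ols = np.linalg.solve(X.T @ X, X.T @ y)
    ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    # Pitman nearness under quadratic loss: which estimator lands closer?
    if np.sum((ridge - beta) ** 2) < np.sum((ols - beta) ** 2):
        wins += 1

# Estimated P(ridge closer to beta than OLS) for this design and this k.
print(wins / trials)
```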

3.
This paper deals with the problem of simultaneously estimating the parameters (cell probabilities) of m ≥ 2 independent multinomial distributions with respect to a quadratic loss function. An empirical Bayes estimator is proposed which is shown to have smaller risk than the maximum likelihood estimator for sufficiently large values of mq, where q is a measure of the average diversity of the given multinomial populations. Some numerical results are given on the performance of the proposed estimator.

4.
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
This article considers penalized empirical loss minimization of convex loss functions with unknown target functions. Using the elastic net penalty, of which the Least Absolute Shrinkage and Selection Operator (Lasso) is a special case, we establish a finite sample oracle inequality which bounds the loss of our estimator from above with high probability. If the unknown target is linear, this inequality also provides an upper bound of the estimation error of the estimated parameter vector. Next, we use the non-asymptotic results to show that the excess loss of our estimator is asymptotically of the same order as that of the oracle. If the target is linear, we give sufficient conditions for consistency of the estimated parameter vector. We briefly discuss how a thresholded version of our estimator can be used to perform consistent variable selection. We give two examples of loss functions covered by our framework.
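For the special case of squared loss with a linear target, the elastic-net objective can be minimized with a few lines of proximal gradient descent. The sketch below is a minimal illustration of that special case, with illustrative penalty levels; the article's estimator and theory are far more general.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam1=0.1, lam2=0.1, iters=2000):
    """Proximal-gradient (ISTA) minimization of the elastic-net objective
    (1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2.
    Penalty levels are illustrative choices, not from the article."""
    n, p = X.shape
    # Step size 1/L, with L the Lipschitz constant of the smooth part.
    step = 1.0 / (np.linalg.eigvalsh(X.T @ X / n)[-1] + lam2)
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n + lam2 * b
        b = soft_threshold(b - step * grad, step * lam1)
    return b

rng = np.random.default_rng(1)
n, p = 100, 10
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]   # sparse linear target
X = rng.normal(size=(n, p))
y = X @ beta + 0.5 * rng.normal(size=n)

b_hat = elastic_net(X, y)
print(np.round(b_hat, 2))  # large coefficients recovered, rest shrunk near 0
```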

5.
Loss functions express the loss to society, incurred through the use of a product, in monetary units. Underlying this concept is the notion that any deviation from target of any product characteristic implies a degradation in the product performance and hence a loss. Spiring (1993), in response to criticisms of the quadratic loss function, developed the reflected normal loss function, which is based on the normal density function. We give some modifications of these loss functions to simplify their application and provide a framework for the reflected normal loss function that accommodates a broader class of symmetric loss situations. These modifications also facilitate the unification of both of these loss functions and their comparison through expected loss. Finally, we give a simple method for determining the parameters of the modified reflected normal loss function based on loss information for multiple values of the product characteristic, and an example to illustrate the flexibility of the proposed model and the determination of its parameters.
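A minimal sketch of the two loss shapes being compared, using the parameterization usually attributed to Spiring (1993); the scaling conventions here are illustrative and the article's modified versions differ.

```python
import math

def quadratic_loss(y, target, k=1.0):
    """Classic Taguchi quadratic loss: unbounded in the deviation."""
    return k * (y - target) ** 2

def reflected_normal_loss(y, target, max_loss=1.0, gamma=1.0):
    """Reflected normal loss in the form usually attributed to Spiring
    (1993): bounded above by max_loss, with gamma controlling how fast
    the loss saturates.  This parameterization is illustrative."""
    return max_loss * (1.0 - math.exp(-((y - target) ** 2) / (2.0 * gamma ** 2)))

# The reflected normal loss saturates at max_loss instead of growing
# without bound, which answers a standard criticism of quadratic loss.
print(round(reflected_normal_loss(0.5, 0.0), 3))   # 0.118
print(round(reflected_normal_loss(10.0, 0.0), 3))  # 1.0: saturated
print(quadratic_loss(10.0, 0.0))                   # 100.0: unbounded
```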

6.
In this paper we seek designs and estimators which are optimal in some sense for multivariate linear regression on cubes and simplices when the true regression function is unknown. More precisely, we assume that the unknown true regression function is the sum of a linear part plus some contamination orthogonal to the set of all linear functions in the L2 norm with respect to Lebesgue measure. The contamination is assumed bounded in absolute value and it is shown that the usual designs for multivariate linear regression on cubes and simplices and the usual least squares estimators minimize the supremum over all possible contaminations of the expected mean square error. Additional results for extrapolation and interpolation, among other things, are discussed. For suitable loss functions optimal designs are found to have support on the extreme points of our design space.

7.
Recent work has emphasized the importance of evaluating estimates of a statistical functional (such as a conditional mean, quantile, or distribution) using a loss function that is consistent for the functional of interest, of which there is an infinite number. If forecasters all use correctly specified models free from estimation error, and if the information sets of competing forecasters are nested, then the ranking induced by a single consistent loss function is sufficient for the ranking by any consistent loss function. This article shows, via analytical results and realistic simulation-based analyses, that the presence of misspecified models, parameter estimation error, or nonnested information sets, leads generally to sensitivity to the choice of (consistent) loss function. Thus, rather than merely specifying the target functional, which narrows the set of relevant loss functions only to the class of loss functions consistent for that functional, forecast consumers or survey designers should specify the single specific loss function that will be used to evaluate forecasts. An application to survey forecasts of U.S. inflation illustrates the results.
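A minimal numeric check of what "consistent for the functional" means when the functional is the mean: every Bregman loss is minimized in expectation at the true mean. The distribution and the second Bregman loss below are illustrative choices, not the article's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(2.0, 1.0, size=100_000)   # true mean = 2.0

def squared_error(x, y):
    return (y - x) ** 2

def exp_bregman(x, y):
    """Another Bregman loss, generated by phi(t) = exp(t); it is also
    consistent for the mean.  Chosen here purely for illustration."""
    return np.exp(y) - np.exp(x) - np.exp(x) * (y - x)

grid = np.linspace(1.0, 3.0, 201)
best_se = grid[int(np.argmin([squared_error(x, y).mean() for x in grid]))]
best_eb = grid[int(np.argmin([exp_bregman(x, y).mean() for x in grid]))]

# Both consistent losses are minimized (up to simulation noise) at the
# true mean -- the sense in which they agree absent estimation error.
print(best_se, best_eb)
```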

8.
In this article Lindley's (1956) measure of average information is used to measure the loss of information due to the unavailability of a set of observations in an experiment. This measure of loss of information may be used to detect a set of most informative observations in a given design.

9.
This article introduces and discusses a new measure of the relative economic affluence (REA) between income distributions with different means. The REA measure D is applied to the U.S. white and black household income distributions of 1967 and 1979. The measure D shows that the REA of the white households with respect to the black households decreased from 1967 to 1979. This conclusion contrasts with those obtained by applications of distance or quasi-distance functions. It is shown in this study that REA measures and distance functions address different and relevant issues. An REA measure deals with the relation “more affluent than” and defines a partial strict ordering over the set of pairs of income distributions—that is, the relation is asymmetric and transitive—whereas a distance function accounts for the dissimilarity between distributions without imposing an ordering relation and hence fulfills the symmetry property.

10.
Testing for parametric structure is an important issue in non-parametric regression analysis. A standard approach is to measure the distance between a parametric and a non-parametric fit with a squared deviation measure. These tests inherit the curse of dimensionality from the non-parametric estimator. This results in a loss of power in finite samples and against local alternatives. This article proposes to circumvent the curse of dimensionality by projecting the residuals under the null hypothesis onto the space of additive functions. To estimate this projection, the smooth backfitting estimator is used. The asymptotic behaviour of the test statistic is derived and the consistency of a wild bootstrap procedure is established. The finite sample properties are investigated in a simulation study.

11.
The choice of smoothing determines the properties of nonparametric estimates of probability densities. In the discrimination problem, the choice is often tied to loss functions. A framework for the cross-validatory choice of smoothing parameters based on general loss functions is given. Several loss functions are considered as special cases. In particular, a family of loss functions, which is connected to discrimination problems, is directly related to measures of performance used in discrimination. Consistency results are given for a general class of loss functions which comprise this family of discriminant loss functions.

12.
In this paper, we study E-Bayesian and hierarchical Bayesian estimation of a parameter of the Pareto distribution under different loss functions. The definition of the E-Bayesian estimator of the parameter is provided. For the Pareto distribution with known scale parameter, formulas for the E-Bayesian and hierarchical Bayesian estimators of the shape parameter are derived under the different loss functions. Properties of the E-Bayesian estimators are established: (i) the relationships among E-Bayesian estimators under different loss functions, and (ii) the relationships between the E-Bayesian and hierarchical Bayesian estimators under the same loss function. A simulation example using the Monte Carlo method is given. Finally, the methods are applied to a practical problem involving golfers' income data; the results show that the proposed method is feasible and convenient for application.
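The E-Bayesian recipe is to average the ordinary Bayes estimate over a hyperprior on the prior's hyperparameters. The sketch below applies it to the Pareto shape under squared-error loss, with a gamma prior and a uniform hyperprior; all hyperparameter choices are hypothetical, and the paper's exact hyperpriors and loss functions may differ.

```python
import math
import random

random.seed(3)

# Hypothetical setup: Pareto data with known scale sigma and shape alpha;
# gamma(a, b) prior on alpha, uniform hyperprior on b over (0, c).
sigma, alpha_true, n = 1.0, 2.0, 50
data = [sigma * random.paretovariate(alpha_true) for _ in range(n)]
T = sum(math.log(x / sigma) for x in data)

a, c = 1.0, 2.0   # illustrative hyperparameter choices

def bayes_estimate(b):
    """Bayes estimate of the shape under squared-error loss:
    the posterior is gamma(a + n, b + T)."""
    return (a + n) / (b + T)

# E-Bayesian estimate: average the Bayes estimate over the hyperprior on b
# (midpoint rule over (0, c)).
m = 10_000
e_bayes = sum(bayes_estimate(c * (i + 0.5) / m) for i in range(m)) / m

# The same average in closed form: (a + n) * ln((T + c)/T) / c.
closed = (a + n) * math.log((T + c) / T) / c
print(round(e_bayes, 4), round(closed, 4))
```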

13.
Robust parameter design, also known as Taguchi's design of experiments, is a family of statistical optimization procedures designed to improve the functionality or quality characteristics of products or processes. In this article, we introduce a new performance measure based on asymmetric power loss functions for positive variables and discuss its applications to robust parameter design.

14.
A diagnostic key defines a hierarchical sequence of tests used to identify a specimen from a set of known taxa. The usual measure of the efficiency of a key, the expected number of tests per identification, may not be appropriate when the responses to tests are not known for all taxon/test pairs. An alternative measure is derived and it is shown that the test selected for use at each point in the sequence should depend on which measure is used. Two suggestions of Gower and Payne (1975), regarding test selection, are shown to be appropriate only to the new measure. Tests are usually selected by calculating the value of some selection criterion function. Functions are reviewed for use in each of the two situations, and new functions are derived. The functions discussed are shown to be interpretable in terms of the number of tests required to complete the key from the current point in the sequence, given that a particular test is selected. This interpretation enables the functions to be extended to select tests with different costs and allows recommendations to be made as to which function to use, depending on how many ‘good’ tests are available.

15.
Process capability (PC) indices measure the ability of a process of interest to meet the desired specifications under certain restrictions. There are a variety of capability indices available in the literature for different interest variables such as weights, lengths, thickness, and the lifetime of items among many others. The goal of this article is to study the generalized capability indices from the Bayesian viewpoint under different symmetric and asymmetric loss functions for the simple and mixture of generalized lifetime models. For our study purposes, we have covered a simple and two-component mixture of Maxwell distribution as a special case of the generalized class of models. A comparative discussion of the PC with the mixture models under Laplace and inverse Rayleigh is also included. Bayesian point estimation of maintenance performance of the system is also part of the study (considering the Maxwell failure lifetime model and the repair time model). A real-life example is also included to illustrate the procedural details of the proposed method.
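For context, the classical (non-Bayesian) capability indices the article generalizes can be computed in a few lines. The data and specification limits below are hypothetical.

```python
import statistics

def cp(data, lsl, usl):
    """Classical process capability index: Cp = (USL - LSL) / (6*sigma)."""
    return (usl - lsl) / (6 * statistics.stdev(data))

def cpk(data, lsl, usl):
    """Cpk additionally penalizes off-centre processes:
    min(USL - mean, mean - LSL) / (3*sigma)."""
    mu, s = statistics.mean(data), statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * s)

# Illustrative measurements; the specification limits are hypothetical.
weights = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9]
print(round(cp(weights, 9.0, 11.0), 2), round(cpk(weights, 9.0, 11.0), 2))
```

Cpk never exceeds Cp; the two coincide only when the process mean sits exactly at the midpoint of the specification limits.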

16.
We study the problem of deciding which of two normal random samples, at least one of them of small size, has the greater expected value. Unlike in the standard Bayesian approach, in which a single prior distribution and a single loss function are declared, we assume that a set of plausible priors and a set of plausible loss functions are elicited from the expert (the client or the sponsor of the analysis). The choice of the sample with the greater expected value is based on equilibrium priors, allowing for an impasse when some plausible priors and loss functions favour one sample and others favour the other.

17.
The main purpose of this paper is to formulate theories of universal optimality, in the sense that some criteria for performances of estimators are considered over a class of loss functions. It is shown that the difference of the second order terms between two estimators in any risk function is expressed as a form which is characterized by a peculiar value associated with the loss functions, which is referred to as the loss coefficient. This means that the second order optimality problem is completely characterized by the value of the loss coefficient. Furthermore, from the viewpoint of change of the loss coefficient, the relationship between two estimators is classified into six types. On the basis of this classification, the concept of universal second order admissibility is introduced. Some sufficient conditions are given to determine whether any estimators are universally admissible or not.

18.
This paper treats the problem of comparing different evaluations of procedures which rank the variances of k normal populations. Procedures are evaluated on the basis of appropriate loss functions for a particular goal. The goal considered involves ranking the variances of k independent normal populations when the corresponding population means are unknown. The variances are ranked by selecting samples of size n from each population and using the sample variances to obtain the ranking. Our results extend those of various authors who looked at the narrower problem of evaluating the standard procedure associated with selecting the smallest of the population variances (see, e.g., P. Somerville (1975)).

Different loss functions (both parametric and non-parametric) appropriate to the particular goal under consideration are proposed. Procedures are evaluated by the performance of their risk over a particular preference zone. The sample size n, the least favorable parametric configuration, and the maximum value of the risk are three quantities studied for each procedure. When k is small these quantities, calculated by numerical simulation, show which loss functions respond better and which respond worse to increases in sample size. Loss functions are compared with one another according to the extent of this response. Theoretical results are given for the case of asymptotically large k. It is shown that for certain cases the error incurred by using these asymptotic results is small when k is only moderately large.

This work is an outgrowth of and extends that of J. Reeves and M.J. Sobel (1987) by comparing procedures on the basis of the sample size (per population) required to obtain various bounds on the associated risk functions. New methodologies are developed to evaluate complete ranking procedures in different settings.

19.
A measure is the formal representation of the non-negative additive functions that abound in science. We review and develop the art of assigning Bayesian priors to measures. Where necessary, spatial correlation is delegated to correlating kernels imposed on otherwise uncorrelated priors. The latter must be infinitely divisible (ID) and hence described by the Lévy–Khinchin representation. Thus the fundamental object is the Lévy measure, the choice of which corresponds to different ID process priors. The general case of a Lévy measure comprising a mixture of assigned base measures leads to a prior process comprising a convolution of corresponding processes. Examples involving a single base measure are the gamma process, the Dirichlet process (for the normalized case) and the Poisson process. We also discuss processes that we call the supergamma and super-Dirichlet processes, which are double base measure generalizations of the gamma and Dirichlet processes. Examples of multiple and continuum base measures are also discussed. We conclude with numerical examples of density estimation.

20.
In this paper, the Bayes estimators for the mean and the square of the mean of a normal distribution with mean μ and known variance σ², relative to the LINEX loss function, are obtained. Their risk functions and Bayes risks under the LINEX and squared-error loss functions are compared with those of their respective alternative estimators, viz., the UMVUE and the Bayes estimators relative to squared-error loss. It is found that the Bayes estimators relative to the LINEX loss function dominate the alternative estimators in terms of both risk function and Bayes risk. It is also found that the Bayes estimators remain preferable over the alternative estimators when σ² is unknown.
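The standard Bayes rule under LINEX loss is d* = −(1/a)·ln E[exp(−aθ) | x], which for a normal posterior N(m, v) reduces to m − av/2. The sketch below verifies that closed form by Monte Carlo; the posterior parameters and LINEX shape a are illustrative values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def linex(delta, a=1.0, b=1.0):
    """LINEX loss: asymmetric, L(delta) = b*(exp(a*delta) - a*delta - 1)."""
    return b * (np.exp(a * delta) - a * delta - 1.0)

# Illustrative normal posterior N(m, v) for the mean.
m, v, a = 1.0, 0.25, 1.0
theta = rng.normal(m, np.sqrt(v), size=100_000)

# Closed-form Bayes rule under LINEX for a normal posterior: m - a*v/2.
d_star = m - a * v / 2

# Monte Carlo: minimize posterior expected LINEX loss over a grid.
grid = np.linspace(m - 1, m + 1, 401)
risks = [linex(d - theta, a).mean() for d in grid]
d_mc = grid[int(np.argmin(risks))]

print(d_star, round(d_mc, 3))  # the two minimizers agree
```

Note the asymmetry: with a > 0, overestimation (positive delta) is penalized exponentially while underestimation is penalized roughly linearly, which is why the optimal estimate is pulled below the posterior mean.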
