Similar documents
20 similar documents were retrieved (search time: 31 ms).
1.
The calibration of forecasts for a sequence of events has an extensive literature. Since calibration does not ensure ‘good’ forecasts, the notion of refinement was introduced to provide a structure into which methods for comparing well-calibrated forecasters could be embedded. In this paper we apply these two concepts, calibration and refinement, to tree-structured statistical probability prediction systems by viewing predictions in terms of the expected value of a response variable given the values of a set of explanatory variables. When all of the variables are categorical, we show that, under suitable conditions, branching at the terminal node of a tree by adding another explanatory variable yields a tree with more refined predictions.

2.
Many of the existing methods of finding calibration intervals in simple linear regression rely on the inversion of prediction limits. In this article, we propose an alternative procedure which involves two stages. In the first stage, we find a confidence interval for the value of the explanatory variable which corresponds to the given future value of the response. In the second stage, we enlarge the confidence interval found in the first stage to form a confidence interval, called the calibration interval, for the value of the explanatory variable which corresponds to the theoretical mean value of the future observation. In finding the confidence interval in the first stage, we use methods based on hypothesis testing and on the percentile bootstrap. When the errors are normally distributed, the coverage probability of the resulting calibration interval based on hypothesis testing is comparable to that of the classical calibration interval. In the case of non-normal errors, the coverage probability of the calibration interval based on hypothesis testing is much closer to the target value than that of the calibration interval based on the percentile bootstrap.
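As a point of reference for the inversion approach this paper compares against, here is a minimal sketch of a calibration interval obtained by inverting prediction limits in simple linear regression. All data and names are hypothetical, and a grid search stands in for an analytic (Fieller-type) inversion; this is not the paper's two-stage procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical calibration data: y = 2 + 3x + normal error
n = 30
x = np.linspace(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 1.0, n)

# Ordinary least-squares fit
b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))

def pred_halfwidth(x0, alpha=0.05):
    """Half-width of the (1 - alpha) prediction interval for a new y at x0."""
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    se = s * np.sqrt(1 + 1 / n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
    return t * se

# Invert: the calibration interval is the set of x0 whose prediction
# interval contains the observed future response y0.
y0 = 2 + 3 * 5.0                      # future observation taken at true x = 5
grid = np.linspace(0, 10, 2001)
inside = np.abs(y0 - (b0 + b1 * grid)) <= pred_halfwidth(grid)
lo, hi = grid[inside][0], grid[inside][-1]
x_hat = (y0 - b0) / b1                # classical point estimate
```

The interval (lo, hi) brackets the classical point estimate x_hat and, here, the true value 5.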

3.
In this paper, we derive some simple formulae to express the association between two random variables in the case of a linear relationship. One of these representations gives the cube of the correlation coefficient as the ratio of the skewness of the response variable to that of the explanatory variable. This result, along with other expressions of the correlation coefficient presented in this paper, has implications for choosing the response variable in linear regression modelling.
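The cube-of-correlation identity stated above can be checked numerically; a sketch assuming a linear model with a skewed explanatory variable and symmetric (normal) errors, which is one setting in which the identity holds exactly:

```python
import numpy as np
from scipy.stats import skew, pearsonr

rng = np.random.default_rng(42)
n = 500_000

# Skewed explanatory variable (exponential, skewness 2), symmetric errors
x = rng.exponential(1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)

r = pearsonr(x, y)[0]
ratio = skew(y) / skew(x)

# The identity: r**3 equals skew(y)/skew(x)
print(r**3, ratio)
```

With beta = 2 and unit variances, the theoretical value of both quantities is (2 / sqrt(5))**3, about 0.716.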

4.

Ridge penalized least-squares estimators have been suggested as an alternative to minimum penalized sum of squares estimates in the presence of collinearity among the explanatory variables in semiparametric regression models (SPRMs). This paper studies the local influence of minor perturbations on the ridge estimates in the SPRM. Diagnostics under perturbation of the ridge penalized sum of squares, the response variable, the explanatory variables and the ridge parameter are considered. Some local influence diagnostics are given. A Monte Carlo simulation study and a real example are used to illustrate the proposed perturbations.

5.
An alternative graphical method, called the SSR plot, is proposed for use with a multiple regression model. The new method uses the fact that the sum of squares for regression (SSR) of two explanatory variables can be partitioned into the SSR of one variable and the increment in SSR due to the addition of the second variable. The SSR plot represents each explanatory variable as a vector in a half circle. It shows that the explanatory variables whose vectors lie closer to the horizontal axis have stronger effects on the response variable. Furthermore, for a regression model with two explanatory variables, the magnitude of the angle between two vectors can be used to identify suppression.
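The partition the SSR plot is built on can be verified directly; a minimal sketch with hypothetical data, computing SSR from an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical model with two explanatory variables
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = x1 + x2 + rng.normal(0, 0.5, size=n)

def ssr(y, X):
    """Regression sum of squares for an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((yhat - y.mean())**2)

ssr1 = ssr(y, x1)                              # SSR of x1 alone
ssr12 = ssr(y, np.column_stack([x1, x2]))      # SSR of x1 and x2 together
increment = ssr12 - ssr1                       # extra SSR from adding x2
```

The increment is non-negative by construction, and is large here because x2 genuinely enters the model.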

6.
Consider a vector-valued response variable related to a vector-valued explanatory variable through a normal multivariate linear model. The multivariate calibration problem deals with statistical inference on unknown values of the explanatory variable. The problem addressed is the construction of joint confidence regions for several unknown values of the explanatory variable. The problem is investigated when the variance-covariance matrix is a scalar multiple of the identity matrix and also when it is a completely unknown positive definite matrix. The problem is solved in only two cases: (i) the response and explanatory variables have the same dimensions, and (ii) the explanatory variable is a scalar. In the former case, exact joint confidence regions are derived based on a natural pivot statistic. In the latter case, the joint confidence regions are only conservative. Computational aspects and the practical implementation of the confidence regions are discussed and illustrated using an example.

7.
In this paper a new family of test statistics is presented for testing independence between a binary response Y and an ordered categorical explanatory variable X (doses) against the alternative hypothesis of an increasing dose-response relationship between Y and X. The properties of these test statistics are studied. This new family of test statistics is based on the family of φ-divergence measures and contains the likelihood ratio test as a particular case. We pay special attention to the family of test statistics associated with the power-divergence family. A simulation study is included in order to analyze the behavior of the power-divergence family of test statistics.

8.
Suppose that the conditional density of a response variable given a vector of explanatory variables is parametrically modelled, and that data are collected by a two-phase sampling design. First, a simple random sample is drawn from the population. The stratum membership in a finite number of strata of the response and explanatory variables is recorded for each unit. Second, a subsample is drawn from the phase-one sample such that the selection probability is determined by the stratum membership. The response and explanatory variables are fully measured at this phase. We synthesize existing results on nonparametric likelihood estimation and present a streamlined approach for the computation and the large sample theory of profile likelihood in four different situations. The amount of information in terms of data and assumptions varies depending on whether the phase-one data are retained, the selection probabilities are known, and/or the stratum probabilities are known. We establish and illustrate numerically the order of efficiency among the maximum likelihood estimators, according to the amount of information utilized, in the four situations.

9.
In many experiments, not all explanatory variables can be controlled. When the units arise sequentially, different approaches may be used. The authors study a natural sequential procedure for “marginally restricted” D‐optimal designs. They assume that one set of explanatory variables (x1) is observed sequentially, and that the experimenter responds by choosing an appropriate value of the explanatory variable x2. In order to solve the sequential problem a priori, the authors consider the problem of constructing optimal designs with a prior marginal distribution for x1. This eliminates the influence of units already observed on the next unit to be designed. They give explicit designs for various cases in which the mean response follows a linear regression model; they also consider a case study with a nonlinear logistic response. They find that the optimal strategy often consists of randomizing the assignment of the values of x2.

10.

In this paper, under the assumption of a linear relationship between two variables, we provide an alternative simple method of proving the existing result connecting the correlation coefficient with the skewness coefficients of the response and explanatory variables. Further, we give a relationship between the correlation coefficient and the coefficients of kurtosis of the response and explanatory variables, again assuming a linear relationship between the two variables. A simple alternative way of deriving the formula, which helps in determining the direction of dependence in linear regression, is also discussed.
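One identity of the kind described can be checked numerically. The sketch below assumes a linear model with normal errors (so the error has zero excess kurtosis); under that assumption the fourth power of the correlation equals the ratio of the excess kurtosis of the response to that of the explanatory variable. The exact formula used in the paper is not quoted in the abstract, so this is an illustration of the relationship type, not the paper's own derivation.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(7)
n = 1_000_000

x = rng.exponential(1.0, n)                  # excess kurtosis 6
y = 0.5 + 1.0 * x + rng.normal(0, 1.0, n)    # normal errors (excess kurtosis 0)

r = np.corrcoef(x, y)[0, 1]
ratio = kurtosis(y) / kurtosis(x)   # scipy returns Fisher (excess) kurtosis

print(r**4, ratio)
```

With unit variances for both x and the error, the theoretical value of both quantities is 0.25.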

11.
Summary. Nonsymmetric correspondence analysis is a model meant for the analysis of the dependence in a two-way contingency table, and is an alternative to correspondence analysis. Correspondence analysis is based on the decomposition of Pearson's Φ²-index. Nonsymmetric correspondence analysis is based on the decomposition of the Goodman-Kruskal τ-index for predictability. In this paper, we approach nonsymmetric correspondence analysis as a statistical model based on a probability distribution. We provide algorithms for maximum likelihood and least-squares estimation with linear constraints upon model parameters. The nonsymmetric correspondence analysis model has many properties that can be useful for prediction analysis in contingency tables. Predictability measures are introduced to identify the categories of the response variable that can be best predicted, as well as the categories of the explanatory variable having the highest predictive power. We describe the interpretation of model parameters in two examples. In the end, we discuss the relations of nonsymmetric correspondence analysis with other reduced-rank models.
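The Goodman-Kruskal τ-index that nonsymmetric correspondence analysis decomposes can be computed directly from a contingency table; a minimal sketch with a hypothetical 3x3 table:

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau: proportional reduction in the error of
    predicting the column (response) category given the row
    (explanatory) category."""
    P = table / table.sum()
    col = P.sum(axis=0)                             # response marginals
    err_marginal = 1.0 - np.sum(col**2)             # error ignoring rows
    row = P.sum(axis=1)                             # explanatory marginals
    err_conditional = 1.0 - np.sum(P**2 / row[:, None])
    return (err_marginal - err_conditional) / err_marginal

# Hypothetical table (rows: explanatory categories, cols: response categories)
table = np.array([[30.0, 5, 5],
                  [5, 30, 5],
                  [5, 5, 30]])
tau = goodman_kruskal_tau(table)   # 25/64 = 0.390625 for this table
```

τ lies in [0, 1]: 0 when the rows carry no predictive information about the columns, 1 when each row determines its column exactly.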

12.
In this paper we develop nonparametric methods for regression analysis when the response variable is subject to censoring and/or truncation. The development is based on a data completion principle that enables us to apply, via an iterative scheme, nonparametric regression techniques to iteratively completed data from a given sample with censored and/or truncated observations. In particular, locally weighted regression smoothers and additive regression models are extended to left-truncated and right-censored data. Nonparametric regression analysis is applied to the Stanford heart transplant data, which have been analyzed by previous authors using semiparametric regression methods, and provides new insights into the relationship between expected survival time after a heart transplant and explanatory variables.

13.
Guimei Zhao, Statistics, 2017, 51(3): 609-614
In this paper, we deal with hypothesis testing problems for univariate linear calibration, where a normally distributed response variable and an explanatory variable are involved, and observations of the response variable corresponding to known values of the explanatory variable are used to make inferences about a single unknown value of the explanatory variable. The uniformly most powerful unbiased tests for both one-sided and two-sided hypotheses are constructed and verified. The power behaviour of the proposed tests is numerically compared with that of the existing method, and simulations show that the proposed tests improve power.

14.
Generalized additive models for location, scale and shape
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
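The core idea of modelling distribution parameters beyond the mean can be sketched in a few lines. The toy maximum-likelihood fit below is not the GAMLSS algorithm itself (which uses penalized likelihood with backfitting); it only illustrates the modelling idea, with both the location and the log-scale of a normal response taken to be linear in a single hypothetical covariate:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(0, 1, n)

# Both location and scale depend on x (the heart of GAMLSS-style modelling)
mu_true = 1.0 + 2.0 * x
sigma_true = np.exp(-1.0 + 1.5 * x)
y = rng.normal(mu_true, sigma_true)

def negloglik(theta):
    """Negative log-likelihood for mu = b0 + b1*x, log(sigma) = c0 + c1*x."""
    b0, b1, c0, c1 = theta
    mu = b0 + b1 * x
    log_sigma = c0 + c1 * x
    return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma))**2)

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
b0, b1, c0, c1 = fit.x
```

The fitted coefficients recover both the mean structure (b0, b1) and the scale structure (c0, c1), something a mean-only regression cannot do.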

15.
In this contribution we aim at improving ordinal variable selection in the context of causal models for credit risk estimation. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric and thus keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal level. A noticeable instance of this regards the situation in which ordinal variables result from rankings of companies that are to be evaluated according to different macro and micro economic aspects, leading to ordinal covariates that correspond to various ratings entailing different magnitudes of the probability of default. For each given covariate, we suggest partitioning the statistical units into as many groups as the number of observed levels of the covariate. We then assume individual defaults to be homogeneous within each group and heterogeneous across groups. Our aim is to compare, and therefore select, the partition structures resulting from the consideration of different explanatory covariates. The metric we choose for variable comparison is the posterior probability of each partition. The application of our proposal to a European credit risk database shows that it performs well, leading to a coherent and clear method for variable averaging of the estimated default probabilities.

16.
An intrinsic association matrix is introduced to measure category-to-variable association based on proportional reduction of prediction error by an explanatory variable. The normalization of the diagonal gives rise to the expected rates of error-reduction and the off-diagonal yields expected distributions of the rates of error for all response categories. A general framework of association measures based on the proposed matrix is established using an application-specific weight vector. A hierarchy of equivalence relations defined by the association matrix and vector is shown. Applications to financial and survey data together with simulation results are presented.

17.
Regression analysis is one of the most used statistical methods for data analysis. There are, however, many situations in which one cannot base inference solely on f(y | x; β), the conditional probability (density) function for the response variable Y, given x, the covariates. Examples include missing data where the missingness is non-ignorable, sampling surveys in which subjects are selected on the basis of the Y-values and meta-analysis where published studies are subject to 'selection bias'. The conventional approaches require the correct specification of the missingness mechanism, sampling probability and probability for being published respectively. In this paper, we propose an alternative estimating procedure for β based on an idea originated by Kalbfleisch. The novelty of this method is that none of the missingness probability mechanisms mentioned above is required to be specified. Asymptotic efficiency calculations and simulation studies were conducted to compare the method proposed with the two existing methods: the conditional likelihood and the weighted estimating function approaches.

18.
This article investigates case-deletion influence analysis via Cook's distance and local influence analysis via conformal normal curvature for partially linear models with the response missing at random. A local influence approach is developed to assess the sensitivity of the parametric and nonparametric estimators to various perturbations, such as case-weight, response-variable, explanatory-variable, and parameter perturbations, on the basis of semiparametric estimating equations, which are constructed using the inverse probability weighted approach rather than a likelihood function. Residuals and generalized leverage are also defined. Simulation studies and a dataset taken from the AIDS Clinical Trials are used to illustrate the proposed methods.

19.
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e. directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules. The minimum-risk-complexity rule was more likely to choose the correct model than were the other tree-selection rules (1) with weaker relationships and equally important explanatory variables; and (2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
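A rough analogue of the study's variable-selection task can be set up with a modern CART implementation; the sketch below uses scikit-learn's DecisionTreeRegressor (an assumption of convenience, not the software, tuning, or tree-selection rules used in the study) with a unimodal response, two important explanatory variables and two pure-noise variables:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 2000

# Columns 0 and 1 drive a unimodal response; columns 2 and 3 are noise
X = rng.uniform(-1, 1, size=(n, 4))
y = -(X[:, 0]**2) - (X[:, 1]**2) + rng.normal(0, 0.1, n)

tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)
imp = tree.feature_importances_   # impurity-based importances, sum to 1
```

With a strong relationship and a large sample, the importances of the two relevant variables dominate, mirroring the regime in which the study found tree methods competitive with stepwise OLS.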

20.
The suitability of a normal linear regression model may require transformation of the original response, and transformation diagnostics are designed to detect the need for such transformation. A common approach to transformation diagnostics is to construct an artificial explanatory variable, which is then tested in the augmented linear regression model for the original response. This paper describes corresponding diagnostics based directly on score statistics with accurate approximations for their standard errors. Several transformation models are covered. Some numerical illustrations are given.  相似文献   
