Similar documents
20 similar documents found (search time: 15 ms)
1.
Sensitivity analysis in regression is concerned with assessing the sensitivity of the results of a regression model (e.g., the objective function, the regression parameters, and the fitted values) to changes in the data. Sensitivity analysis in least squares linear regression has seen a great surge of research activities over the last three decades. By contrast, sensitivity analysis in non-linear regression has received very little attention. This paper deals with the problem of local sensitivity analysis in non-linear regression. Closed-form general formulas are provided for the sensitivities of three standard methods for the estimation of the parameters of a non-linear regression model based on a set of data. These methods are the least squares, the minimax, and the least absolute value methods. The effectiveness of the proposed measures is illustrated by application to several non-linear models including the ultrasonic data and the onion yield data. The proposed sensitivity measures are shown to deal effectively with the detection of influential observations in non-linear regression models.
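As a rough numerical counterpart to the closed-form sensitivities described above, one can refit a non-linear least-squares model after perturbing one observation at a time. The exponential model, simulated data, and function names below are hypothetical illustrations, not the paper's formulas or its ultrasonic/onion-yield datasets:

```python
import numpy as np

# Hypothetical model y = a*exp(b*x) on simulated data; values are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.1, x.size)

def fit_ls(yv, iters=20):
    """Least-squares fit of (a, b): log-linear start, Gauss-Newton refinement."""
    b, log_a = np.polyfit(x, np.log(yv), 1)   # starting values from log-linearization
    a = np.exp(log_a)
    for _ in range(iters):
        f = a * np.exp(b * x)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # Jacobian
        step, *_ = np.linalg.lstsq(J, yv - f, rcond=None)
        a, b = a + step[0], b + step[1]
    return np.array([a, b])

base = fit_ls(y)
# Finite-difference sensitivity of each parameter to each observation.
eps = 1e-5
sens = np.empty((x.size, 2))
for i in range(x.size):
    yp = y.copy()
    yp[i] += eps
    sens[i] = (fit_ls(yp) - base) / eps
```

Each row of `sens` approximates the derivative of the parameter estimates with respect to one observation; unusually large rows flag potentially influential observations.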

2.
Abstract

This paper deals with the problem of local sensitivity analysis in regression, i.e., how sensitive the results of a regression model (objective function, parameters, and dual variables) are to changes in the data. We use a general formula for local sensitivities in optimization problems to calculate the sensitivities in three standard regression problems (least squares, minimax, and least absolute values). Closed formulas for all sensitivities are derived. Sensitivity contours are presented to help in assessing the sensitivity of each observation in the sample. The dual problems of the minimax and least absolute values are obtained and interpreted. The proposed sensitivity measures are shown to deal more effectively with the masking problem than the existing methods. The methods are illustrated by their application to some examples and graphical illustrations are given.

3.
For the analysis of square contingency tables with ordered categories, Agresti (1988) introduced a model having the structure of uniform association plus a main-diagonal parameter. This paper extends that model. The extended model has the structure of uniform association plus two-diagonals-parameter, and it is a special case of the quasi-uniform association model introduced by Goodman (1979). The Danish occupational mobility table data are analyzed using the models introduced here.

4.
Based on ordered ranked set sample, Bayesian estimation of the model parameter as well as prediction of the unobserved data from Rayleigh distribution are studied. The Bayes estimates of the parameter involved are obtained using both squared error and asymmetric loss functions. The Bayesian prediction approach is considered for predicting the unobserved lifetimes based on a two-sample prediction problem. A real life dataset and simulation study are used to illustrate our procedures.
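The conjugate structure behind such Bayes estimates can be sketched for the Rayleigh scale. This minimal illustration assumes a plain i.i.d. sample and a hypothetical inverse-gamma prior; the paper's ordered ranked set sampling and two-sample prediction setup are not reproduced here:

```python
import numpy as np

# Plain i.i.d. Rayleigh sample for illustration; ordered ranked set sampling
# would change the likelihood. Prior hyperparameters a0, b0 are hypothetical.
rng = np.random.default_rng(1)
sigma = 2.0
data = rng.rayleigh(scale=sigma, size=50)

# With theta = sigma^2, the Rayleigh likelihood is conjugate to an
# inverse-gamma(a0, b0) prior: posterior is inverse-gamma(a0 + n, b0 + sum(x^2)/2).
a0, b0 = 2.0, 1.0
n = data.size
a_post = a0 + n
b_post = b0 + np.sum(data ** 2) / 2.0

theta_bayes = b_post / (a_post - 1.0)  # posterior mean = Bayes estimate under squared error
theta_mode = b_post / (a_post + 1.0)   # posterior mode, for contrast (asymmetric-loss
                                       # estimates similarly differ from the mean)
```

With `sigma = 2.0` the target is `theta = 4`, and the posterior mean lands near it for moderate sample sizes.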

5.
Independent random samples (of possibly unequal sizes) are drawn from k (≥2) uniform populations having unknown scale parameters μ1,…,μk. The problem of componentwise estimation of ordered parameters is investigated. The loss function is assumed to be squared error, and the cases of known and unknown ordering among μ1,…,μk are dealt with separately. Sufficient conditions for an estimator to be inadmissible are provided and, as a consequence, many natural estimators are shown to be inadmissible. Better estimators are provided.

6.
The objective of this paper is to investigate through simulation the possible presence of the incidental parameters problem when performing frequentist model discrimination with stratified data. In this context, model discrimination amounts to considering a structural parameter taking values in a finite space, with k points, k≥2. This setting does not appear to have been considered previously in the literature on the Neyman–Scott phenomenon. Here we provide Monte Carlo evidence of the severity of the incidental parameters problem also in the model discrimination setting and propose a remedy for a special class of models. In particular, we focus on models that are scale families in each stratum. We consider traditional model selection procedures, such as the Akaike and Takeuchi information criteria, together with the best frequentist selection procedure based on maximization of the marginal likelihood induced by the maximal invariant, or of its Laplace approximation. Results of two Monte Carlo experiments indicate that when the sample size in each stratum is fixed and the number of strata increases, correct selection probabilities for traditional model selection criteria may approach zero, unlike what happens for model discrimination based on exact or approximate marginal likelihoods. Finally, two examples with real data sets are given.

7.
Previous time series applications of qualitative response models have ignored features of the data, such as conditional heteroscedasticity, that are routinely addressed in time series econometrics of financial data. This article addresses this issue by adding Markov-switching heteroscedasticity to a dynamic ordered probit model of discrete changes in the bank prime lending rate and estimating via the Gibbs sampler. The dynamic ordered probit model of Eichengreen, Watson, and Grossman allows for serial autocorrelation in probit analysis of a time series, and this article demonstrates the relative simplicity of estimating a dynamic ordered probit using the Gibbs sampler instead of the Eichengreen et al. maximum likelihood procedure. In addition, the extension to regime-switching parameters and conditional heteroscedasticity is easy to implement under Gibbs sampling. The article compares tests of goodness of fit between dynamic ordered probit models of the prime rate that have constant variance and conditional heteroscedasticity.

8.
Marginal changes of interacted variables and interaction terms in random parameters ordered response models are calculated incorrectly in econometric software. We derive the correct formulas for calculating these marginal changes. In our empirical example, we observe significant changes not only in the magnitude of the marginal effects but also in their standard errors, suggesting that the incorrect estimation of these marginal effects, as is commonly practiced, can lead to biased inferences.

9.
Exact null and alternative distributions of the two-way maximally selected χ2 for interaction between the ordered rows and columns are derived for the normal and Poisson models, respectively. The method is one of the multiple comparison procedures for ordered parameters and is useful for defining a block interaction or a two-way change-point model as a simple alternative to the two-way additive model. The construction of a confidence region for the two-way change-point is then described. An important application is found in a dose-response clinical trial with ordered categorical responses, where detecting the dose level which gives significantly higher responses than the lower doses can be formulated as a problem of detecting a change in the interaction effects.

10.
This article considers fixed effects (FE) estimation for linear panel data models under possible model misspecification when both the number of individuals, n, and the number of time periods, T, are large. We first clarify the probability limit of the FE estimator and argue that this probability limit can be regarded as a pseudo-true parameter. We then establish the asymptotic distributional properties of the FE estimator around the pseudo-true parameter when n and T jointly go to infinity. Notably, we show that the FE estimator suffers from the incidental parameters bias, whose leading term is of order O(T^(-1)), and even after the incidental parameters bias is completely removed, the rate of convergence of the FE estimator depends on the degree of model misspecification and is either (nT)^(-1/2) or n^(-1/2). Second, we establish asymptotically valid inference on the (pseudo-true) parameter. Specifically, we derive the asymptotic properties of the clustered covariance matrix (CCM) estimator and the cross-section bootstrap, and show that they are robust to model misspecification. This establishes a rigorous theoretical ground for the use of the CCM estimator and the cross-section bootstrap when model misspecification and the incidental parameters bias (in the coefficient estimate) are present. We conduct Monte Carlo simulations to evaluate the finite sample performance of the estimators and inference methods, together with a simple application to unemployment dynamics in the U.S.
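A minimal sketch of the within (FE) estimator with a cluster-by-individual variance, in the spirit of (though far simpler than) the CCM estimator discussed above; the data-generating process and all names are hypothetical simulation choices, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 200, 10
alpha = rng.normal(0.0, 1.0, n)                    # individual fixed effects
x = rng.normal(0.0, 1.0, (n, T)) + alpha[:, None]  # regressor correlated with effects
beta = 1.5
y = beta * x + alpha[:, None] + rng.normal(0.0, 1.0, (n, T))

# Within transformation: demean each individual's series, then pooled OLS.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()

# Cluster-by-individual variance: sum per-individual scores, sandwich form.
u = yd - beta_fe * xd
scores = (xd * u).sum(axis=1)                      # one score per individual
se_cluster = np.sqrt((scores ** 2).sum()) / (xd ** 2).sum()
```

Because the regressor is correlated with the individual effects, pooled OLS on the raw data would be badly biased, while the within estimator recovers `beta`.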

11.
There are many models that require the estimation of a set of ordered parameters. For example, multivariate analysis of variance often is formulated as testing for the equality of the parameters versus an ordered alternative. This problem, referred to as isotonic inference, constrained inference, or isotonic regression, has led to the development of general solutions, not often easy to apply in special models. In this expository paper, we study the special case of a separable convex quadratic programming problem for which the optimality conditions lead to a readily solved linear complementarity problem in the Lagrange multipliers, and subsequently to an equivalent linear programming problem, whose solution can be used to recover the solution of the original isotonic problem. The method can be applied to estimating ordered correlations, ordered binomial probabilities, ordered Poisson parameters, ordered exponential scale parameters, or ordered risk differences.
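In practice, the simplest isotonic regression problems of the kind mentioned here are often solved by the pool-adjacent-violators algorithm (PAVA) rather than the linear-programming route the paper develops; a minimal sketch of PAVA:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit under x1 <= ... <= xn."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block holds [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while the ordering is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out

print(pava([3.0, 1.0, 2.0, 4.0]))  # -> [2.0, 2.0, 2.0, 4.0]
```

The first three values violate the ordering and are pooled to their mean, while the last is left untouched.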

12.
This article proposes a simulation-based methodology for theoretical model comparison, with application to two time series road accident models. The model comparison exercise helps to quantify the main differences and similarities between the two models and comprises three main stages: (1) simulation of time series from a true model with predefined properties; (2) estimation of the alternative model using the simulated data; (3) sensitivity analysis to quantify the effect of changes in the true model parameters on the alternative model's parameter estimates through analysis of variance (ANOVA). The proposed methodology is applied to two time series road accident models: the UCM (unobserved components model) and DRAG (Demand for Road Use, Accidents and their Severity). Assuming that the real data-generating process is the UCM, new datasets approximating the road accident data are generated, and DRAG models are estimated using the simulated data. Since these two methodologies are usually assumed to be equivalent, in the sense that both models accurately capture the true effects of the regressors, we specifically address the modeling of the stochastic trend through the alternative model. The stochastic trend is the time-varying component and is one of the crucial factors in time series road accident data. Theoretically, it can easily be modeled through the UCM, given its modeling properties; however, properly capturing the effect of a non-stationary component such as a stochastic trend in a stationary explanatory model such as DRAG is challenging. After obtaining the parameter estimates of the alternative model (DRAG), the estimates of the true and alternative models are compared and the differences are quantified through experimental design and ANOVA techniques. It is observed that the effects of the explanatory variables used in the UCM simulation are only partially captured by the respective DRAG coefficients. A priori, this could be due to multicollinearity, but the results of both the simulation of the UCM data and the estimation of the DRAG models reveal no significant static correlation among the regressors. Instead, ANOVA shows that this bias in the regression coefficient estimates is caused by the stochastic trend present in the simulated data. The results of the methodological development therefore suggest that the stochastic component present in the data should be treated accordingly through preliminary, exploratory data analysis.

13.
This paper discusses the specific problems of age-period-cohort (A-P-C) analysis within the general framework of interaction assessment for two-way cross-classified data with one observation per cell. The A-P-C multiple classification model containing the effects of age groups (rows), periods of observation (columns), and birth cohorts (diagonals of the two-way table) is characterized as one of a special class of models involving interaction terms assumed to have very specific forms. The so-called A-P-C identification problem, which results from the use of a particular interaction structure for detecting cohort effects, is shown to manifest itself in the form of an exact linear dependency among the columns of the design matrix. The precise relationship holding among these columns is derived, as is an explicit formula for the bias in the parameter estimates resulting from an incorrect specification of an assumed restriction on the parameters required to solve the normal equations. Current methods for modeling A-P-C data are critically reviewed, an illustrative numerical example is presented, and one potentially promising analysis strategy is discussed. However, given the large number of possible sources of error in A-P-C analyses, it is strongly recommended that the results of such analyses be interpreted with a great deal of caution.

14.
A test procedure for testing homogeneity of location parameters against a simple ordered alternative is proposed for k (k ≥ 2) members of the two-parameter exponential distribution under unbalanced data and heteroscedasticity of the scale parameters. The relevant one-sided and two-sided simultaneous confidence intervals (SCIs) for all k(k − 1)/2 ordered pairwise differences of location parameters are also proposed. A simulation-based study revealed that the proposed procedure is better than a recently proposed procedure in terms of power, coverage probability, and average volume of SCIs. The implementation of the proposed procedure is demonstrated through real-life data.

15.
This short article extends well-known threshold models to the ordered response setting. We consider the case where the sample is endogenously split to estimate regime-dependent coefficients for one variable of interest, while keeping the other coefficients and auxiliary parameters constant across the threshold. We use Monte Carlo methods to examine the behavior of the model. In addition, we derive the formulae for the partial effects associated with the model. We apply our threshold model to the relationship between income and self-reported happiness using data drawn from the U.S. General Social Survey. While the findings suggest the presence of a threshold in the income-happiness gradient at approximately U.S. $76,000, no evidence is found in support of a satiation point. Supplementary materials for this article are available online.

16.
The model parameters of linear state space models are typically estimated with maximum likelihood estimation, where the likelihood is computed analytically with the Kalman filter. Outliers can deteriorate the estimation. Therefore we propose an alternative estimation method. The Kalman filter is replaced by a robust version and the maximum likelihood estimator is robustified as well. The performance of the robust estimator is investigated in a simulation study. Robust estimation of time varying parameter regression models is considered as a special case. Finally, the methodology is applied to real data.
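A minimal sketch of the general idea, assuming a scalar local-level model and a Huber-clipped standardized innovation; this illustrates robust filtering in general, not the authors' specific robust Kalman filter or robustified likelihood:

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber psi-function: identity near zero, clipped beyond c."""
    return r if abs(r) <= c else c * np.sign(r)

def robust_kalman_1d(y, q=0.1, r=1.0, c=1.345):
    """Local-level model a_t = a_{t-1} + eta_t, y_t = a_t + eps_t.
    The standardized innovation passes through the Huber psi before the
    state update, so a single gross outlier moves the state only slightly."""
    a, p = y[0], 1.0
    out = []
    for yt in y:
        p = p + q                                # predict step
        s = p + r                                # innovation variance
        v = (yt - a) / np.sqrt(s)                # standardized innovation
        k = p / s                                # Kalman gain
        a = a + k * np.sqrt(s) * huber_psi(v, c) # robustified update
        p = (1.0 - k) * p
        out.append(a)
    return np.array(out)

y = np.zeros(60)
y[30] = 100.0                                    # single gross outlier
filtered = robust_kalman_1d(y)
```

A standard Kalman filter would let the outlier pull the filtered state up by roughly the gain times the innovation; the clipped update barely moves, and the state decays back toward zero afterwards.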

17.
Nonparametric methods in factorial designs
Summary In this paper, we summarize some recent developments in the analysis of nonparametric models in which the classical ANOVA models are generalized: the assumption of normality is relaxed, the structure of the designs is placed in a broader framework, and the concept of treatment effects is redefined. Continuity of the distribution functions is not assumed, so that both data from continuous distributions and data with ties are covered by this general setup. In designs with independent observations as well as in repeated measures designs, the hypotheses are formulated by means of the distribution functions. The main results are given in a unified form. Some applications to special designs are considered, where in simple designs some well-known statistics (such as the Kruskal-Wallis statistic and the χ2-statistic for dichotomous data) come out as special cases. The general framework presented here enables the nonparametric analysis of data with continuous distribution functions as well as arbitrary discrete data such as count data, ordered categorical, and dichotomous data. Received: October 13, 1999; revised version: June 26, 2000

18.
ABSTRACT

Estimation of the common location parameter of two exponential populations is considered when the scale parameters are ordered, using type-II censored samples. A general inadmissibility result is proved which helps in deriving improved estimators. Further, a class of estimators dominating the MLE is derived by applying Kubokawa's integrated expression of risk difference (IERD) approach. Extension of the results to general k (≥ 2) populations is discussed. Finally, all the proposed estimators are compared through simulation.

19.
Models with large numbers of parameters (hundreds or thousands) often behave as if they depend upon only a few of them, with the rest having comparatively little influence. One challenge of sensitivity analysis with such models is screening the parameters to identify the influential ones, and then characterizing their influences.

Large models often require significant computing resources to evaluate their output, and so a good screening mechanism should be efficient: it should minimize the number of times a model must be exercised. This paper describes an efficient procedure to perform sensitivity analysis on deterministic models with specified ranges or probability distributions for each parameter.

It is based on repeated exercising of the model, which can be treated as a black box. Statistical checks can ensure that the screening identified parameters that account for the bulk of the model variation. Subsequent sensitivity analysis can use the screening information to reduce the investment required to characterize the influence of influential and other parameters.

The procedure exploits simplifications in the dependence of a model output on model inputs. It works best where a small number of parameters are much more influential than all the rest. The method is much more sensitive to the number of influential parameters than to the total number of parameters. It is most effective when linear or quadratic effects dominate higher order effects and complex interactions.

The paper presents a set of Mathematica functions that can be used to create a variety of experimental designs useful for sensitivity analysis, including simple random, Latin hypercube, and fractional factorial sampling. Each sampling method can use discretization, folding, grouping, and replication to create composite designs. These techniques have been combined in a composite approach called Iterated Fractional Factorial Design (IFFD).

The procedure is applied to a model of nuclear fuel waste disposal, and to simplified example models to demonstrate the concepts involved.
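Latin hypercube sampling, one of the designs mentioned above, can be sketched in a few lines; this is a generic construction in Python, not the paper's Mathematica implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, seed=None):
    """Stratified design: exactly one point per equal-probability stratum of
    each parameter, with strata paired at random across parameters."""
    rng = np.random.default_rng(seed)
    # Row i starts in stratum [i/n, (i+1)/n) for every parameter...
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    # ...then each column is shuffled independently to randomize the pairing.
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u  # values in [0, 1); map through marginal inverse CDFs as needed

X = latin_hypercube(10, 3, seed=0)
```

Each column of `X` hits all ten strata exactly once, which is what gives the design its one-dimensional space-filling property.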

20.
For square contingency tables with ordered categories, this paper proposes two kinds of extensions of the marginal homogeneity model and gives decompositions for the linear diagonals-parameter symmetry model considered by Agresti (1983a) using the proposed models. The proposed models are also applied to unaided vision data.
