Similar Literature
20 similar documents found
1.
Because the eight largest bank failures in United States history have occurred since 1973 [24], the development of early-warning problem-bank identification models is an important undertaking. It has been shown previously [3] [5] that M-estimator robust regression provides such a model. The present paper develops a similar model for the multivariate case using both a robustified Mahalanobis distance analysis [21] and principal components analysis [10]. In addition to providing a successful presumptive problem-bank identification model, combining the use of the M-estimator robust regression procedure and the robust Mahalanobis distance procedure with principal components analysis is also demonstrated to be a general method of outlier detection. The results from using these procedures are compared to some previously suggested procedures, and general conclusions are drawn.
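The screening idea behind a robustified distance analysis can be sketched in a few lines: center each financial ratio at its median, scale it by its median absolute deviation (MAD), and flag observations whose squared standardized distance is large. This is only an illustrative simplification (it ignores cross-correlations, which a full robust Mahalanobis analysis with a robust scatter matrix would capture); the variable names and cutoff are hypothetical, not the paper's procedure.

```python
# Illustrative sketch: median/MAD-standardized distance screening for
# presumptive outliers (e.g., problem banks).  Cutoff and data are made up.
import statistics

def mad(values):
    """Median absolute deviation from the median."""
    m = statistics.median(values)
    return statistics.median(abs(v - m) for v in values)

def robust_flags(rows, cutoff=9.0):
    cols = list(zip(*rows))                       # variables as columns
    centers = [statistics.median(c) for c in cols]
    scales = [mad(c) for c in cols]
    flagged = []
    for i, row in enumerate(rows):
        d2 = sum(((v - m) / s) ** 2
                 for v, m, s in zip(row, centers, scales))
        if d2 > cutoff:
            flagged.append(i)
    return flagged

ratios = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.0), (1.0, 1.1), (10.0, 10.0)]
print(robust_flags(ratios))   # the gross outlier at index 4 is flagged
```

Note that a classical (mean/covariance-based) distance can be masked by the very outlier it should detect, since the outlier inflates the scatter estimate; the median/MAD center and scale avoid that masking.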

2.
Risk Analysis, 2018, 38(10): 2073–2086
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real‐world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation.
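The inverse use of a log-normal SSD can be made concrete: HCp is the p-th percentile of the fitted distribution of species toxicity values. The sketch below (my own illustration, not one of the paper's estimators) contrasts a classical fit using the mean and standard deviation of the log data with a robust fit that swaps in the median and a MAD-based scale (MAD × 1.4826 is consistent for the normal standard deviation); the toxicity values are invented.

```python
# Illustrative HCp estimation from a log-normal SSD, classical vs. robust.
import math
import statistics

def hcp(toxicity, p=0.05, robust=False):
    logs = [math.log(t) for t in toxicity]
    if robust:
        mu = statistics.median(logs)
        madv = statistics.median(abs(x - mu) for x in logs)
        sigma = 1.4826 * madv          # MAD rescaled to the normal sd
    else:
        mu = statistics.mean(logs)
        sigma = statistics.stdev(logs)
    # p-th percentile of the fitted log-normal SSD
    return math.exp(statistics.NormalDist(mu, sigma).inv_cdf(p))

# one extreme, possibly spurious toxicity value drags the classical HC5
# far below the robust one
ec50s = [12.0, 15.0, 9.0, 14.0, 11.0, 0.001]
print(hcp(ec50s), hcp(ec50s, robust=True))
```

The example shows the practical point of the abstract: a single outlying species value without ecological basis can make the classical HC5 absurdly conservative, while the robust estimate stays near the bulk of the data.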

3.
This study presents a new robust estimation method that can produce a regression median hyper-plane for any data set. The robust method starts with dual variables obtained by least absolute value estimation. It then utilizes two specially designed goal programming models to obtain regression median estimators that are less sensitive to a small sample size and a skewed error distribution than least absolute value estimators. The superiority of new robust estimators over least absolute value estimators is confirmed by two illustrative data sets and a Monte Carlo simulation study.

4.
In many instances a model is based on nonexperimental data and is used for prediction. In this situation, using both the least-squares and the robust methods simultaneously can result in models with improved predictive performance.

5.
6.
Coordination efforts that access and align relevant cross‐functional expertise are regarded as an essential element of innovation success. In recent years, these efforts have been further augmented through complementary investments in information systems, which provide the technological platforms for information sharing and coordination across functional and organizational boundaries. Somewhat overlooked has been the critical mediating role of the intelligence gained through these efforts and capabilities. This study draws on the theory of complementarity to elaborate on the nature of this mediating concept. Theoretical predictions of the model are tested using instrumental variable regression analysis of data collected from a sample of publicly traded US manufacturing firms. The findings suggest that the effects of both internal and external coordination on market intelligence and supply‐chain intelligence are moderated by the firm's information system capability. The effect of both types of intelligence quality on new product development performance was contingent on market conditions, with the effects enhanced (attenuated) when conditions were dynamic (stable). The results are robust to common‐method bias, endogeneity concerns, and alternative estimation methods.

7.
An Empirical Study of Factors Affecting the Estimation Results of Interest Rate Term Structure Models    Cited by: 1 (self-citations: 1, citations by others: 0)
This paper first classifies interest rate term structure models into four categories and surveys the estimation methods used in the domestic and international literature. Using Chinese and U.S. interest rate data, the authors show that the efficiency of the interest rate market, the choice of estimation method, and the numerical optimization algorithm all affect model estimation results. The empirical results indicate that a new estimation method that uses all market interest rate data yields more accurate parameters and can eliminate arbitrage opportunities in the interest rate market; the genetic algorithm produces rather unstable estimates, the simplex method is sensitive to initial values, and the rectangular-partition method gives the most robust estimates.

8.
Computation of typical statistical sample estimates such as the median or least squares fit usually require the solution of an unconstrained optimization problem with a convex objective function, that can be solved efficiently by various methods. The presence of outliers in the data dictates the computation of a robust estimate, which can be defined as the optimum statistical estimate for a subset that contains at least half of the observations. The resulting problem is now a combinatorial optimization problem which is often computationally intractable. Classical statistical methods for multivariate location \(\varvec{\mu }\) and scatter matrix \(\varvec{\varSigma }\) estimation are based on the sample mean vector and covariance matrix, which are very sensitive in the presence of outlier observations. We propose a new method for robust location and scatter estimation which is composed of two stages. In the first stage an unbiased multivariate \(L_{1}\)-median center for all the observations is attained by a novel procedure called the least trimmed Euclidean deviations estimator. This robust median defines a coverage set of observations which is used in the second stage to iteratively compute the set of outliers which violate the correlational structure of the data set. Extensive computational experiments indicate that the proposed method outperforms existing methods in accuracy, robustness and computational time.
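The multivariate L1-median (geometric median) that anchors the first stage can be computed with the classical Weiszfeld algorithm, sketched below in pure Python. This is only the standard building block, not the paper's least trimmed Euclidean deviations estimator, which refines it.

```python
# Weiszfeld's algorithm: iteratively reweighted averaging that converges
# to the point minimizing the sum of Euclidean distances to the data.
import math

def l1_median(points, tol=1e-9, max_iter=1000):
    dim = len(points[0])
    # start from the coordinatewise mean
    y = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    for _ in range(max_iter):
        num = [0.0] * dim
        den = 0.0
        for p in points:
            d = math.dist(p, y)
            if d < tol:       # iterate coincides with a data point; stop
                return y
            w = 1.0 / d       # inverse-distance weight
            den += w
            for k in range(dim):
                num[k] += w * p[k]
        y_new = [v / den for v in num]
        if math.dist(y, y_new) < tol:
            return y_new
        y = y_new
    return y
```

Unlike the mean, this center barely moves when one observation is dragged far away, which is exactly the property the two-stage procedure relies on.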

9.
This paper is concerned with robust estimation under moment restrictions. A moment restriction model is semiparametric and distribution‐free; therefore it imposes mild assumptions. Yet it is reasonable to expect that the probability law of observations may have some deviations from the ideal distribution being modeled, due to various factors such as measurement errors. It is then sensible to seek an estimation procedure that is robust against slight perturbation in the probability measure that generates observations. This paper considers local deviations within shrinking topological neighborhoods to develop its large sample theory, so that both bias and variance matter asymptotically. The main result shows that there exists a computationally convenient estimator that achieves optimal minimax robust properties. It is semiparametrically efficient when the model assumption holds, and, at the same time, it enjoys desirable robust properties when it does not.

10.
Parameter estimation for long-memory processes with additive noise has long been avoided in empirical research. This paper applies an aggregation operator to de-noise the sample data and studies two semiparametric estimation methods: local Whittle (LW) estimation and log-periodogram (LP) regression. The results show that, compared with LP regression, the LW method resolves the parameter-selection problem of semiparametric approaches and yields consistent estimates that are unaffected by the noise. Applying LW estimation to the Chinese stock market, we find that long memory is strongest at times of major unexpected events, and is stronger after an event than before it.

11.
The article proposes and investigates the performance of two Bayesian nonparametric estimation procedures in the context of benchmark dose estimation in toxicological animal experiments. The methodology is illustrated using several existing animal dose‐response data sets and is compared with traditional parametric methods available in standard benchmark dose estimation software (BMDS), as well as with a published model‐averaging approach and a frequentist nonparametric approach. These comparisons together with simulation studies suggest that the nonparametric methods provide a lot of flexibility in terms of model fit and can be a very useful tool in benchmark dose estimation studies, especially when standard parametric models fail to fit to the data adequately.

12.
I recently discussed pitfalls in attempted causal inference based on reduced‐form regression models. I used as motivation a real‐world example from a paper by Dr. Sneeringer, which interpreted a reduced‐form regression analysis as implying the startling causal conclusion that “doubling of [livestock] production leads to a 7.4% increase in infant mortality.” This conclusion is based on: (A) fitting a reduced‐form regression model to aggregate (e.g., county‐level) data; and (B) (mis)interpreting a regression coefficient in this model as a causal coefficient, without performing any formal statistical tests for potential causation (such as conditional independence, Granger‐Sims, or path analysis tests). Dr. Sneeringer now adds comments that confirm and augment these deficiencies, while advocating methodological errors that, I believe, risk analysts should avoid if they want to reach logically sound, empirically valid, conclusions about cause and effect. She explains that, in addition to (A) and (B) above, she also performed other steps such as (C) manually selecting specific models and variables and (D) assuming (again, without testing) that hand‐picked surrogate variables are valid (e.g., that log‐transformed income is an adequate surrogate for poverty). In her view, these added steps imply that “critiques of A and B are not applicable” to her analysis and that therefore “a causal argument can be made” for “such a strong, robust correlation” as she believes her regression coefficient indicates. However, multiple wrongs do not create a right. Steps (C) and (D) exacerbate the problem of unjustified causal interpretation of regression coefficients, without rendering irrelevant the fact that (A) and (B) do not provide evidence of causality. This reply focuses on whether any statistical techniques can produce the silk purse of a valid causal inference from the sow's ear of a reduced‐form regression analysis of ecological data. We conclude that Dr. Sneeringer's analysis provides no valid indication that air pollution from livestock operations causes any increase in infant mortality rates. More generally, reduced‐form regression modeling of aggregate population data—no matter how it is augmented by fitting multiple models and hand‐selecting variables and transformations—is not adequate for valid causal inference about health effects caused by specific, but unmeasured, exposures.

13.
Product-concept testing is a popular activity in marketing research. Often the number of new product/service concepts under study far exceeds the time available for any single respondent. Respondents therefore may receive only a subset of the concepts comprising the total design. Researchers are interested in making plausible imputations for the missing evaluations of any given respondent. This paper proposes a model and an iterative estimation procedure to impute missing entries for each evaluator. The model and the procedure incorporate (1) the internal structure of the response matrix and (2) an ancillary matrix of (nonmissing) respondent background data; they also (3) allow for individual differences in respondents' uses of the numerical rating scale. The model is applied to both real and synthetic data. Suggestions also are given on how the data imputations may be used in market segmentation and product-line decisions.
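A stripped-down sketch of the iterative-imputation idea for a respondents-by-concepts rating matrix: missing cells are repeatedly refilled from an additive fit (respondent effect plus concept effect), which crudely captures individual scale use. This is my own simplification under stated assumptions; the paper's model also exploits respondent background data, which is omitted here.

```python
# Iteratively impute None cells from row and column means of the
# completed matrix; the fixed point is the additive-model prediction.
def impute(matrix, n_iter=50):
    m = [row[:] for row in matrix]
    missing = [(i, j) for i, row in enumerate(m)
               for j, v in enumerate(row) if v is None]
    # initialize missing cells with the grand mean of observed entries
    obs = [v for row in m for v in row if v is not None]
    start = sum(obs) / len(obs)
    for i, j in missing:
        m[i][j] = start
    for _ in range(n_iter):
        grand = sum(sum(row) for row in m) / (len(m) * len(m[0]))
        row_means = [sum(row) / len(row) for row in m]
        col_means = [sum(row[j] for row in m) / len(m)
                     for j in range(len(m[0]))]
        for i, j in missing:
            m[i][j] = row_means[i] + col_means[j] - grand
    return m
```

For ratings with an exactly additive structure the iteration converges geometrically to the value the additive model predicts for the missing cell; observed entries are never altered.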

14.
The robustness of linear programming regression estimators is examined where the disturbance terms are normally distributed and there are observation errors in the explanatory variables. These errors are occasional gross biases between one set of observations and another. The simulation of short series data offers preliminary evidence that when these biases have a non-zero mean, MSAE estimation is more robust than least squares.
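A small illustration of the comparison (not the study's simulation): with one grossly biased observation, the minimum-sum-of-absolute-errors (MSAE) slope of a through-the-origin line stays near the truth while least squares is pulled away. For a line through the origin the MSAE optimum occurs at one of the ratios y_i/x_i, so a search over those candidates suffices; the data below are invented.

```python
# Least-squares vs. MSAE slope for y = b*x fitted through the origin.
def ls_slope(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def msae_slope(xs, ys):
    # the MSAE optimum for a one-parameter fit lies at a data ratio
    candidates = [y / x for x, y in zip(xs, ys)]
    def sae(b):
        return sum(abs(y - b * x) for x, y in zip(xs, ys))
    return min(candidates, key=sae)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.0, 60.0]   # last y carries a gross recording bias
print(ls_slope(xs, ys), msae_slope(xs, ys))
```

The least-squares slope is dragged well above the true value of roughly 2 by the single biased point, while the MSAE slope ignores it, which is the qualitative pattern the abstract reports.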

15.
Four discriminant models were compared in a simulation study: Fisher's linear discriminant function [14], Smith's quadratic discriminant function [34], the logistic discriminant model, and a model based on linear programming [17]. The study was conducted to estimate expected rates of misclassification for these four procedures when observations were sampled from a variety of normal and nonnormal distributions. In contrast to previous research, data were taken from four types of kurtotic population distributions. The results indicate the four discriminant procedures are robust toward data from many types of distributions. The misclassification rates for both the logistic discriminant model and the formulation based on linear programming consistently decreased as the kurtosis in the data increased. The decreases, however, were of small magnitude. None of these procedures yielded statistically significant lower rates of misclassification under nonnormality. The quadratic discriminant function produced significantly lower error rates when the variances across groups were heterogeneous.

16.
Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses robust regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust regression is used because it identifies and is resistant to the effects of outlying observations, key components in the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.

17.
We examine challenges to estimation and inference when the objects of interest are nondifferentiable functionals of the underlying data distribution. This situation arises in a number of applications of bounds analysis and moment inequality models, and in recent work on estimating optimal dynamic treatment regimes. Drawing on earlier work relating differentiability to the existence of unbiased and regular estimators, we show that if the target object is not differentiable in the parameters of the data distribution, there exist no estimator sequences that are locally asymptotically unbiased or α‐quantile unbiased. This places strong limits on estimators, bias correction methods, and inference procedures, and provides motivation for considering other criteria for evaluating estimators and inference procedures, such as local asymptotic minimaxity and one‐sided quantile unbiasedness.

18.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job‐search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long‐standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood‐based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.

19.
Important estimation problems in econometrics like estimating the value of a spectral density at frequency zero, which appears in the econometrics literature in the guises of heteroskedasticity and autocorrelation consistent variance estimation and long run variance estimation, are shown to be “ill‐posed” estimation problems. A prototypical result obtained in the paper is that the minimax risk for estimating the value of the spectral density at frequency zero is infinite regardless of sample size, and that confidence sets are close to being uninformative. In this result the maximum risk is over commonly used specifications for the set of feasible data generating processes. The consequences for inference on unit roots and cointegration are discussed. Similar results for persistence estimation and estimation of the long memory parameter are given. All these results are obtained as special cases of a more general theory developed for abstract estimation problems, which readily also allows for the treatment of other ill‐posed estimation problems such as, e.g., nonparametric regression or density estimation.

20.
The delta method and continuous mapping theorem are among the most extensively used tools in asymptotic derivations in econometrics. Extensions of these methods are provided for sequences of functions that are commonly encountered in applications and where the usual methods sometimes fail. Important examples of failure arise in the use of simulation‐based estimation methods such as indirect inference. The paper explores the application of these methods to the indirect inference estimator (IIE) in first order autoregressive estimation. The IIE uses a binding function that is sample size dependent. Its limit theory relies on a sequence‐based delta method in the stationary case and a sequence‐based implicit continuous mapping theorem in unit root and local to unity cases. The new limit theory shows that the IIE achieves much more than (partial) bias correction. It changes the limit theory of the maximum likelihood estimator (MLE) when the autoregressive coefficient is in the locality of unity, reducing the bias and the variance of the MLE without affecting the limit theory of the MLE in the stationary case. Thus, in spite of the fact that the IIE is a continuously differentiable function of the MLE, the limit distribution of the IIE is not simply a scale multiple of the MLE, but depends implicitly on the full binding function mapping. The unit root case therefore represents an important example of the failure of the delta method and shows the need for an implicit mapping extension of the continuous mapping theorem.
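For reference, the classical first-order delta method that the paper generalizes is stated for a fixed transformation g; the difficulty the abstract describes arises because the IIE's binding function depends on the sample size n, so this fixed-g statement need not apply.

```latex
% Classical delta method: for a fixed g differentiable at \theta with
% g'(\theta) \neq 0, if
%   \sqrt{n}\,(\hat\theta_n - \theta) \xrightarrow{d} N(0, \sigma^2),
% then
\sqrt{n}\,\bigl(g(\hat\theta_n) - g(\theta)\bigr)
  \;\xrightarrow{d}\; N\!\bigl(0,\; g'(\theta)^{2}\,\sigma^{2}\bigr).
```

When g is replaced by a sequence g_n (the sample-size-dependent binding function), the limit can depend on the whole sequence rather than a single derivative, which is the sequence-based extension the paper develops.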

