Similar documents (20 results)
1.
This paper develops the fixed-smoothing asymptotics in a two-step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second-step GMM criterion function converges weakly to a random matrix and the two-step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed-smoothing asymptotic distribution are high-order correct under the conventional increasing-smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed-smoothing approximation are much more accurate in size than existing tests.
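The noncentral-F approximation in the abstract above is easy to explore numerically. The sketch below (scipy; the number of restrictions p, basis functions K, and the noncentrality are hypothetical choices, not values from the paper) contrasts a chi-square critical value from conventional increasing-smoothing asymptotics with a noncentral-F quantile of the kind used under fixed smoothing:

```python
from scipy import stats

# Hypothetical settings: p restrictions, K orthonormal basis functions in the
# series covariance estimator, and a small noncentrality parameter.
p, K, alpha, nc = 2, 14, 0.05, 0.5

# Conventional approximation: Wald/p compared against a chi-square(p)/p quantile.
chi2_cv = stats.chi2.ppf(1 - alpha, df=p) / p

# Fixed-smoothing-style approximation: a noncentral F quantile whose
# denominator degrees of freedom shrink with the smoothing parameter K.
f_cv = stats.ncf.ppf(1 - alpha, dfn=p, dfd=K - p + 1, nc=nc)
```

The F-based critical value is larger than the chi-square one, which is exactly why fixed-smoothing tests reject less aggressively and tend to be better sized in finite samples.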

2.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F-statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are only consistent under cointegration. This statistic leads to a Wald-type test of cointegration when combined with a narrow band GLS-type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short-run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. In our simulations we show that the method has power comparable to customary procedures under the unit root cointegration setup, and maintains good properties in a general framework where other methods may fail. We illustrate our method by testing the cointegration hypothesis of nominal GNP and simple-sum (M1, M2, M3) monetary aggregates.

3.
We propose a novel technique to boost the power of testing a high-dimensional null hypothesis H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives, such as thresholding and extreme value tests, on the other hand, require either stringent conditions or the bootstrap to derive the null distribution and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a "power enhancement component," which is zero under the null hypothesis with high probability but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing factor pricing models and validating cross-sectional independence in panel data models.
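A minimal numpy sketch of the screening idea described above (not the paper's exact statistic; the screening threshold and the standardized quadratic form below are illustrative assumptions):

```python
import numpy as np

def power_enhanced_statistic(theta_hat, se, n):
    """Sketch of a screening-based power enhancement.  theta_hat and se are
    componentwise estimates and standard errors of a high-dimensional
    parameter; n is the sample size."""
    N = theta_hat.size
    t = theta_hat / se
    # Screening threshold growing slowly with dimension: components that
    # survive it are, with high probability, genuine violations of the null.
    delta = np.sqrt(2.0 * np.log(N) * np.log(np.log(n)))
    screened = np.abs(t) > delta
    J0 = np.sqrt(N) * np.sum(t[screened] ** 2)   # zero under the null w.h.p.
    J1 = (np.sum(t ** 2) - N) / np.sqrt(2 * N)   # pivotal standardized quadratic form
    return J1 + J0, int(screened.sum())
```

With θ = 0 the screened set is empty and only the pivotal part remains; a single large component immediately dominates the combined statistic, which is the power-enhancement effect.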

4.
In acute toxicity testing, organisms are continuously exposed to progressively increasing concentrations of a chemical and deaths of test organisms are recorded at several selected times. The results of the test are traditionally summarized by a dose-response curve, and the time course of effect is usually ignored for lack of a suitable model. A model which integrates the combined effects of dose and exposure duration on response is derived from the biological mechanisms of aquatic toxicity, and a statistically efficient approach for estimating acute toxicity by fitting the proposed model is developed in this paper. The proposed procedure has been implemented as software, and a typical data set is used to illustrate the theory and procedure. The new statistical technique is also tested against a database covering a variety of chemicals and fish species.

5.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

6.
In this paper we revisit the results in Caner and Hansen (2001), where the authors obtained novel limiting distributions of Wald type test statistics for testing for the presence of threshold nonlinearities in autoregressive models containing unit roots. Using the same framework, we obtain a new formulation of the limiting distribution of the Wald statistic for testing for threshold effects, correcting an expression that appeared in the main theorem presented by Caner and Hansen. Subsequently, we show that under a particular scenario that excludes stationary regressors such as lagged dependent variables and despite the presence of a unit root, this same limiting random variable takes a familiar form that is free of nuisance parameters and already tabulated in the literature, thus removing the need to use bootstrap based inferences. This is a novel and unusual occurrence in this literature on testing for the presence of nonlinear dynamics.

7.
On the ordering property of dichotomous group decision rules
Li Wu, 《管理科学》 (Journal of Management Science), 2005, 8(5): 10-14
Karotkin et al. found that the set of weighted majority decision rules for dichotomous group decision making possesses an ordering property, but could not explain why. This paper introduces the concepts of a rule chain and a rule distance function, and shows that when a set of decision rules forms a rule chain, the set possesses the ordering property, thereby explaining the phenomenon. Whether a set of rules forms a rule chain can be determined by computing the rule distances between the rules. The conclusions are further illustrated through the analysis of a concrete example.

8.
We propose a novel statistic for conducting joint tests on all the structural parameters in instrumental variables regression. The statistic is straightforward to compute and equals a quadratic form of the score of the concentrated log-likelihood. It therefore attains its minimal value of zero at the maximum likelihood estimator. The statistic has a χ2 limiting distribution with a degrees-of-freedom parameter equal to the number of structural parameters. The limiting distribution does not depend on nuisance parameters. The statistic overcomes the deficiencies of the Anderson–Rubin statistic, whose limiting distribution has a degrees-of-freedom parameter equal to the number of instruments, and of the likelihood-based Wald, likelihood ratio, and Lagrange multiplier statistics, whose limiting distributions depend on nuisance parameters. Size and power comparisons reveal that the statistic is an (asymptotically) size-corrected likelihood ratio statistic. We apply the statistic to the Angrist–Krueger (1991) data and find results similar to those in Staiger and Stock (1997).
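The degrees-of-freedom contrast the abstract emphasizes can be made concrete with scipy: the score-based statistic is referred to a chi-square with m degrees of freedom (the number of structural parameters), while the Anderson–Rubin statistic uses k (the number of instruments). The numbers below are hypothetical, not from the paper:

```python
from scipy import stats

m, k, alpha = 1, 30, 0.05   # hypothetical: 1 structural parameter, 30 instruments

# Critical value for the score-based statistic: chi-square(m).
cv_score = stats.chi2.ppf(1 - alpha, df=m)

# Critical value for the Anderson-Rubin statistic: chi-square(k).
cv_ar = stats.chi2.ppf(1 - alpha, df=k)

# With many instruments the AR critical value is an order of magnitude
# larger, which is the power cost the proposed statistic avoids.
```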

9.
Neural network techniques are widely used in solving pattern recognition or classification problems. However, when statistical data are used in supervised training of a neural network employing the back-propagation least mean square algorithm, the behavior of the classification boundary during training is often unpredictable. This research suggests the application of monotonicity constraints to the back propagation learning algorithm. When the training sample set is preprocessed by a linear classification function, neural network performance and efficiency can be improved in classification applications where the feature vector is related monotonically to the pattern vector. Since most classification problems in business possess monotonic properties, this technique is useful in those problems where any assumptions about the properties of the data are inappropriate.

10.
Risk Analysis, 2018, 38(4): 653-665
Border inspection, and the challenge of deciding which of the tens of millions of consignments that arrive should be inspected, is a perennial problem for regulatory authorities. The objective of these inspections is to minimize the risk of contraband entering the country. As an example, for regulatory authorities in charge of biosecurity material, consignments of goods are classified before arrival according to their economic tariff number. This classification, perhaps along with other information, is used as a screening step to determine whether further biosecurity intervention, such as inspection, is necessary. Other information associated with consignments includes details such as the country of origin, supplier, and importer. The choice of which consignments to inspect has typically been informed by historical records of intercepted material. Fortunately for regulators, interception is a rare event; however, this sparsity undermines the utility of historical records for deciding which containers to inspect. In this article, we report on an analysis that uses more detailed information to inform inspection. Using quarantine biosecurity as a case study, we create statistical profiles using generalized linear mixed models and compare different model specifications with historical information alone, demonstrating the utility of a statistical modeling approach. We also demonstrate some graphical model summaries that provide managers with insight into pathway governance.

11.
A number of recent studies have compared the performance of neural networks (NNs) to a variety of statistical techniques for the classification problem in discriminant analysis. The empirical results of these comparative studies indicate that while NNs often outperform the more traditional statistical approaches to classification, this is not always the case. Thus, decision makers interested in solving classification problems are left in a quandary as to what tool to use on a particular data set. We present a new approach to solving classification problems by combining the predictions of a well-known statistical tool with those of an NN to create composite predictions that are more accurate than either of the individual techniques used in isolation.
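A minimal sketch of the composite idea: blend the class-1 probabilities from a statistical classifier and a neural network before thresholding. The equal weight is a hypothetical choice for illustration, not necessarily the paper's combination scheme:

```python
import numpy as np

def composite_predict(p_stat, p_nn, w=0.5):
    """Blend class-1 probabilities from a statistical classifier (p_stat)
    and a neural network (p_nn) with weight w, then threshold at 0.5."""
    p = w * np.asarray(p_stat) + (1 - w) * np.asarray(p_nn)
    return (p >= 0.5).astype(int)
```

For instance, probabilities (0.9, 0.4, 0.2) from the statistical model and (0.6, 0.7, 0.1) from the network blend to (0.75, 0.55, 0.15), so the composite assigns the second case to class 1 even though the statistical model alone would not.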

12.
Omega, 2001, 29(1): 1-18
A new use of the nonparametric Kruskal–Wallis rank test is proposed in this study. The statistic examines whether or not any frontier shift occurs among the observed periods. To document its practicality, the proposed statistic is incorporated into the framework of Window Malmquist Analysis (WMA), which is structured by combining Data Envelopment Analysis (DEA) window analysis with the Malmquist index approach. As an important case study, this research applies the new technique to examine the performance of Japanese postal services from 1983 to 1997. Two policy implications are derived from the empirical study.
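The frontier-shift test itself is a standard Kruskal–Wallis comparison of efficiency scores grouped by period, available in scipy. The scores below are made-up illustrations, not DEA results for the Japanese postal services:

```python
from scipy import stats

# Hypothetical efficiency/Malmquist scores for three observed periods.
# A small p-value indicates the score distributions differ across periods,
# which the paper reads as evidence of a frontier shift.
period1 = [0.91, 0.88, 0.95, 0.90, 0.93]
period2 = [0.84, 0.80, 0.86, 0.79, 0.83]
period3 = [0.70, 0.74, 0.69, 0.72, 0.71]
H, pval = stats.kruskal(period1, period2, period3)
```

With the fully separated groups above the rank statistic is at its maximum for this design (H = 12.5), and the null of no frontier shift is clearly rejected.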

13.
The Campbell and Fiske criteria for assessing the construct validity of multitrait-multimethod (MTMM) matrices have had a long history of use. While various statistical techniques, including ANOVA, have attempted to provide rigor to the MTMM matrix design, numerous problems remain unsolved. Part of the problem in using an MTMM matrix is the assumption of measurement independence. This study illustrates the misleading inferences that often arise from MTMM analysis when method variance overlap is not accurately assessed. Three questionnaires were designed that were not method independent. Traditional procedures for assessing MTMM matrices suggested the three scaling formats used were not burdened with unusual method variance. A reanalysis of the MTMM matrix employing a confirmatory factor analysis technique illustrated that method variance was a problem. Finally, the need for studies that concentrate on the nature of method variance, its causes and effects, is discussed.

14.
There are numerous variable selection rules in classical discriminant analysis. These rules enable a researcher to distinguish significant variables from nonsignificant ones and thus provide a parsimonious classification model based solely on significant variables. Prominent among such rules are the forward and backward stepwise variable selection criteria employed in statistical software packages such as Statistical Package for the Social Sciences and BMDP Statistical Software. No such criterion currently exists for linear programming (LP) approaches to discriminant analysis. In this paper, a criterion is developed to distinguish significant from nonsignificant variables for use in LP models. This criterion is based on the "jackknife" methodology. Examples are presented to illustrate implementation of the proposed criterion.
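A rough sketch of a jackknife-style criterion: drop a candidate variable, recompute leave-one-out accuracy, and retain the variable only when accuracy deteriorates. The nearest-centroid rule below stands in for the paper's LP discriminant model, and the keep/drop rule is a hypothetical simplification, not the paper's exact criterion:

```python
import numpy as np

def jackknife_accuracy(X, y, cols):
    """Leave-one-out accuracy of a nearest-centroid classifier using only
    the feature columns in `cols` (a crude stand-in for an LP discriminant)."""
    X = np.asarray(X, float)[:, cols]
    y = np.asarray(y)
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # jackknife: hold out case i
        c0 = X[mask & (y == 0)].mean(axis=0)   # class centroids without case i
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

def variable_significant(X, y, full_cols, drop_col, tol=0.0):
    """A variable is deemed significant when dropping it lowers leave-one-out
    accuracy by more than `tol` (hypothetical rule for illustration)."""
    reduced = [c for c in full_cols if c != drop_col]
    return jackknife_accuracy(X, y, full_cols) - jackknife_accuracy(X, y, reduced) > tol
```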

15.
Although the dual resource-constrained (DRC) system has been studied, the decision rule used to determine when workers are eligible for transfer largely has been ignored. Some earlier studies examined the impact of this rule [5] [12] [15] but did not include labor-transfer times in their models. Gunther [6] incorporated labor-transfer times into his model, but the model involved only one worker and two machines. No previous study has examined decision rules that initiate labor transfers based on labor needs ("pull" rules). Labor transfers always have been initiated based on lack of need ("push" rules). This study examines three "pull" variations of the "When" labor-assignment decision rule. It compares their performances to the performances of two "push" rules and a comparable machine-limited system. A nonparametric statistical test, Jonckheere's S statistic, is used to test for significance of the rankings of the rules; a robust parametric multiple-comparison statistical test, Tukey's B statistic, is used to test the differences. One "pull" and one "push" decision rule provide similar performances and top the rankings consistently. Decision rules for determining when labor should be transferred from one work area to another are valuable aids for managers. This is especially true for the ever-increasing number of managers operating in organizations that recognize the benefits of a cross-trained work force. Recently there has been much interest in cross-training workers, perhaps because one of the mechanisms used in just-in-time systems to handle unbalanced work loads is to have cross-trained workers who can be shifted as demand dictates [8]. If management is to take full advantage of a cross-trained work force, it needs to know when to transfer workers.
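Jonckheere's statistic for ordered alternatives can be sketched directly from its definition as pairwise between-group comparisons (one common formulation; the paper's S may be normalized differently):

```python
import itertools

def jonckheere_statistic(groups):
    """Jonckheere-Terpstra-style count: for each ordered pair of groups,
    count observations in the later group that exceed observations in the
    earlier group (ties count one half, a common convention).  Large values
    support the hypothesized ordering of the groups."""
    J = 0.0
    for ga, gb in itertools.combinations(groups, 2):
        for a in ga:
            for b in gb:
                J += 1.0 if b > a else (0.5 if b == a else 0.0)
    return J
```

When the groups are perfectly ordered as hypothesized, J equals the total number of between-group pairs; when the ordering is perfectly reversed, J is zero.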

16.
Samuel Eilon, R. V. Mallya, Omega, 1985, 13(5): 429-433
The conventional method of controlling inventories of relatively fast moving items in a store is based on an ABC classification of the stock items. An analysis is presented for the extension of this method to determine the number of categories that should be employed and the way in which the different items should be allocated to these categories. A case study is briefly described to illustrate the application of this methodology.
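The baseline ABC split that the paper extends can be sketched in a few lines; the 80%/95% cumulative-value cut-offs are conventional defaults, not the category boundaries derived in the paper:

```python
def abc_classify(annual_values, breaks=(0.80, 0.95)):
    """Classic ABC split by cumulative share of annual usage value: items
    covering the first 80% of value are A, the next 15% are B, the rest C."""
    total = sum(annual_values.values())
    labels, cum = {}, 0.0
    # Rank items by descending value and accumulate their value share.
    for item, v in sorted(annual_values.items(), key=lambda kv: -kv[1]):
        cum += v / total
        labels[item] = "A" if cum <= breaks[0] else ("B" if cum <= breaks[1] else "C")
    return labels
```

The paper's contribution is to treat both the number of categories and the allocation of items to them as decision variables rather than fixing this three-way split in advance.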

17.
Zhang Tingting, He Changzheng, Xiao Jin, 《管理评论》 (Management Review), 2012, (6): 83-87, 123
Classification has become an important method and technique in managerial decision making. Because real-world customer data are often incomplete, studying customer classification with incomplete data is of practical significance. After analyzing existing approaches to handling incomplete data in classification, this paper proposes DCES-ID, a classification method for incomplete data based on dynamic classifier ensemble selection. Classification experiments on UCI customer classification data sets and an empirical analysis on a securities firm's customer data show that, compared with six existing classification algorithms, DCES-ID achieves higher classification accuracy and stability and classifies customers more effectively.

18.
The purpose of this research is to show the usefulness of three relatively simple nonlinear classification techniques for policy-capturing research where linear models have typically been used. This study uses 480 cases to assess the decision-making process used by 24 experienced national bank examiners in classifying commercial loans as acceptable or questionable. The results from multiple discriminant analysis (a linear technique) are compared to those of chi-squared automatic interaction detector (CHAID) analysis (a search technique), log-linear analysis, and logit analysis. Results show that while the four techniques are equally accurate in predicting loan classification, CHAID and log-linear analysis enable the researcher to analyze the decision-making structure and examine the "human" variable within the decision-making process. Consequently, if the sole purpose of research is to predict the decision maker's decisions, any one of the four techniques is equally useful. If, however, the purpose is to analyze the decision-making process as well as to predict decisions, then CHAID or log-linear techniques are more useful than linear model techniques.

19.
This paper develops an asymptotic theory of inference for an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distributions of Wald tests for a threshold are nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap-based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two-parameter empirical process that converges weakly to a two-parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two-parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short-run dynamics rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.
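A stripped-down sketch of the bootstrap approach to the threshold test: a sup-Wald statistic over a grid of candidate thresholds in an AR(1), with a residual bootstrap under the linear null. This omits the deterministic terms, heteroskedasticity corrections, and unit-root complications handled in the actual paper; the trimming fractions and grid size are hypothetical choices:

```python
import numpy as np

def sup_wald_tar(y, grid_q=(0.15, 0.85), n_grid=20):
    """Sup-Wald test for a threshold effect in an AR(1): compare the linear
    fit with the best two-regime fit over a trimmed grid of thresholds."""
    y = np.asarray(y, float)
    x, z = y[:-1], y[1:]
    n = x.size
    a = x @ z / (x @ x)                     # AR(1) slope under the linear null
    ssr0 = np.sum((z - a * x) ** 2)         # null sum of squared residuals
    best = 0.0
    for g in np.quantile(x, np.linspace(*grid_q, n_grid)):
        lo = x <= g
        if lo.sum() < 5 or (~lo).sum() < 5:
            continue                        # skip near-degenerate regimes
        ssr1 = 0.0
        for m in (lo, ~lo):                 # separate slope in each regime
            xm, zm = x[m], z[m]
            am = xm @ zm / (xm @ xm)
            ssr1 += np.sum((zm - am * xm) ** 2)
        best = max(best, n * (ssr0 - ssr1) / ssr1)
    return best

def bootstrap_pvalue(y, n_boot=199, seed=0):
    """Residual bootstrap p-value under the linear AR(1) null."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    x, z = y[:-1], y[1:]
    a = x @ z / (x @ x)
    resid = z - a * x
    stat = sup_wald_tar(y)
    count = 0
    for _ in range(n_boot):
        e = rng.choice(resid, size=resid.size, replace=True)
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, y.size):          # regenerate series under the null
            yb[t] = a * yb[t - 1] + e[t - 1]
        count += sup_wald_tar(yb) >= stat
    return (1 + count) / (1 + n_boot)
```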

20.
In classification problems, class imbalance biases classifier training and lowers the diagnostic sensitivity for minority-class samples. The Mahalanobis–Taguchi system (MTS) is a multivariate diagnosis and prediction technique that constructs a continuous measurement scale rather than learning directly from the training sample; this property is expected to make it insensitive to the data distribution and thus able to overcome class imbalance. Addressing the deficiencies of the MTS threshold computation and the requirements of imbalanced-data classification, this paper develops a probability threshold model for computing the MTS threshold. It further replaces the orthogonal arrays and signal-to-noise ratios of standard MTS with an optimization model for screening key variables, solved by an omnidirectional optimization algorithm. Experiments on eight UCI data sets show that the improved MTS not only classifies imbalanced data well but also screens key variables, with a clear dimensionality-reduction effect.
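A sketch of the two MTS ingredients the abstract discusses: the Mahalanobis scale built from the normal reference group, and a probability-style threshold chosen to trade off the two error rates. The grid-search rule is a hypothetical illustration of the idea, not the paper's probability threshold model:

```python
import numpy as np

def mahalanobis_distances(X_normal, X):
    """MTS-style measurement scale: squared Mahalanobis distance of each row
    of X from the 'normal' reference group, divided by the dimension."""
    mu = X_normal.mean(axis=0)
    cov = np.cov(X_normal, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d) / X.shape[1]

def probability_threshold(d_normal, d_abnormal, prior_abnormal=0.5):
    """Hypothetical cut-off rule: grid-search the threshold minimizing
    prior-weighted misclassification of the two groups."""
    grid = np.linspace(min(d_normal.min(), d_abnormal.min()),
                       max(d_normal.max(), d_abnormal.max()), 200)
    cost = [prior_abnormal * np.mean(d_abnormal < t)
            + (1 - prior_abnormal) * np.mean(d_normal >= t) for t in grid]
    return grid[int(np.argmin(cost))]
```

Cases whose distance exceeds the threshold are flagged as abnormal; weighting the prior toward the minority class pushes the cut-off down, which is one simple way a probability model can counter imbalance.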
