Similar Documents
20 similar documents found
1.
We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include interval‐identified scalar parameters, best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using the results of the mathematics literature on SVRVs, we show that this estimator converges in probability to the population identification region with respect to the Hausdorff distance. We then show that the Hausdorff distance and the directed Hausdorff distance between the population identification region and the estimator, when properly normalized by √n, converge in distribution to functions of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate these limiting distributions. Using arguments similar to those applied for vector valued random variables, we develop a methodology to test assumptions about the true identification region and its subsets. We show that these results can be used to construct a confidence collection and a directed confidence collection. These are (respectively) collections of sets that, when specified as a null hypothesis for the true value (a subset of values) of the population identification region, cannot be rejected by our tests.
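
To make the set-distance machinery concrete, here is a minimal sketch (with hypothetical point sets standing in for identification regions) of the Hausdorff and directed Hausdorff distances used above; it is purely illustrative and not part of the authors' procedure.

```python
import numpy as np

def directed_hausdorff(A, B):
    # sup over a in A of the distance from a to the set B (1-D point sets)
    return max(np.abs(B - a).min() for a in A)

def hausdorff(A, B):
    # symmetric Hausdorff distance: the larger of the two directed distances
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Hypothetical example: an estimated region [0.1, 0.9] vs. a candidate region [0.0, 1.0],
# each represented by a grid of points.
A = np.linspace(0.1, 0.9, 81)
B = np.linspace(0.0, 1.0, 101)
print(directed_hausdorff(A, B))  # ~0: every point of A is close to B
print(hausdorff(A, B))           # ~0.1: the endpoints of B are 0.1 away from A
```

Inference then rests on the sampling distribution of such distances between the estimated and hypothesized regions, suitably normalized.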

2.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

3.
This paper examines three distinct hypothesis testing problems that arise in the context of identification of some nonparametric models with endogeneity. The first hypothesis testing problem we study concerns testing necessary conditions for identification in some nonparametric models with endogeneity involving mean independence restrictions. These conditions are typically referred to as completeness conditions. The second and third hypothesis testing problems we examine concern testing for identification directly in some nonparametric models with endogeneity involving quantile independence restrictions. For each of these hypothesis testing problems, we provide conditions under which any test will have power no greater than size against any alternative. In this sense, we conclude that no nontrivial tests for these hypothesis testing problems exist.

4.
This paper develops a framework for performing estimation and inference in econometric models with partial identification, focusing particularly on models characterized by moment inequalities and equalities. Applications of this framework include the analysis of game‐theoretic models, revealed preference restrictions, regressions with missing and corrupted data, auction models, structural quantile regressions, and asset pricing models. Specifically, we provide estimators and confidence regions for the set of minimizers Θ_I of an econometric criterion function Q(θ). In applications, the criterion function embodies testable restrictions on economic models. A parameter value θ that describes an economic model satisfies these restrictions if Q(θ) attains its minimum at this value. Interest therefore focuses on the set of minimizers, called the identified set. We use the inversion of the sample analog, Q_n(θ), of the population criterion, Q(θ), to construct estimators and confidence regions for the identified set, and develop consistency, rates of convergence, and inference results for these estimators and regions. To derive these results, we develop methods for analyzing the asymptotic properties of sample criterion functions under set identification.
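
As a stylized illustration of inverting a sample criterion function (not the paper's estimator), the sketch below treats a scalar mean that is only restricted to lie between the means of interval outcome data, and collects all parameter values whose criterion is within a slackness level of the minimum; the data, the criterion, and the slackness choice are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical interval-outcome data: the parameter is only restricted to [E(y_lo), E(y_hi)].
y_lo = rng.normal(0.0, 1.0, n)
y_hi = y_lo + rng.uniform(0.5, 1.5, n)

def Q_n(theta):
    # Sample criterion: penalizes violations of E(y_lo) <= theta <= E(y_hi)
    return max(y_lo.mean() - theta, 0.0) ** 2 + max(theta - y_hi.mean(), 0.0) ** 2

grid = np.linspace(-2.0, 3.0, 1001)
q = np.array([Q_n(t) for t in grid])
c_n = np.log(n) / n                   # slowly shrinking slackness level (illustrative choice)
est_set = grid[q <= q.min() + c_n]    # level-set ("inversion") estimator of the identified set
print(est_set.min(), est_set.max())   # endpoints of the estimated identified set
```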

5.
We propose a novel technique to boost the power of testing a high‐dimensional vector H_0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high‐dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component,” which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing the factor pricing models and validating the cross‐sectional independence in panel data models.
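
A rough sketch of the idea follows, assuming independent components with known standard errors; the screening threshold and critical value below are common illustrative choices, not necessarily the paper's exact tuning.

```python
import numpy as np
from scipy.stats import norm

def power_enhanced_test(theta_hat, se, n, alpha=0.05):
    # Component-wise t-statistics for H0: theta = 0 (independent components assumed)
    p = theta_hat.size
    t = theta_hat / se
    J1 = (np.sum(t ** 2) - p) / np.sqrt(2 * p)      # standardized Wald statistic, asymptotically pivotal
    delta = np.sqrt(np.log(p)) * np.log(np.log(n))  # slowly diverging screening threshold (illustrative)
    J0 = np.sqrt(p) * np.sum((t ** 2) * (np.abs(t) > delta))   # power enhancement component
    crit = norm.ppf(1 - alpha)                      # critical value taken from the pivotal part alone
    stat = J0 + J1
    return stat, stat > crit

# Hypothetical sparse alternative: 3 of 500 components are nonzero.
rng = np.random.default_rng(1)
p, n = 500, 1000
theta = np.zeros(p)
theta[:3] = 0.4
se = np.full(p, 1 / np.sqrt(n))
theta_hat = theta + rng.normal(0.0, 1.0, p) * se
print(power_enhanced_test(theta_hat, se, n))
```

Under the null, J0 is zero with high probability, so the size is driven by the pivotal statistic J1; under the sparse alternative the screened terms push J0 far above the critical value.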

6.
We propose a generalized method of moments (GMM) Lagrange multiplier statistic, i.e., the K statistic, that uses a Jacobian estimator based on the continuous updating estimator that is asymptotically uncorrelated with the sample average of the moments. Its asymptotic χ² distribution therefore holds under a wider set of circumstances, like weak instruments, than the standard full rank case for the expected Jacobian under which the asymptotic χ² distributions of the traditional statistics are valid. The behavior of the K statistic can be spurious around inflection points and maxima of the objective function. This inadequacy is overcome by combining the K statistic with a statistic that tests the validity of the moment equations and by an extension of Moreira's (2003) conditional likelihood ratio statistic toward GMM. We conduct a power comparison to test for the risk aversion parameter in a stochastic discount factor model and construct its confidence set for observed consumption growth and asset return series.

7.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F‐statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are only consistent under cointegration. This statistic leads to a Wald‐type test of cointegration when combined with a narrow band GLS‐type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. In our simulations we show that the method has comparable power to customary procedures under the unit root cointegration setup, and maintains good properties in a general framework where other methods may fail. We illustrate our method testing the cointegration hypothesis of nominal GNP and simple‐sum (M1, M2, M3) monetary aggregates.

8.
Multivariate designs have been useful in testing models with more than one output variable. This paper explains the use of the step-down F statistic, a technique that allows a more refined analysis than does the traditional multivariate or univariate F test by examining sequential interactions among dependent variables in a multiple-response set. An illustration is presented within the context of a laboratory experiment designed to examine a manipulation effect on a three-variable causal chain. The example illustrates the misleading interpretation that could result without the information provided by the step-down statistic.

9.
This paper studies the behavior, under local misspecification, of several confidence sets (CSs) commonly used in the literature on inference in moment (in)equality models. We propose the amount of asymptotic confidence size distortion as a criterion to choose among competing inference methods. This criterion is then applied to compare across test statistics and critical values employed in the construction of CSs. We find two important results under weak assumptions. First, we show that CSs based on subsampling and generalized moment selection (Andrews and Soares (2010)) suffer from the same degree of asymptotic confidence size distortion, despite the fact that asymptotically the latter can lead to CSs with strictly smaller expected volume under correct model specification. Second, we show that the asymptotic confidence size of CSs based on the quasi‐likelihood ratio test statistic can be an arbitrarily small fraction of the asymptotic confidence size of CSs based on the modified method of moments test statistic.

10.
This paper provides positive testability results for the identification condition in a nonparametric instrumental variable model, known as completeness, and it links the outcome of the test to properties of an estimator of the structural function. In particular, I show that the data can provide empirical evidence in favor of both an arbitrarily small identified set as well as an arbitrarily small asymptotic bias of the estimator. This is the case for a large class of complete distributions as well as certain incomplete distributions. As a byproduct, the results can be used to estimate an upper bound of the diameter of the identified set and to obtain an easy to report estimator of the identified set itself.

11.
In acute toxicity testing, organisms are continuously exposed to progressively increasing concentrations of a chemical and deaths of test organisms are recorded at several selected times. The results of the test are traditionally summarized by a dose-response curve, and the time course of effect is usually ignored for lack of a suitable model. A model which integrates the combined effects of dose and exposure duration on response is derived from the biological mechanisms of aquatic toxicity, and a statistically efficient approach for estimating acute toxicity by fitting the proposed model is developed in this paper. The proposed procedure has been implemented in software, and a typical data set is used to illustrate the theory and procedure. The new statistical technique is also tested against a database covering a variety of chemicals and fish species.
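
The paper derives its model from the biological mechanisms of aquatic toxicity; as a generic illustration only, the sketch below fits a simple probit in log concentration and log exposure time by maximum likelihood, one conventional way to combine dose and duration in a single response surface. The grouped mortality data and the functional form are hypothetical and are not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical grouped mortality data: concentration (mg/L), exposure time (h),
# number exposed, number dead.
conc   = np.array([1.0, 2.0, 4.0, 8.0, 1.0, 2.0, 4.0, 8.0])
time   = np.array([24., 24., 24., 24., 96., 96., 96., 96.])
n_exp  = np.full(8, 50)
n_dead = np.array([2, 6, 18, 35, 5, 15, 33, 47])

def neg_loglik(beta):
    # P(death) = Phi(b0 + b1*log(conc) + b2*log(time))
    p = norm.cdf(beta[0] + beta[1] * np.log(conc) + beta[2] * np.log(time))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(n_dead * np.log(p) + (n_exp - n_dead) * np.log(1 - p))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
b0, b1, b2 = fit.x
# LC50 at 96 h: the concentration at which the fitted linear index equals zero.
lc50_96h = np.exp(-(b0 + b2 * np.log(96.0)) / b1)
print(fit.x, lc50_96h)
```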

12.
This paper is concerned with tests and confidence intervals for parameters that are not necessarily point identified and are defined by moment inequalities. In the literature, different test statistics, critical‐value methods, and implementation methods (i.e., the asymptotic distribution versus the bootstrap) have been proposed. In this paper, we compare these methods. We provide a recommended test statistic, moment selection critical value, and implementation method. We provide data‐dependent procedures for choosing the key moment selection tuning parameter κ and a size‐correction factor η.

13.
Plural form tends to be the most popular organization form in retail and service networks compared to purely franchised or purely company-owned systems. In the first part, this paper traces the evolution of researchers' thinking from the view that franchising and ownership are substitutable organizational forms to theories that analyze the joint use of franchise and company arrangements, and it describes the main attempts to explain theoretically the superiority of plural forms. In the second part, the paper examines the hypothesis that there is a relationship between the organizational form of a chain and its efficiency score. Applying a data envelopment analysis method to French hotel chains, it is shown that plural form networks are on average more efficient than strictly franchised and wholly owned chains. The Kruskal–Wallis test, a distribution-free rank-order statistic, is used to verify this relationship statistically. The result does not permit rejection of the null hypothesis that no organizational form is more efficient than another. Hence, this paper opens prospects for research aimed at testing the organizational form effect on different samples and with other methods.
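
For reference, the Kruskal–Wallis comparison described above can be run in a few lines; the efficiency scores below are hypothetical, not the paper's French hotel data.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical DEA efficiency scores (in [0, 1]) for three organizational forms.
plural    = np.array([0.92, 0.88, 0.95, 0.81, 0.90, 0.87])
franchise = np.array([0.78, 0.85, 0.80, 0.74, 0.83])
owned     = np.array([0.76, 0.82, 0.79, 0.71, 0.84, 0.80])

# Distribution-free rank test of the null that the three samples come from
# the same distribution of efficiency scores.
H, p_value = kruskal(plural, franchise, owned)
print(f"H = {H:.2f}, p-value = {p_value:.3f}")
```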

14.
The choice of stochastic process used to describe the price dynamics of the underlying asset greatly affects derivative pricing and risk management. In the literature, the stochastic processes adopted for the same asset are often inconsistent or even contradictory. Taking the GBM and OU processes as examples, this paper proposes a statistical inference method for selecting, from several candidate models, the stochastic process that better describes the price dynamics of the underlying asset. The method applies the principle of ex-post (backtesting-style) evaluation: the data are split into an estimation window and a testing window. The estimation window is used to estimate the parameters of the stochastic process; then, under the assumption that the model parameters remain unchanged, the out-of-sample distribution of the asset price at each time point in the testing window is derived under the null hypothesis, and the null is accepted or rejected according to the frequency with which the actual data fall in the acceptance or rejection region. Using commodities, exchange rates, interest rates, and stocks as underlying assets, an empirical analysis of stochastic process selection is carried out. The empirical results show that some commonly used stochastic process models are not necessarily the optimal models.
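
A minimal sketch of the estimation-window/testing-window idea under a GBM null follows (illustrative only; the paper's exact acceptance regions and the OU alternative are not reproduced here, and the binomial check is rough because intervals sharing the same anchor are dependent).

```python
import numpy as np
from scipy.stats import norm, binom

def gbm_backtest(prices, n_est, alpha=0.05):
    # Estimation window: fit per-period drift and volatility of log returns.
    log_ret = np.diff(np.log(prices[:n_est]))
    mu, sigma = log_ret.mean(), log_ret.std(ddof=1)
    p0 = prices[n_est - 1]
    test = prices[n_est:]
    exceed = 0
    for h, p in enumerate(test, start=1):
        # Under the GBM null, log(P_{t0+h} / P_{t0}) ~ N(mu*h, sigma^2*h).
        lo = p0 * np.exp(norm.ppf(alpha / 2, loc=mu * h, scale=sigma * np.sqrt(h)))
        hi = p0 * np.exp(norm.ppf(1 - alpha / 2, loc=mu * h, scale=sigma * np.sqrt(h)))
        exceed += (p < lo) or (p > hi)
    # Under the null the exceedance count is roughly Binomial(len(test), alpha).
    p_value = binom.sf(exceed - 1, len(test), alpha)
    return exceed, len(test), p_value

# Hypothetical usage: daily prices, first 400 observations for estimation, the rest for testing.
# prices = np.loadtxt("prices.csv"); print(gbm_backtest(prices, 400))
```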

15.
We consider tests of a simple null hypothesis on a subset of the coefficients of the exogenous and endogenous regressors in a single‐equation linear instrumental variables regression model with potentially weak identification. Existing methods of subset inference (i) rely on the assumption that the parameters not under test are strongly identified, or (ii) are based on projection‐type arguments. We show that, under homoskedasticity, the subset Anderson and Rubin (1949) test that replaces unknown parameters by limited information maximum likelihood estimates has correct asymptotic size without imposing additional identification assumptions, but that the corresponding subset Lagrange multiplier test is size distorted asymptotically.
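
For orientation, the sketch below computes the standard full-vector Anderson–Rubin statistic under homoskedasticity; the subset test discussed above differs in that the coefficients not under test are replaced by limited information maximum likelihood estimates rather than fixed. The data shapes and names are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def anderson_rubin(y, X, Z, beta0):
    # F-form of the Anderson-Rubin statistic for H0: beta = beta0 in y = X @ beta + u,
    # with instrument matrix Z and homoskedastic errors; k*AR has a chi-square(k) limit.
    n, k = Z.shape
    e = y - X @ beta0
    Pz_e = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)
    ar = (n - k) / k * (e @ Pz_e) / (e @ e - e @ Pz_e)
    return ar, chi2.sf(k * ar, df=k)
```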

16.
A large‐sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by an identifiable finite‐dimensional reduced‐form parameter vector. It is used to analyze the differences between Bayesian credible sets and frequentist confidence sets. We define a plug‐in estimator of the identified set and show that asymptotically Bayesian highest‐posterior‐density sets exclude parts of the estimated identified set, whereas it is well known that frequentist confidence sets extend beyond the boundaries of the estimated identified set. We recommend reporting estimates of the identified set and information about the conditional prior along with Bayesian credible sets. A numerical illustration for a two‐player entry game is provided.

17.
Single equation instrumental variable models for discrete outcomes are shown to be set identifying, not point identifying, for the structural functions that deliver the values of the discrete outcome. Bounds on identified sets are derived for a general nonparametric model and sharp set identification is demonstrated in the binary outcome case. Point identification is typically not achieved by imposing parametric restrictions. The extent of an identified set varies with the strength and support of instruments, and typically shrinks as the support of a discrete outcome grows. The paper extends the analysis of structural quantile functions with endogenous arguments to cases in which there are discrete outcomes.

18.
This paper considers the problem of testing a finite number of moment inequalities. We propose a two‐step approach. In the first step, a confidence region for the moments is constructed. In the second step, this set is used to provide information about which moments are “negative.” A Bonferroni‐type correction is used to account for the fact that, with some probability, the moments may not lie in the confidence region. It is shown that the test controls size uniformly over a large class of distributions for the observed data. An important feature of the proposal is that it remains computationally feasible, even when the number of moments is large. The finite‐sample properties of the procedure are examined via a simulation study, which demonstrates, among other things, that the proposal remains competitive with existing procedures while being computationally more attractive.
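
Below is a deliberately simplified sketch of a two-step, Bonferroni-corrected max-type test of a finite number of moment inequalities; the first-step selection rule and the second-step critical value are stylized stand-ins for the general idea, not the paper's construction.

```python
import numpy as np
from scipy.stats import norm

def two_step_max_test(M, alpha=0.05, beta=0.005):
    # Test H0: E[m_j] <= 0 for every column j of the n-by-p data matrix M.
    n, p = M.shape
    t = np.sqrt(n) * M.mean(axis=0) / M.std(axis=0, ddof=1)   # studentized sample moments
    # Step 1: at confidence level 1 - beta, flag moments that are clearly slack (far below zero).
    c1 = norm.ppf(1 - beta / p)
    keep = t > -c1
    k = max(int(keep.sum()), 1)
    # Step 2: max-t statistic over the retained moments, Bonferroni critical value at level alpha - beta,
    # charging beta for the chance that step 1 wrongly dropped a binding moment.
    c2 = norm.ppf(1 - (alpha - beta) / k)
    stat = t[keep].max() if keep.any() else t.max()
    return stat, stat > c2
```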

19.
Interspecies Extrapolation: A Reexamination of Acute Toxicity Data
We reanalyze the acute toxicity data on cancer chemotherapeutic agents compiled by Freireich et al.(1) and Schein et al.(2) to derive coefficients of the allometric equation for scaling toxic doses across species (toxic dose = a·[body weight]^b). In doing so, we extend the analysis of Travis and White (Risk Analysis, 1988, 8, 119-125) by addressing uncertainties inherent in the analysis and by including the hamster data, previously not used. Through Monte Carlo sampling, we specifically account for measurement errors when deriving confidence intervals and testing hypotheses. Two hypotheses are considered: first, that the allometric scaling power (b) varies for chemicals of the type studied; second, that the same scaling power, or "scaling law," holds for all chemicals in the data set. Following the first hypothesis, in 95% of the cases the allometric power of body weight falls in the range from 0.42 to 0.97, with a population mean of 0.74. Assuming the second hypothesis to be true (that the same scaling law is followed for all chemicals), the maximum likelihood estimate of the scaling power is 0.74; confidence bounds on the mean depend on the size of measurement error assumed. Under a "best case" analysis, 95% confidence bounds on the mean are 0.71 and 0.77, similar to the results reported by Travis and White. For alternative assumptions regarding measurement error, the confidence intervals are larger and include 0.67, but not 1.00. Although a scaling power of about 0.75 provides the best fit to the data as a whole, a scaling power of 0.67, corresponding to scaling per unit surface area, is not rejected when the nonhomogeneity of variances is taken into account. Hence, both surface area and 0.75 power scaling are consistent with the Freireich et al. and Schein et al. data sets. To illustrate the potential impact of overestimating the scaling power, we compare reported human MTDs to values extrapolated from mouse LD10s.
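
As a toy illustration of the allometric log-log fit and a crude Monte Carlo treatment of measurement error (with made-up doses and body weights, not the Freireich/Schein data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toxic doses (mg) for one chemical across species of different body weights (kg).
weight = np.array([0.02, 0.025, 0.1, 0.3, 2.5, 10.0, 70.0])
dose   = 12.0 * weight ** 0.74 * np.exp(rng.normal(0, 0.15, weight.size))

# Fit log(dose) = log(a) + b*log(weight): the slope b is the allometric scaling power.
b, log_a = np.polyfit(np.log(weight), np.log(dose), 1)
print(f"estimated scaling power b = {b:.2f}, a = {np.exp(log_a):.2f}")

# Crude Monte Carlo for the effect of measurement error in the recorded doses
# (illustrative; the paper's treatment of uncertainty is more elaborate).
draws = []
for _ in range(2000):
    noisy = dose * np.exp(rng.normal(0, 0.10, dose.size))   # assumed 10% log-scale error
    draws.append(np.polyfit(np.log(weight), np.log(noisy), 1)[0])
print(np.percentile(draws, [2.5, 97.5]))                     # rough interval for b
```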

20.
Nonseparable panel models are important in a variety of economic settings, including discrete choice. This paper gives identification and estimation results for nonseparable models under time‐homogeneity conditions that are like “time is randomly assigned” or “time is an instrument.” Partial‐identification results for average and quantile effects are given for discrete regressors, under static or dynamic conditions, in fully nonparametric and in semiparametric models, with time effects. It is shown that the usual, linear, fixed‐effects estimator is not a consistent estimator of the identified average effect, and a consistent estimator is given. A simple estimator of identified quantile treatment effects is given, providing a solution to the important problem of estimating quantile treatment effects from panel data. Bounds for overall effects in static and dynamic models are given. The dynamic bounds provide a partial‐identification solution to the important problem of estimating the effect of state dependence in the presence of unobserved heterogeneity. The impact of T, the number of time periods, is shown by deriving shrinkage rates for the identified set as T grows. We also consider semiparametric, discrete‐choice models and find that semiparametric panel bounds can be much tighter than nonparametric bounds. Computationally convenient methods for semiparametric models are presented. We propose a novel inference method that applies in panel data and other settings and show that it produces uniformly valid confidence regions in large samples. We give empirical illustrations.
