Similar Literature
20 similar documents retrieved.
1.
This paper considers regression models for cross-section data that exhibit cross-section dependence due to common shocks, such as macroeconomic shocks. The paper analyzes the properties of least squares (LS) estimators in this context. The results of the paper allow for any form of cross-section dependence and heterogeneity across population units. The probability limits of the LS estimators are determined, and necessary and sufficient conditions are given for consistency. The asymptotic distributions of the estimators are found to be mixed normal after recentering and scaling. The t, Wald, and F statistics are found to have asymptotic standard normal, χ², and scaled χ² distributions, respectively, under the null hypothesis when the conditions required for consistency of the parameter under test hold. However, the absolute values of t, Wald, and F statistics are found to diverge to infinity under the null hypothesis when these conditions fail. Confidence intervals exhibit similarly dichotomous behavior. Hence, common shocks are found to be innocuous in some circumstances, but quite problematic in others. Models with factor structures for errors and regressors are considered. Using the general results, conditions are determined under which consistency of the LS estimators holds and fails in models with factor structures. The results are extended to cover heterogeneous and functional factor structures in which common factors have different impacts on different population units.
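As a rough illustration of the dichotomy described in this abstract, the sketch below (Python; the one-factor design, loadings, and sample sizes are my own assumptions, not the paper's) simulates a cross-section in which a single common shock enters both the regressor and the error. The nominal 5% t-test then rejects a true null with frequency approaching one as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_tstat(n, rho=1.0):
    """OLS t-statistic for H0: beta = 0 (true here) in y_i = beta*x_i + u_i,
    where one common shock f enters both the regressor and the error."""
    f = rng.standard_normal()              # common shock, shared by all units
    x = rho * f + rng.standard_normal(n)   # regressor loads on the factor
    u = f + rng.standard_normal(n)         # error loads on the same factor
    beta_hat = x @ u / (x @ x)             # LS estimate of beta = 0
    resid = u - beta_hat * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
    return beta_hat / se

# rejection frequency of the nominal 5% test climbs toward 1 with n,
# the divergence described in the abstract
for n in (100, 1_000, 10_000):
    print(n, np.mean([abs(ls_tstat(n)) > 1.96 for _ in range(200)]))
```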

2.
In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramér-von Mises-type or Kolmogorov-Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and typically have power against n^(−1/2)-local alternatives to some, but not all, sequences of distributions in the null hypothesis. Monte Carlo simulations for five different models show that the methods perform well in finite samples.
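A minimal sketch of the transformation step: conditional moment inequalities become unconditional ones via nonnegative instrument functions of X, here indicator "boxes", with violations aggregated into a Cramér-von Mises-type statistic. The instrument class, weighting, and studentization below are simplified stand-ins for the paper's constructions.

```python
import numpy as np

def cvm_stat(x, m, cubes):
    """Cramér-von Mises-type statistic for H0: E[m(W, theta) | X] >= 0.
    Conditional inequalities are turned into unconditional ones by the
    instruments g(X) = 1{X in cube}; only violations (negative
    studentized moments) contribute."""
    n = len(x)
    total = 0.0
    for lo, hi in cubes:
        g = ((x >= lo) & (x < hi)).astype(float)   # box instrument
        mbar = np.sqrt(n) * np.mean(m * g)         # scaled sample moment
        sd = np.std(m * g) + 1e-12                 # crude studentization
        total += min(mbar / sd, 0.0) ** 2
    return total

# example instrument class: dyadic intervals on [0, 1]
cubes = [(a / 4, (a + 1) / 4) for a in range(4)] + [(0.0, 0.5), (0.5, 1.0)]
```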

3.
We demonstrate the asymptotic equivalence between commonly used test statistics for out-of-sample forecasting performance and conventional Wald statistics. This equivalence greatly simplifies the computational burden of calculating recursive out-of-sample test statistics and their critical values. For the case with nested models, we show that the limit distribution, which has previously been expressed through stochastic integrals, has a simple representation in terms of χ²-distributed random variables and we derive its density. We also generalize the limit theory to cover local alternatives and characterize the power properties of the test.
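The sketch below pairs a recursive out-of-sample MSE-difference statistic for a nested comparison (a benchmark that forecasts zero against a one-regressor model) with the full-sample Wald statistic whose limit theory it shares asymptotically. The nesting scheme and scaling are simplified assumptions, not the paper's exact construction.

```python
import numpy as np

def oos_and_wald(y, x, frac=0.5):
    """Recursive out-of-sample MSE comparison of a one-regressor model
    against a nested benchmark that forecasts zero, paired with the
    full-sample Wald statistic for the slope."""
    T = len(y)
    t0 = int(frac * T)
    sse_bench = sse_model = 0.0
    for t in range(t0, T - 1):
        b = x[:t] @ y[:t] / (x[:t] @ x[:t])        # recursive OLS slope
        sse_bench += y[t + 1] ** 2                 # benchmark forecast: 0
        sse_model += (y[t + 1] - b * x[t + 1]) ** 2
    oos_f = (sse_bench - sse_model) / (sse_model / (T - 1 - t0))
    b_full = x @ y / (x @ x)
    s2 = (y - b_full * x) @ (y - b_full * x) / (T - 1)
    wald = b_full ** 2 * (x @ x) / s2
    return oos_f, wald
```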

4.
This paper proposes a structural nonequilibrium model of initial responses to incomplete-information games based on "level-k" thinking, which describes behavior in many experiments with complete-information games. We derive the model's implications in first- and second-price auctions with general information structures, compare them to equilibrium and Eyster and Rabin's (2005) "cursed equilibrium," and evaluate the model's potential to explain nonequilibrium bidding in auction experiments. The level-k model generalizes many insights from equilibrium auction theory. It allows a unified explanation of the winner's curse in common-value auctions and overbidding in those independent-private-value auctions without the uniform value distributions used in most experiments.
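A small sketch of the level-k logic in a first-price independent-private-value auction with values on [0, 1]: a level-1 bidder best responds to "random L0" rivals who bid uniformly. The grid, the uniform L0 specification, and the payoff form are illustrative assumptions, not the paper's general information structures.

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 1001)

def l1_bid(v, n_opponents=1):
    """Level-1 bid in a first-price IPV auction with values on [0, 1]:
    best response to 'random L0' rivals who bid uniformly on [0, 1],
    so P(win | b) = b**n_opponents.  Grid maximization of (v - b)P(win)."""
    payoff = (v - GRID) * GRID ** n_opponents
    return GRID[np.argmax(payoff)]

# With one uniform-random rival the L1 bid is v/2, which happens to match
# the risk-neutral equilibrium bid in this uniform case; the two diverge
# in general.
print(l1_bid(0.8))   # ~0.4
```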

5.
Distributions of pathogen counts in treated water over time are highly skewed, power-law-like, and discrete. Over long periods of record, a long tail is observed, which can strongly determine the long-term mean pathogen count and associated health effects. Such distributions have been modeled with the Poisson lognormal (PLN) computed (not closed-form) distribution, and a new discrete growth distribution (DGD), also computed, recently proposed and demonstrated for microbial counts in water (Risk Analysis 29, 841–856). In this article, an error in the original theoretical development of the DGD is pointed out, and the approach is shown to support the closed-form discrete Weibull (DW). Furthermore, an information-theoretic derivation of the DGD is presented, explaining the fit shown for it to the original nine empirical and three simulated (n = 1,000) long-term waterborne microbial count data sets. Both developments result from a theory of multiplicative growth of outcome size from correlated, entropy-forced cause magnitudes. The predicted DW and DGD are first borne out in simulations of continuous and discrete correlated growth processes, respectively. Then the DW and DGD are each demonstrated to fit 10 of the original 12 data sets, passing the chi-square goodness-of-fit test (α = 0.05, overall p = 0.1184). The PLN was not demonstrated, fitting only 4 of 12 data sets (p = 1.6 × 10⁻⁸), explained by cause magnitude correlation. Results bear out predictions of monotonically decreasing distributions, and suggest use of the DW for inhomogeneous counts correlated in time or space. A formula for computing the DW mean is presented.
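For concreteness, here is one way to fit the closed-form discrete Weibull to count data by maximum likelihood; the pmf P(X = k) = q^(k^β) − q^((k+1)^β) is the standard form, while the optimizer settings and starting values are arbitrary choices, not those used in the article.

```python
import numpy as np
from scipy.optimize import minimize

def dw_pmf(k, q, beta):
    """Discrete Weibull pmf: P(X = k) = q**(k**beta) - q**((k+1)**beta)."""
    k = np.asarray(k, dtype=float)
    return q ** (k ** beta) - q ** ((k + 1) ** beta)

def fit_dw(counts):
    """Maximum-likelihood fit of (q, beta) to observed counts."""
    counts = np.asarray(counts, dtype=float)

    def nll(theta):
        q, beta = theta
        p = dw_pmf(counts, q, beta)
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    res = minimize(nll, x0=[0.5, 0.7],
                   bounds=[(1e-6, 1 - 1e-6), (1e-3, 10.0)])
    return res.x   # (q_hat, beta_hat)
```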

6.
This paper analyzes the complexity of the contraction fixed point problem: compute an ε-approximation to the fixed point V* = Γ(V*) of a contraction mapping Γ that maps a Banach space B_d of continuous functions of d variables into itself. We focus on quasi-linear contractions where Γ is a nonlinear functional of a finite number of conditional expectation operators. This class includes contractive Fredholm integral equations that arise in asset pricing applications and the contractive Bellman equation from dynamic programming. In the absence of further restrictions on the domain of Γ, the quasi-linear fixed point problem is subject to the curse of dimensionality, i.e., in the worst case the minimal number of function evaluations and arithmetic operations required to compute an ε-approximation to a fixed point V* ∈ B_d increases exponentially in d. We show that the curse of dimensionality disappears if the domain of Γ has additional special structure. We identify a particular type of special structure for which the problem is strongly tractable even in the worst case, i.e., the number of function evaluations and arithmetic operations needed to compute an ε-approximation of V* is bounded by Cε^(−p) where C and p are constants independent of d. We present examples of economic problems that have this type of special structure including a class of rational expectations asset pricing problems for which the optimal exponent p = 1 is nearly achieved.
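The fixed point in question can be computed by successive approximation, V_{k+1} = Γ(V_k), with the usual contraction-mapping stopping bound. The sketch below applies this to a two-state Bellman-type operator that is linear in the conditional expectation; the discount factor, rewards, and transition matrix are invented for illustration.

```python
import numpy as np

def fixed_point(Gamma, v0, eps=1e-8, max_iter=100_000):
    """Successive approximation V_{k+1} = Gamma(V_k); stops when the
    sup-norm change falls below eps."""
    v = v0
    for _ in range(max_iter):
        v_new = Gamma(v)
        if np.max(np.abs(v_new - v)) < eps:
            return v_new
        v = v_new
    raise RuntimeError("no convergence within max_iter")

# Two-state Bellman-type operator: linear in the conditional expectation.
beta, r = 0.95, np.array([1.0, 2.0])          # discount factor, rewards
P = np.array([[0.9, 0.1], [0.2, 0.8]])        # transition probabilities
print(fixed_point(lambda v: r + beta * P @ v, np.zeros(2)))
```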

7.
Average rates of total dermal uptake (Kup) from short-term (e.g., bathing) contact with dilute aqueous organic chemicals (DAOCs) are typically estimated from steady-state in vitro diffusion-cell measures of chemical permeability (Kp) through skin into receptor solution. Widely used ("PCR-vitro") methods estimate Kup by applying diffusion theory to increase Kp predictions made by a physico-chemical regression (PCR) model that was fit to a large set of Kp measures. Here, Kup predictions for 18 DAOCs made by three PCR-vitro models (EPA, NIOSH, and MH) were compared to previous in vivo measures obtained by methods unlikely to underestimate Kup. A new PCR model fit to all 18 measures is accurate to within approximately threefold (r = 0.91, p < 10⁻⁵), but the PCR-vitro predictions (r > 0.63) all tend to underestimate the Kup measures by mean factors (UF, and p value for testing UF = 1) of 10 (EPA, p < 10⁻⁶), 11 (NIOSH, p < 10⁻⁸), and 6.2 (MH, p = 0.018). For all three PCR-vitro models, log(UF) correlates negatively with molecular weight (r² = 0.31 to 0.84, p = 0.017 to < 10⁻⁶) but not with log(vapor pressure) as an additional predictor (p > 0.05), so vapor pressure appears not to explain the significant in vivo/PCR-vitro discrepancy. Until this discrepancy is explained, careful in vivo measures of Kup should be obtained for more chemicals, the expanded in vivo database should be compared to in vitro-based predictions, and in vivo data should be considered in assessing aqueous dermal exposure and its uncertainty.
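A generic sketch of the PCR step: fitting a Potts-Guy-style regression log10(Kp) = a + b·log10(Kow) + c·MW to permeability data by least squares. The functional form is the one typical of such models; the coefficients come from whatever data are supplied rather than being the EPA, NIOSH, or MH parameterizations.

```python
import numpy as np

def fit_pcr(log_kow, mw, log_kp):
    """Least-squares fit of a Potts-Guy-style PCR:
    log10(Kp) = a + b*log10(Kow) + c*MW.  Returns (a, b, c)."""
    X = np.column_stack([np.ones_like(mw), log_kow, mw])
    coef, *_ = np.linalg.lstsq(X, log_kp, rcond=None)
    return coef
```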

8.
We consider the bootstrap unit root tests based on finite order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^(−1/4)) and O_p(n^(−1/2)), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^(−1/2)) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
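A minimal sketch of the residual bootstrap for a Dickey–Fuller test in the simplest AR(1) case without deterministic terms: estimate, resample the centered residuals, rebuild the series under the unit-root null, and read off bootstrap critical values. Lag selection and studentized variants are omitted.

```python
import numpy as np

def bootstrap_df_cv(y, n_boot=999, seed=0):
    """Residual bootstrap of the Dickey-Fuller normalized-coefficient
    test for an AR(1) without deterministic terms; returns bootstrap
    critical values for the left-tailed test."""
    rng = np.random.default_rng(seed)
    dy, y_lag = np.diff(y), y[:-1]
    rho = y_lag @ dy / (y_lag @ y_lag)     # DF regression coefficient
    e = dy - rho * y_lag
    e -= e.mean()                          # center the residuals
    n = len(y)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        eb = rng.choice(e, size=n - 1, replace=True)
        yb = np.concatenate(([y[0]], y[0] + np.cumsum(eb)))  # unit-root null
        dyb, ylb = np.diff(yb), yb[:-1]
        stats[b] = (n - 1) * (ylb @ dyb) / (ylb @ ylb)
    return np.quantile(stats, [0.01, 0.05, 0.10])
```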

9.
The Lp-min increment fit and Lp-min increment ultrametric fit problems are two popular optimization problems arising from distance methods for reconstructing phylogenetic trees. This paper proves:
1. An O(n²) algorithm approximates the L∞-min increment fit within ratio 3.
2. There is a ratio-O(n^(1/p)) polynomial-time approximation for the Lp-min increment ultrametric fit.
3. The neighbor-joining algorithm can correctly reconstruct a phylogenetic tree T when increment errors are small enough under the L∞-norm.
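Point 3 refers to the classic neighbor-joining algorithm, sketched below without branch-length bookkeeping; the nested-tuple tree representation is an arbitrary choice of output.

```python
import numpy as np

def neighbor_joining(D, labels):
    """Classic neighbor-joining from a distance matrix; returns the tree
    topology as nested tuples."""
    D = np.array(D, dtype=float)
    nodes = list(labels)
    while len(nodes) > 2:
        n = len(nodes)
        r = D.sum(axis=1)
        Q = (n - 2) * D - r[:, None] - r[None, :]   # NJ selection criterion
        np.fill_diagonal(Q, np.inf)
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        d_new = 0.5 * (D[i] + D[j] - D[i, j])       # distances to new node
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.vstack([np.hstack([D[np.ix_(keep, keep)], d_new[keep, None]]),
                       np.hstack([d_new[keep], [0.0]])])
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
    return tuple(nodes)

print(neighbor_joining([[0, 5, 9, 9], [5, 0, 10, 10],
                        [9, 10, 0, 8], [9, 10, 8, 0]],
                       ["a", "b", "c", "d"]))
```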

10.
Invasive aspergillosis (IA) is a major cause of mortality in immunocompromised hosts, most often consecutive to the inhalation of spores of Aspergillus. However, the relationship between Aspergillus concentration in the air and probability of IA is not quantitatively known. In this study, this relationship was examined in a murine model of IA. Immunosuppressed Balb/c mice were exposed for 60 minutes at day 0 to an aerosol of A. fumigatus spores (Af293 strain). At day 10, IA was assessed in mice by quantitative culture of the lungs and galactomannan dosage. Fifteen separate nebulizations with varying spore concentrations were performed. Rates of IA ranged from 0% to 100% according to spore concentrations. The dose-response relationship between probability of infection and spore exposure was approximated using the exponential model and the more flexible beta-Poisson model. Prior distributions of the parameters of the models were proposed then updated with data in a Bayesian framework. Both models yielded close median dose-responses of the posterior distributions for the main parameter of the model, but with different dispersions, either when the exposure dose was the concentration in the nebulized suspension or was the estimated quantity of spores inhaled by a mouse during the experiment. The median quantity of inhaled spores that infected 50% of mice was estimated at 1.8 × 10⁴ and 3.2 × 10⁴ viable spores in the exponential and beta-Poisson models, respectively. This study provides dose-response parameters for quantitative assessment of the relationship between airborne exposure to the reference A. fumigatus strain and probability of IA in immunocompromised hosts.
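Both dose-response forms are short closed-form expressions. The sketch below implements the exponential model and the approximate beta-Poisson model in its common N50 parameterization, and backs out the exponential r implied by the reported median infectious dose; these are the standard QMRA forms, not code taken from the article.

```python
import numpy as np

def p_inf_exponential(dose, r):
    """Exponential dose-response: each spore independently infects with
    probability r, so P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

def p_inf_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson dose-response in its common N50 form."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# exponential r implied by the reported median infectious dose of ~1.8e4
r = np.log(2.0) / 1.8e4
print(p_inf_exponential(3.2e4, r))
```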

11.
Physiological daily inhalation rates reported in our previous study for normal-weight subjects 2.6–96 years old were compared to inhalation data determined in free-living overweight/obese individuals (n = 661) aged 5–96 years. Inhalation rates were also calculated in normal-weight (n = 408), overweight (n = 225), and obese classes 1, 2, and 3 adults (n = 134) aged 20–96 years. These inhalation values were based on published indirect calorimetry measurements (n = 1,069) and disappearance rates of oral doses of water isotopes (i.e., ²H₂O and H₂¹⁸O) monitored by gas isotope ratio mass spectrometry usually in urine samples for an aggregate period of over 16,000 days. Ventilatory equivalents for overweight/obese subjects at rest and during their aggregate daytime activities (28.99 ± 6.03 L to 34.82 ± 8.22 L of air inhaled/L of oxygen consumed; mean ± SD) were determined and used for calculations of inhalation rates. The interindividual variability factor calculated as the ratio of the highest 99th percentile to the lowest 1st percentile of daily inhalation rates is higher for absolute data expressed in m³/day (26.7) compared to those of data in m³/kg-day (12.2) and m³/m²-day (5.9). Higher absolute rates generally found in overweight/obese individuals compared to their normal-weight counterparts suggest higher intakes of air pollutants (in μg/day) for the former compared to the latter during identical exposure concentrations and conditions. Highest absolute mean (24.57 m³/day) and 99th percentile (55.55 m³/day) values were found in obese class 2 adults. They inhale on average 8.21 m³ more air per day than normal-weight adults.
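The interindividual variability factor quoted above is simply a percentile ratio; a sketch, with synthetic lognormal rates standing in for the real data:

```python
import numpy as np

def variability_factor(rates):
    """Interindividual variability factor as defined in the abstract:
    ratio of the 99th to the 1st percentile of daily inhalation rates."""
    p1, p99 = np.percentile(rates, [1, 99])
    return p99 / p1

# synthetic lognormal rates in m^3/day, standing in for the real data
rng = np.random.default_rng(0)
print(variability_factor(rng.lognormal(np.log(16.0), 0.4, size=5000)))
```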

12.
This paper discusses a consistent bootstrap implementation of the likelihood ratio (LR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying vector autoregressive (VAR) model that obtain under the reduced rank null hypothesis. A full asymptotic theory is provided that shows that, unlike the bootstrap procedure in Swensen (2006) where a combination of unrestricted and restricted estimates from the VAR model is used, the resulting bootstrap data are I(1) and satisfy the null co-integration rank, regardless of the true rank. This ensures that the bootstrap LR test is asymptotically correctly sized and that the probability that the bootstrap sequential procedure selects a rank smaller than the true rank converges to zero. Monte Carlo evidence suggests that our bootstrap procedures work very well in practice.
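A sketch of the restricted-estimate bootstrap loop, using statsmodels' Johansen and VECM routines (their exact options are assumptions about that library; lag length and deterministic terms are deliberately minimal, and rank ≥ 1 is assumed):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

def bootstrap_rank_pvalue(y, rank, n_boot=199, seed=0):
    """Bootstrap p-value for the LR trace test of H0: co-integration
    rank <= rank.  Each bootstrap sample is rebuilt recursively from the
    rank-restricted VECM estimates, so the bootstrap data satisfy the
    null regardless of the true rank."""
    rng = np.random.default_rng(seed)
    T, p = y.shape
    lr_obs = coint_johansen(y, -1, 1).lr1[rank]       # observed trace stat
    m = VECM(y, k_ar_diff=1, coint_rank=rank).fit()   # restricted estimates
    resid = m.resid - m.resid.mean(axis=0)
    exceed = 0
    for _ in range(n_boot):
        yb = np.empty((T, p))
        yb[0], yb[1] = y[0], y[1]
        dy_prev = yb[1] - yb[0]
        for t in range(2, T):
            e = resid[rng.integers(len(resid))]       # iid residual resample
            dy = m.alpha @ (m.beta.T @ yb[t - 1]) + m.gamma @ dy_prev + e
            yb[t] = yb[t - 1] + dy
            dy_prev = dy
        exceed += coint_johansen(yb, -1, 1).lr1[rank] >= lr_obs
    return exceed / n_boot
```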

13.
The independence number of a graph and its chromatic number are known to be hard to approximate. Due to recent complexity results, unless coRP = NP, there is no polynomial time algorithm which approximates any of these quantities within a factor of n^(1−ε) for graphs on n vertices. We show that the situation is significantly better for the average case. For every edge probability p = p(n) in the range n^(−1/2+ε) ≤ p ≤ 3/4, we present an approximation algorithm for the independence number of graphs on n vertices, whose approximation ratio is O((np)^(1/2)/log n) and whose expected running time over the probability space G(n, p) is polynomial. An algorithm with similar features is described also for the chromatic number. A key ingredient in the analysis of both algorithms is a new large deviation inequality for eigenvalues of random matrices, obtained through an application of Talagrand's inequality.

14.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M = bT for some constant b ∈ (0, 1] and sample size T. It is shown that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug-in procedure works well in finite samples.
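For reference, the studentization in question uses a kernel long-run variance with truncation lag M = bT; below is a Bartlett-kernel sketch for the mean of a scalar series (the kernel choice and test setup are illustrative, not the paper's general regression framework).

```python
import numpy as np

def hac_lrv(u, b=0.1):
    """Bartlett-kernel long-run variance with truncation lag M = b*T."""
    T = len(u)
    M = max(1, int(b * T))
    u = u - u.mean()
    lrv = u @ u / T
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)                  # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / T
    return lrv

def fixed_b_tstat(y, b=0.1):
    """t-statistic for H0: mean = 0 with the fixed-b robust standard
    error; under fixed b its null limit is nonstandard, so fixed-b
    critical values rather than +/-1.96 should be used."""
    T = len(y)
    return np.sqrt(T) * y.mean() / np.sqrt(hac_lrv(y, b))
```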

15.
Fixed effects estimators of panel models can be severely biased because of the well-known incidental parameters problem. We show that this bias can be reduced by using a panel jackknife or an analytical bias correction motivated by large T. We give bias corrections for averages over the fixed effects, as well as model parameters. We find large bias reductions from using these approaches in examples. We consider asymptotics where T grows with n, as an approximation to the properties of the estimators in econometric applications. We show that if T grows at the same rate as n, the fixed effects estimator is asymptotically biased, so that asymptotic confidence intervals are incorrect, but that they are correct for the panel jackknife. We show T growing faster than n^(1/3) suffices for correctness of the analytic correction, a property we also conjecture for the jackknife.
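The panel jackknife itself is a one-line combination: recompute the fixed-effects estimator with each time period deleted and set θ̃ = T·θ̂ − (T−1)·mean over t of θ̂₍₋t₎. A generic sketch, agnostic about the estimator (the list-of-periods data layout is an assumed convention):

```python
import numpy as np

def panel_jackknife(estimate, panels_by_period):
    """Panel jackknife of a fixed-effects estimator: recompute it with
    each time period deleted and combine as
    T*theta_full - (T-1)*mean_t theta_(-t)."""
    T = len(panels_by_period)
    theta_full = estimate(panels_by_period)
    leave_one_out = [estimate(panels_by_period[:t] + panels_by_period[t + 1:])
                     for t in range(T)]
    return T * theta_full - (T - 1) * np.mean(leave_one_out, axis=0)
```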

16.
A new class of autocorrelation robust test statistics is introduced. The class of tests generalizes the Kiefer, Vogelsang, and Bunzel (2000) test in a manner analogous to Anderson and Darling's (1952) generalization of the Cramér–von Mises goodness of fit test. In a Gaussian location model, the error in rejection probability of the new tests is found to be O(T^(−1) log T), where T denotes the sample size.
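The KVB statistic that this class generalizes replaces a consistent long-run variance estimate with a normalizer built from partial sums; a sketch for the mean of a scalar series (the generalized weighting of the new class is not reproduced here):

```python
import numpy as np

def kvb_tstat(y):
    """KVB-type self-normalized t-statistic for H0: mean = 0: the usual
    consistent LRV estimate is replaced by a normalizer built from the
    partial sums of the demeaned data, so no bandwidth is chosen; the
    null limit is nonstandard and has its own critical values."""
    T = len(y)
    S = np.cumsum(y - y.mean())
    C = S @ S / T ** 2              # normalizer from partial sums
    return np.sqrt(T) * y.mean() / np.sqrt(C)
```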

17.
This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test does not require nonparametric estimation of g and is not subject to the ill-posed inverse problem of nonparametric instrumental variables estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is O(n^(−1/2)), where n is the sample size. In Monte Carlo simulations, the finite-sample power of the new test exceeds that of existing tests.

18.
This paper proposes a test for common conditionally heteroskedastic (CH) features in asset returns. Following Engle and Kozicki (1993), the common CH features property is expressed in terms of testable overidentifying moment restrictions. However, as we show, these moment conditions have a degenerate Jacobian matrix at the true parameter value and therefore the standard asymptotic results of Hansen (1982) do not apply. We show in this context that Hansen's (1982) J-test statistic is asymptotically distributed as the minimum of the limit of a certain random process with a markedly nonstandard distribution. If two assets are considered, this asymptotic distribution is a fifty–fifty mixture of χ²(H−1) and χ²(H), where H is the number of moment conditions, as opposed to a χ²(H−1). With more than two assets, this distribution lies between the χ²(H−p) and χ²(H) (p denotes the number of parameters). These results show that ignoring the lack of first-order identification of the moment condition model leads to oversized tests with a possibly increasing overrejection rate with the number of assets. A Monte Carlo study illustrates these findings.

19.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
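In the spirit of the three-step method, here is a sketch for the standard-error case: a pilot set of bootstrap draws estimates the excess kurtosis that governs the simulation error of the SE, and B is then chosen so the SE is within pdb percent of its B = ∞ value with probability about 1 − τ. The formula is my reconstruction of this logic, not the paper's exact expression.

```python
import numpy as np
from scipy.stats import norm

def choose_B_for_se(pilot_draws, pdb=10.0, tau=0.05):
    """Choose B so the bootstrap SE is within pdb percent of its ideal
    (B = infinity) value with probability ~ 1 - tau.  The simulation
    error of the SE has relative sd ~ sqrt((2 + gamma)/(4B)), where
    gamma is the excess kurtosis of the bootstrap distribution."""
    x = np.asarray(pilot_draws, dtype=float)
    s = x.std()
    gamma = np.mean((x - x.mean()) ** 4) / s ** 4 - 3.0   # excess kurtosis
    z = norm.ppf(1.0 - tau / 2.0)
    # floor gamma at 0 so B never falls below the normal-case requirement
    return int(np.ceil(1e4 * z ** 2 * (2.0 + max(gamma, 0.0))
                       / (4.0 * pdb ** 2)))
```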

20.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.
