Similar Documents
1.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a computationally very attractive alternative to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and to nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of overidentifying restrictions.
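The computational appeal is easiest to see in code. Below is a minimal sketch of the Newton-Raphson k-step bootstrap idea for a one-parameter logistic MLE: each bootstrap replication takes only k Newton steps from the full-sample estimate instead of iterating to convergence. The model and the choices k = 2 and B = 999 are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: logistic model with a single coefficient (illustrative only).
n = 500
x = rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-0.7 * x))).astype(float)

def score_and_hessian(theta, x, y):
    p = 1 / (1 + np.exp(-theta * x))
    score = np.sum((y - p) * x)          # d log-lik / d theta
    hess = -np.sum(p * (1 - p) * x**2)   # d^2 log-lik / d theta^2
    return score, hess

def newton_solve(x, y, theta0=0.0, tol=1e-10):
    """Full Newton-Raphson iteration to convergence (the expensive step
    the k-step bootstrap avoids repeating on every resample)."""
    theta = theta0
    for _ in range(100):
        s, h = score_and_hessian(theta, x, y)
        step = -s / h
        theta += step
        if abs(step) < tol:
            break
    return theta

theta_hat = newton_solve(x, y)

def k_step_bootstrap(x, y, theta_hat, k=2, B=999):
    """On each resample, take only k Newton-Raphson steps starting from
    the full-sample estimate instead of re-solving to convergence."""
    n = len(x)
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # nonparametric iid bootstrap
        xb, yb = x[idx], y[idx]
        theta = theta_hat
        for _ in range(k):                   # k NR steps only
            s, h = score_and_hessian(theta, xb, yb)
            theta -= s / h
        draws[b] = theta
    return draws

draws = k_step_bootstrap(x, y, theta_hat, k=2)
print(theta_hat, draws.std())  # k-step bootstrap standard error
```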

2.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long-memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long-memory signal and a white noise. We consider periodogram-based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d* ∈ (0, 1). We also show that the local Whittle estimator is asymptotically normal for d* ∈ (0, 3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short-memory component of the signal is sufficiently smooth, a convergence rate of n^(2/5−δ) for d* ∈ (0, 3/4) can be attained, where n is the sample size and δ > 0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d* = 0. This yields a test for long memory in volatility.
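As a concrete reference point, here is a minimal sketch of the plain local Whittle (Gaussian semiparametric) estimator that this line of work builds on; it omits the paper's optional signal-noise correlation term, and the bandwidth rule m = n^0.65 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Basic local Whittle estimate of the memory parameter d.

    x : series (here, log squared returns); m : number of Fourier
    frequencies used (a bandwidth the user must choose)."""
    n = len(x)
    # Periodogram at the first m Fourier frequencies lambda_j = 2*pi*j/n.
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)

    def objective(d):
        # Robinson-type local Whittle criterion R(d).
        g = np.mean(lam ** (2 * d) * I)
        return np.log(g) - 2 * d * np.mean(np.log(lam))

    res = minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded")
    return res.x

# Illustration on white noise (true d = 0):
rng = np.random.default_rng(1)
r = rng.normal(size=4096)             # stand-in for returns
y = np.log(r ** 2 + 1e-12)            # log squared returns (signal + noise)
print(local_whittle_d(y, m=int(len(y) ** 0.65)))
```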

3.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

4.
The effect of bioaerosol size was incorporated into predictive dose-response models for the effects of inhaled aerosols of Francisella tularensis (the causative agent of tularemia) on rhesus monkeys and guinea pigs, with bioaerosol diameters ranging between 1.0 and 24 μm. Aerosol-size-dependent models were formulated as modifications of the exponential and β-Poisson dose-response models, and model parameters were estimated using maximum likelihood methods and multiple data sets of quantal dose-response data for which the aerosol sizes of the inhaled doses were known. The F. tularensis dose-response data were best fit by an exponential dose-response model in which a power function of particle diameter replaces the rate parameter k that scales the applied dose. The fitted aerosol-size-dependence equation and models represent the observed dose-response results better than estimates derived from applying the model developed by the International Commission on Radiological Protection (ICRP, 1994), which relies on differential regional lung deposition for human particle exposure.
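To make the model structure concrete, the sketch below fits an exponential dose-response model in which a power function of aerosol diameter, k(D) = k1 * D^(-k2), replaces the fixed rate parameter. That functional form, the data, and all names are hypothetical illustrations; the paper's exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import minimize

def response_prob(dose, diam, k1, k2):
    """Exponential dose-response model in which a power function of the
    aerosol diameter replaces the fixed rate parameter k. The form
    k(D) = k1 * D**(-k2) is a hypothetical illustration."""
    return 1.0 - np.exp(-k1 * diam ** (-k2) * dose)

def neg_log_likelihood(params, dose, diam, n, y):
    """Binomial (quantal) likelihood: y responders out of n at each dose."""
    k1, k2 = params
    p = np.clip(response_prob(dose, diam, k1, k2), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

# Hypothetical quantal data: (dose, aerosol diameter in microns,
# animals exposed, animals responding).
dose = np.array([10, 50, 200, 10, 50, 200], dtype=float)
diam = np.array([1.0, 1.0, 1.0, 12.0, 12.0, 12.0])
n = np.array([10, 10, 10, 10, 10, 10])
y = np.array([2, 6, 9, 0, 2, 5])

fit = minimize(neg_log_likelihood, x0=[0.01, 1.0],
               args=(dose, diam, n, y), method="Nelder-Mead")
print(fit.x)  # maximum likelihood estimates of (k1, k2)
```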

5.
This paper is concerned with tests and confidence intervals for parameters that are not necessarily point identified and are defined by moment inequalities. In the literature, different test statistics, critical-value methods, and implementation methods (i.e., the asymptotic distribution versus the bootstrap) have been proposed. In this paper, we compare these methods. We provide a recommended test statistic, moment selection critical value, and implementation method. We provide data-dependent procedures for choosing the key moment selection tuning parameter κ and a size-correction factor η.

6.
In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramér–von Mises-type or Kolmogorov–Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and typically have power against n^(−1/2)-local alternatives to some, but not all, sequences of distributions in the null hypothesis. Monte Carlo simulations for five different models show that the methods perform well in finite samples.
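A minimal sketch of the transformation step and a Cramér–von Mises-type statistic for a single conditional moment inequality follows; the indicator-interval instruments, the weighting, and the example model are simplified assumptions, and the GMS bootstrap critical values are not reproduced.

```python
import numpy as np

def cvm_statistic(m_vals, x, cubes):
    """CvM-type statistic for one conditional moment inequality
    E[m(W) | X] >= 0, transformed into unconditional moments
    E[m(W) * 1{X in cube}] >= 0 via indicator instrument functions.
    'cubes' is a list of (lower, upper) interval instruments; this
    choice of instruments and weights is a simplified illustration."""
    n = len(x)
    stat = 0.0
    for lo, hi in cubes:
        g = ((x >= lo) & (x < hi)).astype(float)
        mbar = np.mean(m_vals * g)
        sig = np.std(m_vals * g) + 1e-12
        # Penalize only violations (negative standardized moments).
        stat += min(np.sqrt(n) * mbar / sig, 0.0) ** 2
    return stat / len(cubes)

# Hypothetical example: require E[Y - theta | X] >= 0.
rng = np.random.default_rng(6)
x = rng.uniform(size=400)
y = 1.0 + x + rng.normal(scale=0.2, size=400)
theta = 0.9                                   # candidate parameter value
cubes = [(k / 8, (k + 1) / 8) for k in range(8)]
print(cvm_statistic(y - theta, x, cubes))
# Critical values would come from a GMS bootstrap, beyond this sketch.
```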

7.
This paper introduces a nonparametric Granger-causality test for covariance stationary linear processes under, possibly, the presence of long-range dependence. We show that the test is consistent and has power against contiguous alternatives converging to the null at the parametric rate T^(−1/2). Since the test is based on estimates of the parameters of the representation of a VAR model as a, possibly, two-sided infinite distributed lag model, we first show that a modification of Hannan's (1963, 1967) estimator is root-T consistent and asymptotically normal for the coefficients of such a representation. When the data are long-range dependent, this method of estimation becomes more attractive than least squares, since the latter can be neither root-T consistent nor asymptotically normal, as is the case with short-range dependent data.

8.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and sieve QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

9.
We consider the bootstrap unit root tests based on finite order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^(−1/4)) and O_p(n^(−1/2)), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^(−1/2)) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
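For intuition, here is a minimal sketch of a residual-based bootstrap Dickey–Fuller test of the kind analyzed: innovations are resampled iid from the fitted AR model in differences, the unit root is imposed when rebuilding the series, and bootstrap critical values replace the asymptotic ones. The lag order, burn-in, and B are illustrative choices.

```python
import numpy as np

def adf_t_stat(y, p=1):
    """Dickey-Fuller t-statistic from the regression
    dy_t = rho*y_{t-1} + sum_{j=1..p} phi_j*dy_{t-j} + e_t (no trend)."""
    dy = np.diff(y)
    T = len(dy)
    rows = [[y[t]] + [dy[t - j] for j in range(1, p + 1)]
            for t in range(p, T)]
    X = np.array(rows)
    z = dy[p:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (len(z) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0]), beta[1:], resid

def bootstrap_adf_critical_value(y, p=1, B=999, alpha=0.05):
    """Residual bootstrap under the unit-root null: resample the fitted
    innovations iid, rebuild an integrated series, recompute the t-stat."""
    t_obs, phi, resid = adf_t_stat(y, p)
    rng = np.random.default_rng(0)
    resid = resid - resid.mean()
    stats = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=len(y) + 50, replace=True)
        dy = np.zeros(len(e))
        for t in range(p, len(e)):       # fitted AR(p) in differences
            dy[t] = phi @ dy[t - p:t][::-1] + e[t]
        yb = np.cumsum(dy)[50:]          # burn-in of 50, unit root imposed
        stats[b], *_ = adf_t_stat(yb, p)
    return t_obs, np.quantile(stats, alpha)

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=300))      # a random walk
t_obs, cv = bootstrap_adf_critical_value(y)
print(t_obs, cv)  # reject the unit root if t_obs < cv
```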

10.
This paper examines the problem of testing and confidence set construction for one-dimensional functions of the coefficients in autoregressive (AR(p)) models with potentially persistent time series. The primary example concerns inference on impulse responses. A new asymptotic framework is suggested and some new theoretical properties of known procedures are demonstrated. I show that the likelihood ratio (LR) and LR± statistics for a linear hypothesis in an AR(p) can be uniformly approximated by a weighted average of local-to-unity and normal distributions. The corresponding weights depend on the weight placed on the largest root in the null hypothesis. The suggested approximation is uniform over the set of all linear hypotheses. The same family of distributions approximates the LR and LR± statistics for tests about impulse responses, and the approximation is uniform over the horizon of the impulse response. I establish the size properties of tests about impulse responses proposed by Inoue and Kilian (2002) and Gospodinov (2004), and theoretically explain some of the empirical findings of Pesavento and Rossi (2007). An adaptation of the grid bootstrap for impulse response functions is suggested and its properties are examined.

11.
Independent sets, induced matchings and cliques are examples of regular induced subgraphs in a graph. In this paper, we prove that finding a maximum cardinality k-regular induced subgraph is an NP-hard problem for any fixed value of k. We propose a convex quadratic upper bound on the size of a k-regular induced subgraph and characterize those graphs for which this bound is attained. Finally, we extend the Hoffman bound on the size of a maximum 0-regular subgraph (the independence number) from k = 0 to larger values of k.
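Since the problem is NP-hard, only exhaustive search is exact in general; the sketch below brute-forces a maximum k-regular induced subgraph on a small graph, purely to illustrate the definition (networkx is assumed to be available).

```python
import itertools
import networkx as nx

def max_k_regular_induced_subgraph(G, k):
    """Exhaustive search for a maximum-cardinality k-regular induced
    subgraph. Exponential time, as expected for an NP-hard problem;
    only usable on small graphs, for illustration."""
    nodes = list(G.nodes)
    # Try larger candidate sets first so we can stop at the first hit.
    for size in range(len(nodes), k, -1):
        for S in itertools.combinations(nodes, size):
            H = G.subgraph(S)
            if all(d == k for _, d in H.degree):
                return set(S)
    return set()

G = nx.petersen_graph()
print(len(max_k_regular_induced_subgraph(G, 0)))  # independence number = 4
print(len(max_k_regular_induced_subgraph(G, 1)))  # vertices in a max induced matching
```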

12.
Mechanism design enables a social planner to obtain a desired outcome by leveraging the players' rationality and their beliefs. It is thus a fundamental, but yet unproven, intuition that the higher the level of rationality of the players, the better the set of obtainable outcomes. In this paper, we prove this fundamental intuition for players with possibilistic beliefs, a model long considered in epistemic game theory. Specifically,
• We define a sequence of monotonically increasing revenue benchmarks for single-good auctions, G0 ≤ G1 ≤ G2 ≤ ⋯, where each Gi is defined over the players' beliefs and G0 is the second-highest valuation (i.e., the revenue benchmark achieved by the second-price mechanism).
• We (1) construct a single, interim individually rational, auction mechanism that, without any clue about the rationality level of the players, guarantees revenue Gk if all players have rationality levels ≥ k+1, and (2) prove that no such mechanism can guarantee revenue even close to Gk when at least two players are at most level-k rational.

13.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
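The accuracy notion is easy to illustrate: for the sample mean, the ideal B = ∞ bootstrap standard error is available in closed form, so the percentage deviation of a finite-B estimate can be computed directly. The sketch below does only that; it does not reproduce the paper's three-step method.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=200)      # sample whose mean we want a SE for

def boot_se(x, B):
    """Bootstrap standard error of the sample mean from B resamples."""
    n = len(x)
    means = np.array([x[rng.integers(0, n, n)].mean() for _ in range(B)])
    return means.std(ddof=1)

# For the mean with iid resampling, the ideal (B = infinity) bootstrap SE
# is known in closed form: the empirical standard deviation over sqrt(n).
ideal = x.std(ddof=0) / np.sqrt(len(x))

for B in (50, 200, 1000, 5000):
    se = boot_se(x, B)
    pdev = 100 * abs(se - ideal) / ideal
    print(f"B={B:5d}  se={se:.4f}  % deviation from ideal: {pdev:.1f}")
```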

14.
We extend Ellsberg's two-urn paradox and propose three symmetric forms of partial ambiguity by limiting the possible compositions in a deck of 100 red and black cards in three ways. Interval ambiguity involves a symmetric range of 50 − n to 50 + n red cards. Complementarily, disjoint ambiguity arises from two nonintersecting intervals of 0 to n and 100 − n to 100 red cards. Two-point ambiguity involves n or 100 − n red cards. We investigate experimentally attitudes towards partial ambiguity and the corresponding compound lotteries in which the possible compositions are drawn with equal objective probabilities. This yields three key findings: distinct attitudes towards the three forms of partial ambiguity, significant association across attitudes towards partial ambiguity and compound risk, and source preference between two-point ambiguity and two-point compound risk. Our findings help discriminate among models of ambiguity in the literature.

15.
Inspired by phylogenetic tree construction in computational biology, Lin et al. (The 11th Annual International Symposium on Algorithms and Computation (ISAAC 2000), pp. 539–551, 2000) introduced the notion of a k-phylogenetic root. A k-phylogenetic root of a graph G is a tree T such that the leaves of T are the vertices of G, two vertices are adjacent in G precisely if they are within distance k in T, and all non-leaf vertices of T have degree at least three. The k-phylogenetic root problem is to decide whether such a tree T exists for a given graph G. In addition to introducing this problem, Lin et al. designed linear time constructive algorithms for k ≤ 4, while leaving the problem open for k ≥ 5. In this paper, we partially fill this hole by giving a linear time constructive algorithm to decide whether a given tree chordal graph has a 5-phylogenetic root; this is the largest class of graphs known to have such a construction.
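A verifier for the definition is straightforward even though deciding existence is the hard part; the sketch below checks the three defining conditions of a k-phylogenetic root (networkx is assumed to be available).

```python
import networkx as nx

def is_k_phylogenetic_root(T, G, k):
    """Check whether tree T is a k-phylogenetic root of graph G:
    (i) the leaves of T are exactly the vertices of G,
    (ii) two vertices are adjacent in G iff their distance in T is <= k,
    (iii) every internal node of T has degree >= 3."""
    leaves = {v for v in T if T.degree(v) == 1}
    if leaves != set(G.nodes):
        return False
    if any(T.degree(v) < 3 for v in T if v not in leaves):
        return False
    dist = dict(nx.all_pairs_shortest_path_length(T))
    for u in leaves:
        for v in leaves:
            if u != v and (dist[u][v] <= k) != G.has_edge(u, v):
                return False
    return True

# Example: a star tree with center 'c' is a 2-phylogenetic root of a clique.
T = nx.star_graph(["c", 1, 2, 3])   # first node is the center
G = nx.complete_graph([1, 2, 3])
print(is_k_phylogenetic_root(T, G, 2))  # True: all leaf pairs at distance 2
```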

16.
Balding et al. (1995) showed that randomizing over the k-set space yields much better pooling designs than the random pooling design without the k-restriction. A natural question arises as to whether a smaller subspace, i.e., a space with more structure, will yield even better results. We take the random subset containment design recently proposed by Macula, which randomizes over a subspace of the k-set space, as our guinea pig to compare with the k-set space. Unfortunately the performance of the subset containment design is hard to analyze and only approximations are given. For a set of parameters, we are able to produce either an exact analysis or very good approximations. The comparisons under these parameters seem to favor the k-set space.

17.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M = bT for some constant b ∈ (0, 1] and sample size T. It is shown that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug-in procedure works well in finite samples.
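A minimal sketch of the studentization follows: a Bartlett-kernel long-run variance estimate with truncation lag M = bT. The AR(1) example and b = 0.1 are illustrative assumptions; with fixed b, the resulting t-statistic is referred to nonstandard fixed-b critical values rather than the normal distribution.

```python
import numpy as np

def bartlett_lrv(u, b):
    """Long-run variance estimate with Bartlett kernel and truncation
    lag M = b*T, the 'fixed-b' bandwidth choice described above."""
    T = len(u)
    M = max(1, int(b * T))
    u = u - u.mean()
    lrv = u @ u / T                      # gamma(0)
    for j in range(1, M):
        gj = u[j:] @ u[:-j] / T          # sample autocovariance gamma(j)
        lrv += 2 * (1 - j / M) * gj
    return lrv

# t-statistic for the mean of an autocorrelated series, studentized with
# the robust variance estimate (b = 0.1 is an arbitrary choice).
rng = np.random.default_rng(4)
e = rng.normal(size=500)
u = np.empty(500)
u[0] = e[0]
for t in range(1, 500):
    u[t] = 0.6 * u[t - 1] + e[t]         # AR(1) errors

lrv = bartlett_lrv(u, b=0.1)
t_stat = np.sqrt(len(u)) * u.mean() / np.sqrt(lrv)
print(t_stat)  # compare against fixed-b, not N(0,1), critical values
```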

18.
We test the portability of level-0 assumptions in level-k theory in an experimental investigation of behavior in Coordination, Discoordination, and Hide and Seek games with common, non-neutral frames. Assuming that level-0 behavior depends only on the frame, we derive hypotheses that are independent of prior assumptions about salience. Those hypotheses are not confirmed. Our findings contrast with previous research which has fitted parameterized level-k models to Hide and Seek data. We show that, as a criterion of successful explanation, the existence of a plausible model that replicates the main patterns in these data has a high probability of false positives.

19.
We study a strategic information management problem in the export-processing trade, where the buyer controls the raw material input and sales and the producer is responsible for production. The production is vulnerable to random yield risk. The producer can exert a costly effort to acquire the private yield rate information and discretionarily share it with the buyer. We develop a sequential Bayesian game model that captures three key features of the system—endogenous information endowment, voluntary disclosure, and ex post information sharing—a significant departure from the literature. The optimal disclosure strategy is driven by the trade-off between the gains from Pareto efficiency improvement and self-interested overproduction. It is specified by two thresholds on yield rate: only the middle-yield producers (with yield rate between these two thresholds) share private information to improve supply-demand match; the low- and high-yield producers withhold information to extract excess input from the buyer. The buyer in response penalizes nondisclosure with reduced input and rewards information sharing with a larger order. This strategic interaction is further exacerbated by the double marginalization effect from decentralization, resulting in severe efficiency loss. We examine the effectiveness of three corrective mechanisms—vertical integration, mandatory disclosure, and production restriction—and reveal the costs of the information suppressive effect and the overinvestment incentive and the benefit from concessions on the processing fee. Our study endogenizes the asymmetric supply risk and provides the first attempt to rationalize the strategic interactions of informational and operational incentives in the export-processing system.

20.
In this paper we study high-dimensional time series that have the generalized dynamic factor structure. We develop a test of the null of k0 factors against the alternative that the number of factors is larger than k0 but no larger than k1, where k1 > k0. Our test statistic equals the maximum over k0 < k ≤ k1 of (γ_k − γ_{k+1})/(γ_{k+1} − γ_{k+2}), where γ_i is the i-th largest eigenvalue of the smoothed periodogram estimate of the spectral density matrix of the data at a prespecified frequency. We describe the asymptotic distribution of the statistic, as the dimensionality and the number of observations rise, as a function of the Tracy–Widom distribution and tabulate the critical values of the test. As an application, we test different hypotheses about the number of dynamic factors in macroeconomic time series and about the number of dynamic factors driving excess stock returns.
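The statistic is simple to compute once the smoothed periodogram is in hand. The sketch below uses flat (Daniell) smoothing over 2h + 1 Fourier frequencies; the smoothing scheme, normalizations, and parameter choices are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def test_statistic(X, freq_index, h, k0, k1):
    """Eigenvalue-ratio statistic max over k0 < k <= k1 of
    (g_k - g_{k+1}) / (g_{k+1} - g_{k+2}), where g_i is the i-th largest
    eigenvalue of a smoothed periodogram estimate of the spectral density
    at a prespecified frequency."""
    T, N = X.shape
    dft = np.fft.fft(X - X.mean(axis=0), axis=0) / np.sqrt(2 * np.pi * T)
    # Average the periodogram matrices over 2h+1 neighboring frequencies.
    S = np.zeros((N, N), dtype=complex)
    for j in range(freq_index - h, freq_index + h + 1):
        S += np.outer(dft[j], dft[j].conj())
    S /= (2 * h + 1)
    g = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    ratios = [(g[k - 1] - g[k]) / (g[k] - g[k + 1])
              for k in range(k0 + 1, k1 + 1)]
    return max(ratios)

# Hypothetical use: 100 series, 256 periods, null of 1 factor vs up to 4.
rng = np.random.default_rng(5)
f = rng.normal(size=(256, 1))                  # one common dynamic factor
X = f @ rng.normal(size=(1, 100)) + rng.normal(size=(256, 100))
print(test_statistic(X, freq_index=8, h=4, k0=1, k1=4))
# The statistic is referred to Tracy-Widom-based critical values in the paper.
```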

