Similar Documents
 A total of 20 similar documents were found (search time: 31 ms)
1.
A valid Edgeworth expansion is established for the limit distribution of density-weighted semiparametric averaged derivative estimates of single index models. The leading term that corrects the normal limit varies in magnitude, depending on the choice of bandwidth and kernel order. In general this term has order larger than the n^{-1/2} that prevails in standard parametric problems, but we find circumstances in which it is O(n^{-1/2}), thereby extending the achievement of an n^{-1/2} Berry-Esseen bound in Robinson (1995a). A valid empirical Edgeworth expansion is also established. We also provide theoretical and empirical Edgeworth expansions for a studentized statistic, where some correction terms are different from those for the unstudentized case. We report a Monte Carlo study of finite sample performance.
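As a concrete illustration of the object under study, below is a minimal sketch (not the authors' code) of a density-weighted averaged derivative estimator in the spirit of Powell, Stock, and Stoker (1989), for a scalar regressor and a Gaussian kernel; the bandwidth h is exactly the tuning choice whose effect on the Edgeworth correction term is described above, and all function names and parameter values are illustrative.

```python
# Hedged sketch of a density-weighted averaged derivative estimator (scalar X,
# Gaussian kernel); h is the bandwidth discussed in the abstract above.
import numpy as np

def kernel_deriv(u):
    """Derivative of the standard Gaussian kernel."""
    return -u * np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def density_weighted_avg_derivative(x, y, h):
    """delta_hat = -1/(n(n-1)) * sum_{i != j} h^{-2} K'((x_i - x_j)/h) (y_i - y_j)."""
    n = len(x)
    kd = kernel_deriv((x[:, None] - x[None, :]) / h) / h ** 2   # h^{-(d+1)} K'(.), d = 1
    np.fill_diagonal(kd, 0.0)                                   # drop i = j terms
    return -np.sum(kd * (y[:, None] - y[None, :])) / (n * (n - 1))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.sin(x) + 0.1 * rng.standard_normal(500)   # a single-index model with index x
print(density_weighted_avg_derivative(x, y, h=0.5))
```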

2.
This paper presents a new test for fractionally integrated (FI) processes. In particular, we propose a testing procedure in the time domain that extends the well-known Dickey-Fuller approach, originally designed for the I(1) versus I(0) case, to the more general setup of FI(d_0) versus FI(d_1), with d_1 < d_0. When d_0 = 1, the proposed test statistics are based on the OLS estimator, or its t-ratio, of the coefficient on Δ^{d_1}y_{t-1} in a regression of Δy_t on Δ^{d_1}y_{t-1} and, possibly, some lags of Δy_t. When d_1 is not taken to be known a priori, a pre-estimation of d_1 is needed to implement the test. We show that the choice of any T^{1/2}-consistent estimator of d_1 ∈ [0, 1) suffices to make the test feasible, while achieving asymptotic normality. Monte Carlo simulations support the analytical results derived in the paper and show that the proposed tests fare very well, both in terms of power and size, when compared with others available in the literature. The paper ends with two empirical applications.
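To make the proposed regression concrete, the sketch below implements the d_0 = 1 case under simplifying assumptions (no deterministic terms, no lag augmentation of Δy_t, d_1 treated as known); the helper names and the random-walk example are illustrative only.

```python
# Hedged sketch of a fractional Dickey-Fuller type t-ratio:
# regress Δy_t on Δ^{d1} y_{t-1} and use the t-ratio of the slope.
import numpy as np

def frac_diff(y, d):
    """Apply (1 - L)^d via its truncated binomial expansion."""
    n = len(y)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k      # recursion for the binomial weights
    out = np.zeros(n)
    for t in range(n):
        out[t] = np.dot(w[: t + 1], y[t::-1])  # expansion truncated at time t
    return out

def fdf_t_ratio(y, d1):
    """t-ratio of phi in the regression  Δy_t = phi * Δ^{d1} y_{t-1} + error."""
    dy = np.diff(y)                    # Δy_t
    x = frac_diff(y, d1)[:-1]          # Δ^{d1} y_{t-1}
    phi = np.dot(x, dy) / np.dot(x, x)
    resid = dy - phi * x
    se = np.sqrt(np.sum(resid ** 2) / (len(dy) - 1) / np.dot(x, x))
    return phi / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))    # a random walk: FI(1) under the null
print(fdf_t_ratio(y, d1=0.6))
```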

3.
The topic of this paper is inference in models in which parameters are defined by moment inequalities and/or equalities. The parameters may or may not be identified. This paper introduces a new class of confidence sets and tests based on generalized moment selection (GMS). GMS procedures are shown to have correct asymptotic size in a uniform sense and are shown not to be asymptotically conservative. The power of GMS tests is compared to that of subsampling, m out of n bootstrap, and “plug-in asymptotic” (PA) tests. The latter three procedures are the only general procedures in the literature that have been shown to have correct asymptotic size (in a uniform sense) for the moment inequality/equality model. GMS tests are shown to have asymptotic power that dominates that of subsampling, m out of n bootstrap, and PA tests. Subsampling and m out of n bootstrap tests are shown to have asymptotic power that dominates that of PA tests.

4.
We study a Monte Carlo algorithm for computing marginal and stationary densities of stochastic models with the Markov property, establishing global asymptotic normality and O_P(n^{-1/2}) convergence. Asymptotic normality is used to derive error bounds in terms of the distribution of the norm deviation.
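As a hedged illustration of what a Monte Carlo density estimator for a Markov model can look like, the sketch below uses a generic "look-ahead" style construction that averages a known one-step conditional density along a simulated path; whether this matches the paper's exact algorithm is an assumption, and the AR(1) example is purely illustrative.

```python
# Hedged sketch: estimate a stationary density by averaging the known one-step
# conditional density p(y | x) over a simulated path of the Markov chain.
import numpy as np
from scipy.stats import norm

rho, sigma = 0.8, 1.0

def simulate_ar1(n, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    return x

def lookahead_density(y_grid, x_path):
    # average the conditional density p(y | X_t) over the simulated path
    return np.mean(norm.pdf(y_grid[:, None], loc=rho * x_path[None, :], scale=sigma), axis=1)

rng = np.random.default_rng(1)
path = simulate_ar1(5000, rng)
grid = np.linspace(-4, 4, 9)
est = lookahead_density(grid, path)
true = norm.pdf(grid, scale=sigma / np.sqrt(1 - rho ** 2))   # stationary N(0, σ²/(1-ρ²))
print(np.max(np.abs(est - true)))    # Monte Carlo error, roughly O_P(n^{-1/2}) pointwise
```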

5.
This paper proposes a test for common conditionally heteroskedastic (CH) features in asset returns. Following Engle and Kozicki (1993), the common CH features property is expressed in terms of testable overidentifying moment restrictions. However, as we show, these moment conditions have a degenerate Jacobian matrix at the true parameter value and therefore the standard asymptotic results of Hansen (1982) do not apply. We show in this context that Hansen's (1982) J-test statistic is asymptotically distributed as the minimum of the limit of a certain random process with a markedly nonstandard distribution. If two assets are considered, this asymptotic distribution is a fifty-fifty mixture of χ^2_{H-1} and χ^2_H, where H is the number of moment conditions, as opposed to a χ^2_{H-1}. With more than two assets, this distribution lies between χ^2_{H-p} and χ^2_H (p denotes the number of parameters). These results show that ignoring the lack of first-order identification of the moment condition model leads to oversized tests with a possibly increasing overrejection rate with the number of assets. A Monte Carlo study illustrates these findings.
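For the two-asset case described above, a critical value consistent with the fifty-fifty mixture can be computed directly instead of taking it from the χ^2_{H-1} table; the sketch below (illustrative H and significance level, standard SciPy routines) shows why ignoring the mixture makes the test oversized.

```python
# Hedged sketch: critical value from the 50-50 mixture of chi-square distributions
# described above (two-asset case), compared with the usual chi-square cutoff.
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

def mixture_critical_value(H, alpha=0.05):
    # solve 0.5 * P(chi2_{H-1} > c) + 0.5 * P(chi2_H > c) = alpha for c
    tail = lambda c: 0.5 * chi2.sf(c, H - 1) + 0.5 * chi2.sf(c, H) - alpha
    return brentq(tail, 1e-6, 200.0)

H = 6                                          # illustrative number of moment conditions
print(chi2.ppf(0.95, H - 1), mixture_critical_value(H))   # the mixture cutoff is larger
```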

6.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or on a bootstrap that uses unrestricted parameter estimators are unsatisfactory, in particular in those cases where the asymptotic χ^2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.

7.
This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test does not require nonparametric estimation of g and is not subject to the ill-posed inverse problem of nonparametric instrumental variables estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is O(n^{-1/2}), where n is the sample size. In Monte Carlo simulations, the finite-sample power of the new test exceeds that of existing tests.

8.
We propose bootstrap methods for a general class of nonlinear transformations of realized volatility which includes the raw version of realized volatility and its logarithmic transformation as special cases. We consider the independent and identically distributed (i.i.d.) bootstrap and the wild bootstrap (WB), and prove their first-order asymptotic validity under general assumptions on the log-price process that allow for drift and leverage effects. We derive Edgeworth expansions in a simpler model that rules out these effects. The i.i.d. bootstrap provides a second-order asymptotic refinement when volatility is constant, but not otherwise. The WB yields a second-order asymptotic refinement under stochastic volatility provided we choose the external random variable used to construct the WB data appropriately. None of these methods provides third-order asymptotic refinements. Both methods improve upon the first-order asymptotic theory in finite samples.
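A minimal sketch of the two resampling schemes applied to realized volatility RV = Σ_i r_i^2 is given below, under simplifying assumptions: constant volatility, no drift or leverage, and a standard normal external variable for the wild bootstrap (the refinement results above depend on choosing that variable appropriately); all values are illustrative.

```python
# Hedged sketch: i.i.d. bootstrap and wild bootstrap (WB) applied to realized volatility.
import numpy as np

def realized_vol(r):
    return np.sum(r ** 2)

def iid_bootstrap_rv(r, B, rng):
    n = len(r)
    return np.array([realized_vol(r[rng.integers(0, n, n)]) for _ in range(B)])

def wild_bootstrap_rv(r, B, rng):
    # WB: multiply each intraday return by an external random variable (here standard normal)
    n = len(r)
    return np.array([realized_vol(r * rng.standard_normal(n)) for _ in range(B)])

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(390)             # one day of 1-minute returns, constant volatility
rv = realized_vol(r)
boot = wild_bootstrap_rv(r, B=999, rng=rng)
ci = np.quantile(boot, [0.025, 0.975])          # simple percentile interval for RV
print(rv, ci)
```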

9.
A nonparametric, residual-based block bootstrap procedure is proposed in the context of testing for integrated (unit root) time series. The resampling procedure is based on weak assumptions on the dependence structure of the stationary process driving the random walk and successfully generates unit root integrated pseudo-series retaining the important characteristics of the data. It is more general than previous bootstrap approaches to the unit root problem in that it allows for a very wide class of weakly dependent processes and it is not based on any parametric assumption on the process generating the data. As a consequence the procedure can accurately capture the distribution of many unit root test statistics proposed in the literature. Large sample theory is developed and the asymptotic validity of the block bootstrap-based unit root testing is shown via a bootstrap functional limit theorem. Applications to some particular unit root test statistics, namely least squares and Dickey-Fuller type statistics, are given. The power properties of our procedure are investigated and compared to those of alternative bootstrap approaches to carrying out the unit root test. Some simulations examine the finite sample performance of our procedure.
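A hedged sketch of the general construction follows: under the unit root null, the centered first differences play the role of the stationary residuals, blocks of them are resampled and re-integrated into a unit-root pseudo-series, and a Dickey–Fuller type statistic is recomputed on each pseudo-series. The block length, the coefficient-form statistic, and the function names are illustrative simplifications of the procedure described above.

```python
# Hedged sketch of a residual-based block bootstrap for a unit root test.
import numpy as np

def df_coefficient_stat(y):
    """n * (rho_hat - 1) from the regression y_t = rho * y_{t-1} + error."""
    y0, y1 = y[:-1], y[1:]
    rho = np.dot(y0, y1) / np.dot(y0, y0)
    return len(y1) * (rho - 1.0)

def block_bootstrap_unit_root(y, B=499, block=10, rng=None):
    rng = rng or np.random.default_rng()
    u = np.diff(y)
    u = u - u.mean()                               # centered residuals under the null
    n = len(u)
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        pseudo_u = np.concatenate([u[i:i + block] for i in idx])[:n]
        pseudo_y = np.concatenate(([y[0]], y[0] + np.cumsum(pseudo_u)))  # re-integrated series
        stats[b] = df_coefficient_stat(pseudo_y)
    return stats

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(300))            # unit root process
obs = df_coefficient_stat(y)
boot = block_bootstrap_unit_root(y, rng=rng)
print(obs, np.mean(boot <= obs))                   # bootstrap p-value for a left-tailed test
```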

10.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and sieve QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

11.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

12.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and vicinities of unity that are wider than O(n^{-1/3}). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.
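For intuition on the inversion step, the sketch below builds a confidence interval for the localizing coefficient c by simulating the null distribution of a Dickey–Fuller t-ratio at each ρ = 1 + c/n on a grid and keeping the values that are not rejected; this brute-force simulation is an illustrative stand-in for the local-to-unity asymptotic tabulations used by Stock (1991), and the grid, sample size, and function names are arbitrary choices.

```python
# Hedged sketch: confidence interval for the localizing coefficient c by test inversion.
import numpy as np

def df_t_stat(y):
    """t-ratio of phi in  Δy_t = phi * y_{t-1} + error  (phi = rho - 1)."""
    y0, dy = y[:-1], np.diff(y)
    phi = np.dot(y0, dy) / np.dot(y0, y0)
    resid = dy - phi * y0
    se = np.sqrt(np.sum(resid ** 2) / (len(dy) - 1) / np.dot(y0, y0))
    return phi / se

def ci_for_c_by_inversion(y, c_grid, n_sim=200, alpha=0.05, rng=None):
    """Keep every c whose simulated null distribution does not reject the observed statistic."""
    rng = rng or np.random.default_rng()
    n, t_obs, kept = len(y), df_t_stat(y), []
    for c in c_grid:
        rho = 1.0 + c / n                          # local-to-unity root
        sims = np.empty(n_sim)
        for s in range(n_sim):
            e = rng.standard_normal(n)
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + e[t]
            sims[s] = df_t_stat(x)
        lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
        if lo <= t_obs <= hi:
            kept.append(c)
    return (min(kept), max(kept)) if kept else None

rng = np.random.default_rng(4)
y = np.cumsum(rng.standard_normal(200))            # data generated with c = 0 (exact unit root)
print(ci_for_c_by_inversion(y, np.arange(-20.0, 6.0, 2.0), rng=rng))
```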

13.
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic “beta-min” conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted ℓ_1-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
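A minimal sketch of the two-stage idea (not the authors' implementation) appears below: the endogenous regressor is predicted from many instruments by Lasso, and the fitted values serve as the estimated optimal instrument in a standard IV step. Cross-validated penalty selection via scikit-learn's LassoCV is a simplification of the paper's data-driven penalty, and the simulated design is illustrative.

```python
# Hedged sketch: Lasso first stage for a linear IV model with many instruments.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, p = 200, 300                                     # more instruments than observations
Z = rng.standard_normal((n, p))
pi = np.zeros(p)
pi[:5] = 1.0                                        # approximately sparse first stage
v = rng.standard_normal(n)
e = 0.5 * v + rng.standard_normal(n)                # endogeneity through correlated errors
x = Z @ pi + v
y = 1.0 * x + e                                     # true structural coefficient = 1

# First stage: Lasso prediction of x from the instruments (estimated optimal instrument)
d_hat = LassoCV(cv=5).fit(Z, x).predict(Z)

# Second stage: IV estimator using the fitted first stage as the instrument
beta_iv = np.dot(d_hat, y) / np.dot(d_hat, x)
beta_ols = np.dot(x, y) / np.dot(x, x)              # biased by endogeneity
print(beta_iv, beta_ols)
```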

14.
In this paper an O(n^2) mathematical formulation for in silico sequence selection in de novo protein design proposed by Klepeis et al. (2003, 2004), in which the number of additional variables and linear constraints scales with the square of the number of binary variables, is compared to three O(n) formulations. It is found that the O(n^2) formulation is superior to the O(n) formulations on most sequence search spaces. The superiority of the O(n^2) formulation is due to the reformulation linearization techniques (RLTs), since the O(n^2) formulation without RLTs is found to be computationally less efficient than the O(n) formulations. In addition, new algorithmic enhancing components of RLTs with inequality constraints, triangle inequalities, and Dead-End Elimination (DEE) type preprocessing are added to the O(n^2) formulation. The current best O(n^2) formulation, which is the original formulation from Klepeis et al. (2003, 2004) plus DEE type preprocessing, is proposed for in silico sequence search. For a test problem with a search space of 3.4×10^{45} sequences, this new improved model is able to reduce the required CPU time by 67%.

15.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a very attractive alternative computationally to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and to nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of overidentifying restrictions.
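To illustrate the k-step idea, the sketch below applies it to a simple logit maximum likelihood estimator: on each nonparametric iid bootstrap sample, only k Newton-Raphson steps are taken starting from the full-sample estimate instead of fully re-maximizing. The model, sample sizes, and function names are illustrative, not taken from the paper.

```python
# Hedged sketch of a k-step bootstrap for an extremum estimator (logit MLE, scalar coefficient).
import numpy as np

def logit_score_hessian(beta, x, y):
    p = 1.0 / (1.0 + np.exp(-x * beta))
    score = np.sum((y - p) * x)                      # gradient of the log-likelihood
    hess = -np.sum(p * (1 - p) * x ** 2)             # Hessian of the log-likelihood
    return score, hess

def newton_solve(x, y, beta0=0.0, iters=50):
    beta = beta0
    for _ in range(iters):
        s, h = logit_score_hessian(beta, x, y)
        beta -= s / h
    return beta

def k_step_bootstrap(x, y, beta_hat, k=1, B=499, rng=None):
    rng = rng or np.random.default_rng()
    n, out = len(y), np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)                  # nonparametric iid bootstrap sample
        beta = beta_hat                              # start from the full-sample estimate
        for _ in range(k):                           # take only k Newton-Raphson steps
            s, h = logit_score_hessian(beta, x[idx], y[idx])
            beta -= s / h
        out[b] = beta
    return out

rng = np.random.default_rng(7)
x = rng.standard_normal(400)
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-0.5 * x))).astype(float)
beta_hat = newton_solve(x, y)
draws = k_step_bootstrap(x, y, beta_hat, k=2, rng=rng)
print(beta_hat, np.std(draws))                       # bootstrap standard error from k-step draws
```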

16.
We investigated the problem of constructing the maximum consensus tree from rooted triples. We showed the NP-hardness of the problem and developed exact and heuristic algorithms. The exact algorithm is based on the dynamic programming strategy and runs in O((m + n^2)·3^n) time and O(2^n) space. The heuristic algorithms run in polynomial time and their performances are tested and shown by comparing with the optimal solutions. In the tests, the worst and average relative error ratios are 1.200 and 1.072, respectively. We also implemented the two heuristic algorithms proposed by Gasieniec et al. The experimental results show that our heuristic algorithm is better than theirs in most of the tests.

17.
We develop a new test of a parametric model of a conditional mean function against a nonparametric alternative. The test adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric model converges to zero at the fastest possible rate. This rate is slower than n^{-1/2}. Some existing tests have nontrivial power against restricted classes of alternatives whose distance from the parametric model decreases at the rate n^{-1/2}. There are, however, sequences of alternatives against which these tests are inconsistent and ours is consistent. As a consequence, there are alternative models for which the finite-sample power of our test greatly exceeds that of existing tests. This conclusion is illustrated by the results of some Monte Carlo experiments.

18.
In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramér–von Mises-type or Kolmogorov–Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and typically have power against n^{-1/2}-local alternatives to some, but not all, sequences of distributions in the null hypothesis. Monte Carlo simulations for five different models show that the methods perform well in finite samples.

19.
We study the problem of separating sublinear time computations via approximating the diameter for a sequence S = p_1 p_2 ⋯ p_n of points in a metric space, in which any two consecutive points have the same distance. The computation is considered respectively under deterministic, zero error randomized, and bounded error randomized models. We obtain a class of separations using various versions of the approximate diameter problem based on restrictions on input data. We derive tight sublinear time separations for each of the three computation models via proving that computation with O(n^r) time is strictly more powerful than that with O(n^{r-ε}) time, where r and ε are arbitrary parameters in (0, 1) and (0, r) respectively. We show that, for any parameter r ∈ (0, 1), the bounded error randomized sublinear time computation in time O(n^r) cannot be simulated by any zero error randomized sublinear time algorithm in o(n) time or queries; and the same is true for zero error randomized computation versus deterministic computation.

20.
The asymptotic refinements attributable to the block bootstrap for time series are not as large as those of the nonparametric iid bootstrap or the parametric bootstrap. One reason is that the independence between the blocks in the block bootstrap sample does not mimic the dependence structure of the original sample. This is the join-point problem. In this paper, we propose a method of solving this problem. The idea is not to alter the block bootstrap. Instead, we alter the original sample statistics to which the block bootstrap is applied. We introduce block statistics that possess join-point features that are similar to those of the block bootstrap versions of these statistics. We refer to the application of the block bootstrap to block statistics as the block–block bootstrap. The asymptotic refinements of the block–block bootstrap are shown to be greater than those obtained with the block bootstrap and close to those obtained with the nonparametric iid bootstrap and parametric bootstrap.
