Similar Documents
20 similar documents found (search time: 31 ms)
1.
A valid Edgeworth expansion is established for the limit distribution of density-weighted semiparametric averaged derivative estimates of single index models. The leading term that corrects the normal limit varies in magnitude, depending on the choice of bandwidth and kernel order. In general this term has order larger than the n^{-1/2} that prevails in standard parametric problems, but we find circumstances in which it is O(n^{-1/2}), thereby extending the achievement of an n^{-1/2} Berry-Esseen bound in Robinson (1995a). A valid empirical Edgeworth expansion is also established. We also provide theoretical and empirical Edgeworth expansions for a studentized statistic, where some correction terms are different from those for the unstudentized case. We report a Monte Carlo study of finite sample performance.

2.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow-band frequency-domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, that consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time-domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with χ2 null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are √n-consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite-sample performance is included.

3.
We consider the bootstrap unit root tests based on finite order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and used to obtain asymptotic expansions for the Dickey-Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^{-1/4}) and O_p(n^{-1/2}), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey-Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey-Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^{-1/2}) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
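The bootstrap scheme the abstract describes can be sketched in a few lines. The version below is a simplified illustration, not the paper's implementation: it uses the coefficient t-statistic with no deterministic terms, resamples the estimated innovations iid, and rebuilds pseudo-series with the unit root imposed. All function and variable names are ours.

```python
import random
import statistics

def df_stat(y):
    """Dickey-Fuller regression dy_t = rho * y_{t-1} + e_t (no constant).
    Returns the t-statistic for rho = 0 and the residuals."""
    x = y[:-1]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, dy)) / sxx
    resid = [b - rho * a for a, b in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return rho / (s2 / sxx) ** 0.5, resid

def bootstrap_df_test(y, B=399, level=0.05, seed=0):
    """Residual-based bootstrap of the DF test: resample centered residuals,
    cumulate them into a series with a unit root imposed under H0, and take
    the empirical quantile of the bootstrap statistics as the critical value."""
    rng = random.Random(seed)
    t_obs, resid = df_stat(y)
    centered = [e - statistics.fmean(resid) for e in resid]
    t_boot = []
    for _ in range(B):
        e_star = rng.choices(centered, k=len(y) - 1)
        y_star = [y[0]]
        for e in e_star:
            y_star.append(y_star[-1] + e)   # unit root imposed under H0
        t_boot.append(df_stat(y_star)[0])
    crit = sorted(t_boot)[int(level * B)]   # left-tail critical value
    return t_obs, crit, t_obs < crit        # reject if statistic falls below
```

The bootstrap critical value replaces the asymptotic Dickey-Fuller quantile; per the abstract, this is what corrects the second-order errors in rejection probabilities.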

4.
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model, based on the Gaussian likelihood conditional on initial values. We give conditions on the parameters such that the process Xt is fractional of order d and cofractional of order d − b; that is, there exist vectors β for which β′Xt is fractional of order d − b and no other fractionality order is possible. For b = 1, the model nests the I(d − 1) vector autoregressive model. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy 0 ≤ d0 − b0 < 1/2 and b0 ≠ 1/2, for which β0′Xt is (asymptotically) a stationary process. Our main technical contribution is the proof of consistency of the maximum likelihood estimators. To this end, we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are independent and identically distributed with suitable moment conditions and initial values are bounded. Because the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of the estimator of β is mixed Gaussian, while for the remaining parameters it is Gaussian. The limit distribution of the likelihood ratio test for cointegration rank is a functional of fractional Brownian motion of type II. If b0 < 1/2, all limit distributions are Gaussian or chi-squared. We derive similar results for the model with d = b, allowing for a constant term.

5.
It is widely known that when there are errors with a moving-average root close to −1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We consider a class of Modified Information Criteria (MIC) with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on k and adapts to the type of deterministic components present. We use a local asymptotic framework in which the moving-average root is local to −1 to document how the MIC performs better in selecting appropriate values of k. In Monte Carlo experiments, the MIC is found to yield huge size improvements to the DF^GLS test and the feasible point optimal P_T test developed in Elliott, Rothenberg, and Stock (1996). We also extend the M tests developed in Perron and Ng (1996) to allow for GLS detrending of the data. The MIC along with GLS detrended data yields a set of tests with desirable size and power properties.
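The modified criterion can be illustrated concretely. The sketch below is ours, not the authors' code: it implements an MAIC-style rule in the no-deterministic-terms case, MAIC(k) = ln(σ̂²_k) + 2(τ(k) + k)/T with τ(k) = β̂₀² Σ y²_{t−1} / σ̂²_k, where β̂₀ is the coefficient on the lagged level in the ADF regression; the common-sample and variance-divisor choices are simplifying assumptions.

```python
import math

def _ols(X, y):
    """OLS via normal equations and Gaussian elimination (stdlib only)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for p in range(k):                       # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            A[q] = [a - f * b for a, b in zip(A[q], A[p])]
            c[q] -= f * c[p]
    beta = [0.0] * k                         # back substitution
    for p in reversed(range(k)):
        beta[p] = (c[p] - sum(A[p][j] * beta[j]
                              for j in range(p + 1, k))) / A[p][p]
    return beta

def maic_lag(y, kmax=8):
    """Pick the ADF truncation lag k by a modified AIC: the penalty adds
    tau(k), which depends on the estimated coefficient on the lagged level,
    so the criterion adapts when a near-unit MA root inflates that bias."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    T = len(dy) - kmax                       # common effective sample size
    best_k, best_val = 0, float("inf")
    for k in range(kmax + 1):
        X = [[y[t]] + [dy[t - j] for j in range(1, k + 1)]
             for t in range(kmax, len(dy))]
        Y = dy[kmax:]
        beta = _ols(X, Y)
        resid = [yi - sum(b * xi for b, xi in zip(beta, r))
                 for r, yi in zip(X, Y)]
        s2 = sum(e * e for e in resid) / T
        tau = beta[0] ** 2 * sum(r[0] ** 2 for r in X) / s2
        val = math.log(s2) + 2.0 * (tau + k) / T
        if val < best_val:
            best_k, best_val = k, val
    return best_k
```

Because τ(k) stays large until enough lags are included to reduce the bias in β̂₀, the criterion resists the AIC/BIC tendency to truncate too early.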

6.
We propose a semiparametric two-step inference procedure for a finite-dimensional parameter based on moment conditions constructed from high-frequency data. The population moment conditions take the form of temporally integrated functionals of state-variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high-frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second-step GMM estimation, which requires the correction of a high-order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens-type consistent specification test. These infill asymptotic results are based on a novel empirical-process-type theory for general integrated functionals of noisy semimartingale processes.

7.
In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramér–von Mises-type or Kolmogorov–Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and typically have power against n^{-1/2}-local alternatives to some, but not all, sequences of distributions in the null hypothesis. Monte Carlo simulations for five different models show that the methods perform well in finite samples.

8.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
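The flavor of such a rule for standard errors can be sketched as follows. This is a hedged stand-in, not the paper's exact three-step formulas: it uses the approximation that the relative deviation of the B-draw bootstrap SE from its ideal (B = ∞) value has variance roughly (2 + γ)/(4B), with γ the excess kurtosis of the bootstrap distribution of the estimator, and inverts a normal bound for a target percentage deviation pdb with probability 1 − τ. The constants and names are our assumptions.

```python
import math
import random
import statistics

def choose_B_for_se(data, estimator, pdb=10.0, tau=0.05, pilot_B=200, seed=0):
    """Sketch of choosing the number of bootstrap repetitions B so that the
    bootstrap SE is within pdb percent of its B = infinity value with
    probability about 1 - tau."""
    rng = random.Random(seed)
    n = len(data)
    # Pilot bootstrap: estimate the excess kurtosis gamma of the estimator's
    # bootstrap distribution.
    draws = []
    for _ in range(pilot_B):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        draws.append(estimator(sample))
    m = statistics.fmean(draws)
    s2 = statistics.fmean([(d - m) ** 2 for d in draws])
    m4 = statistics.fmean([(d - m) ** 4 for d in draws])
    gamma = m4 / (s2 * s2) - 3.0
    # Invert the normal approximation: z * sqrt((2 + gamma) / (4B)) <= pdb / 100.
    z = statistics.NormalDist().inv_cdf(1 - tau / 2)
    B = math.ceil(10_000 * z * z * (2 + max(gamma, 0.0)) / (4 * pdb ** 2))
    return max(B, pilot_B)
```

With γ near zero (roughly normal bootstrap draws), a 10% accuracy target at the 95% level already requires on the order of 200 repetitions, which is why small conventional choices of B can be inadequate.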

9.
This paper considers random coefficients binary choice models. The main goal is to estimate the density of the random coefficients nonparametrically. This is an ill-posed inverse problem characterized by an integral transform. A new density estimator for the random coefficients is developed, utilizing Fourier–Laplace series on spheres. This approach offers a clear insight on the identification problem. More importantly, it leads to a closed form estimator formula that yields a simple plug-in procedure requiring no numerical optimization. The new estimator, therefore, is easy to implement in empirical applications, while being flexible about the treatment of unobserved heterogeneity. Extensions including treatments of nonrandom coefficients and models with endogeneity are discussed.

10.
The block bootstrap is the best known bootstrap method for time-series data when the analyst does not have a parametric model that reduces the data generation process to simple random sampling. However, the errors made by the block bootstrap converge to zero only slightly faster than those made by first-order asymptotic approximations. This paper describes a bootstrap procedure for data that are generated by a Markov process or a process that can be approximated by a Markov process with sufficient accuracy. The procedure is based on estimating the Markov transition density nonparametrically. Bootstrap samples are obtained by sampling the process implied by the estimated transition density. Conditions are given under which the errors made by the Markov bootstrap converge to zero more rapidly than those made by the block bootstrap.
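The sampling step can be illustrated with a toy version. Instead of a kernel estimate of the transition density (as in the paper), the sketch below uses a nearest-neighbor shortcut: to move the bootstrap chain forward from its current state, it samples the observed successor of one of the k historical states closest to that state. This is our simplified stand-in, not the paper's estimator.

```python
import random

def markov_bootstrap(x, n_boot=None, k=5, seed=0):
    """Generate a bootstrap path by resampling successors of the k observed
    states nearest to the current bootstrap state -- a nearest-neighbor
    approximation to drawing from an estimated Markov transition density."""
    rng = random.Random(seed)
    n = n_boot or len(x)
    pairs = list(zip(x[:-1], x[1:]))        # observed (state, successor) pairs
    series = [rng.choice(x)]                # initialize from the data
    for _ in range(n - 1):
        cur = series[-1]
        pairs.sort(key=lambda p: abs(p[0] - cur))
        series.append(rng.choice(pairs[:k])[1])
    return series
```

Because each transition is drawn conditional on (a neighborhood of) the current state, the resampled path preserves Markov dependence rather than just within-block dependence.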

11.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual-specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual-specific regressors by means of cross-section averages such that asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
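The augmentation step can be made concrete. The sketch below is a deliberately simplified pooled CCE regression with a single regressor and a common intercept (the paper's estimators allow unit-specific coefficients and multiple regressors); the helper and all names are ours.

```python
def _ols(X, y):
    """OLS via normal equations and Gaussian elimination (stdlib only)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for p in range(k):                       # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            A[q] = [a - f * b for a, b in zip(A[q], A[p])]
            c[q] -= f * c[p]
    beta = [0.0] * k                         # back substitution
    for p in reversed(range(k)):
        beta[p] = (c[p] - sum(A[p][j] * beta[j]
                              for j in range(p + 1, k))) / A[p][p]
    return beta

def cce_pooled(y, x):
    """Minimal pooled CCE sketch: regress y_it on x_it augmented with the
    cross-sectional averages of y and x at each t, which filter out the
    common factors asymptotically.  y[i][t], x[i][t] are N x T panels."""
    N, T = len(y), len(y[0])
    ybar = [sum(y[i][t] for i in range(N)) / N for t in range(T)]
    xbar = [sum(x[i][t] for i in range(N)) / N for t in range(T)]
    X = [[x[i][t], ybar[t], xbar[t], 1.0] for i in range(N) for t in range(T)]
    Y = [y[i][t] for i in range(N) for t in range(T)]
    return _ols(X, Y)[0]                     # slope on the observed regressor
```

In the test below the factor enters both y and x, so pooled OLS without the averages would be biased, while the augmented regression recovers the true slope.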

12.
Risk Analysis, 2018, 38(1): 194-209
This article presents the findings from a numerical simulation study that was conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions are represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect the performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended due to conceptual limitations in the technique when applied to ISM data sets from single decision units and due to insufficient power given standard statistical sample sizes from ISM.
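Since unequal population variances are flagged as a key factor, the natural two-sample test is Welch's unequal-variance t-test. The stdlib sketch below illustrates it; the normal approximation to the t p-value is our simplification (an exact implementation would use the t distribution with the Satterthwaite degrees of freedom).

```python
import math
import statistics

def welch_t_test(a, b):
    """Two-sample t-test that does not assume equal variances (Welch).
    Returns the t statistic, Satterthwaite degrees of freedom, and a
    two-sided normal-approximation p-value."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se2 = va / na + vb / nb
    t = (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    p = 2 * (1 - statistics.NormalDist().cdf(abs(t)))
    return t, df, p
```

In a background screening comparison, `a` would hold site ISM replicates and `b` background ISM replicates; a small p-value indicates the site mean differs from background.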

13.
This paper is concerned with inference about a function g that is identified by a conditional moment restriction involving instrumental variables. The paper presents a test of the hypothesis that g belongs to a finite-dimensional parametric family against a nonparametric alternative. The test does not require nonparametric estimation of g and is not subject to the ill-posed inverse problem of nonparametric instrumental variables estimation. Under mild conditions, the test is consistent against any alternative model. In large samples, its power is arbitrarily close to 1 uniformly over a class of alternatives whose distance from the null hypothesis is O(n^{-1/2}), where n is the sample size. In Monte Carlo simulations, the finite-sample power of the new test exceeds that of existing tests.

14.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or bootstrap based on unrestricted parameter estimators are unsatisfactory, in particular in those cases where asymptotic χ2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.

15.
A nonparametric, residual-based block bootstrap procedure is proposed in the context of testing for integrated (unit root) time series. The resampling procedure is based on weak assumptions on the dependence structure of the stationary process driving the random walk and successfully generates unit root integrated pseudo-series retaining the important characteristics of the data. It is more general than previous bootstrap approaches to the unit root problem in that it allows for a very wide class of weakly dependent processes and it is not based on any parametric assumption on the process generating the data. As a consequence the procedure can accurately capture the distribution of many unit root test statistics proposed in the literature. Large sample theory is developed and the asymptotic validity of the block bootstrap-based unit root testing is shown via a bootstrap functional limit theorem. Applications to some particular test statistics of the unit root hypothesis, i.e., least squares and Dickey-Fuller type statistics are given. The power properties of our procedure are investigated and compared to those of alternative bootstrap approaches to carry out the unit root test. Some simulations examine the finite sample performance of our procedure.
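The generation of a unit-root pseudo-series can be sketched simply. The version below uses first differences as the stationary driving sequence (the paper's procedure works with estimated residuals; differencing is the simplest special case), centers them, resamples them in blocks to retain weak dependence, and cumulates the result back into an integrated series.

```python
import random
import statistics

def residual_block_bootstrap_series(y, block_len=10, seed=0):
    """Build a unit-root pseudo-series: center the first differences,
    resample them in contiguous blocks (preserving short-run dependence),
    then cumulate so the resampled series is integrated by construction."""
    rng = random.Random(seed)
    d = [y[t] - y[t - 1] for t in range(1, len(y))]
    mu = statistics.fmean(d)
    d = [v - mu for v in d]                 # centering imposes the null drift
    boot_d = []
    while len(boot_d) < len(d):
        start = rng.randrange(len(d) - block_len + 1)
        boot_d.extend(d[start:start + block_len])
    series = [y[0]]
    for v in boot_d[:len(d)]:
        series.append(series[-1] + v)       # cumulate: unit root imposed
    return series
```

Any unit root statistic recomputed on many such pseudo-series yields a bootstrap null distribution without parametric assumptions on the driving process.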

16.
In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for it. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work under different assumptions on the Markov property, stationarity, and type-invariance in the transition process. Three elements emerge as the important determinants of identification: the time-dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case where the transition function is type-invariant, a time-dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types and that the changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Identification is achieved even when state dependence is present if a model is stationary first-order Markovian and the panel has a moderate time-dimension (T ≥ 6).

17.
This paper considers tests of the parameter on an endogenous variable in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two-sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), the Lagrange multiplier, and the Anderson–Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests. The finite-sample results of the paper are extended to the case of unknown error covariance matrix and possibly nonnormal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.

18.
This research studies the p-robust supply chain network design problem with uncertain demand and cost scenarios. The optimal design integrates supplier selection with the facility location and capacity problem. We provide a new framework for obtaining the relative regret limit, which is critical in robust supply chain design but is assumed to be a known value in the existing literature. We obtain lower and upper bounds for the relative regret limit and obtain a sequence of optimal solutions for a series of relative regret limits between the upper and lower bounds. An algorithm for p-robust supply chain network design is provided. A series of numerical examples is designed to characterize the bottleneck scenarios. A scenario with low probability and a low scenario-optimal objective value has a greater chance of being a bottleneck. To isolate the influence of the relative regret, we also introduce three separate new objective functions in the p-robust design. The proposed theories and approaches provide a sequence of options for decision makers to reduce market risks effectively in supply chain network design.
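The relative regret criterion at the heart of p-robust design can be shown on toy data. The sketch below is an illustration with hypothetical cost tables, not the paper's algorithm: for each candidate design it computes the worst-case relative regret across scenarios and returns the design whose worst case is smallest.

```python
def max_relative_regret(costs, scenario_optima):
    """Min-max relative regret selection.  costs[d][s] is the cost of design
    d under scenario s; scenario_optima[s] is the optimal cost attainable if
    scenario s were known in advance.  Relative regret of d in s is
    (costs[d][s] - scenario_optima[s]) / scenario_optima[s]."""
    best_design, best_worst = None, float("inf")
    regrets = {}
    for d, row in costs.items():
        worst = max((c - scenario_optima[s]) / scenario_optima[s]
                    for s, c in row.items())
        regrets[d] = worst
        if worst < best_worst:
            best_design, best_worst = d, worst
    return best_design, best_worst, regrets
```

A design is p-robust when its worst-case relative regret stays below the limit p; scanning a range of limits, as the abstract describes, traces out the sequence of optimal solutions between the bounds.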

19.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.
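For readers unfamiliar with the subsampling component, a basic subsampling confidence interval looks as follows. This is a generic sketch, not the paper's hybrid or size-corrected procedure: it approximates the law of √n(θ̂ − θ) by that of √b(θ̂_b − θ̂) over size-b subsamples, and the √ rate is an assumption appropriate only for regular statistics.

```python
import random

def subsampling_ci(data, stat, b, level=0.95, seed=0, n_sub=500):
    """Basic subsampling CI: recompute the statistic on contiguous size-b
    subsamples (contiguity is the time-series convention), center and scale
    the draws, and invert the empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    theta = stat(data)
    draws = []
    for _ in range(n_sub):
        start = rng.randrange(n - b + 1)
        draws.append((b ** 0.5) * (stat(data[start:start + b]) - theta))
    draws.sort()
    alpha = 1 - level
    lo_q = draws[int((1 - alpha / 2) * n_sub) - 1]   # upper quantile
    hi_q = draws[int((alpha / 2) * n_sub)]           # lower quantile
    return theta - lo_q / n ** 0.5, theta - hi_q / n ** 0.5
```

The paper's point is precisely that in nonregular models this vanilla construction can have incorrect asymptotic size, motivating the hybrid and size-correction methods.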

20.
This paper uses revealed preference inequalities to provide the tightest possible (best) nonparametric bounds on predicted consumer responses to price changes using consumer-level data over a finite set of relative price changes. These responses are allowed to vary nonparametrically across the income distribution. This is achieved by combining the theory of revealed preference with the semiparametric estimation of consumer expansion paths (Engel curves). We label these expansion path based bounds on demand responses as E-bounds. Deviations from revealed preference restrictions are measured by preference perturbations which are shown to usefully characterize taste change and to provide a stochastic environment within which violations of revealed preference inequalities can be assessed.
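The revealed preference inequalities underlying such bounds can be checked mechanically on a finite data set. The sketch below tests the Generalized Axiom of Revealed Preference (GARP) via a transitive closure; it is a textbook building block, not the paper's E-bounds computation.

```python
def satisfies_garp(prices, quantities):
    """Check GARP on observed price/quantity pairs.  prices[t] and
    quantities[t] are same-length vectors for observation t.  Bundle t is
    directly revealed preferred to s if t's bundle cost at t's prices is at
    least the cost of s's bundle at t's prices."""
    T = len(prices)

    def spend(t, s):
        # cost of bundle s evaluated at the prices of observation t
        return sum(p * q for p, q in zip(prices[t], quantities[s]))

    # Direct revealed preference relation.
    R = [[spend(t, t) >= spend(t, s) for s in range(T)] for t in range(T)]
    for k in range(T):                       # Floyd-Warshall transitive closure
        for i in range(T):
            for j in range(T):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    for t in range(T):
        for s in range(T):
            if R[t][s] and spend(s, s) > spend(s, t):
                return False                 # strict reversal: GARP violated
    return True
```

Data that pass the check are consistent with maximization of some well-behaved utility; the paper's preference perturbations quantify how far violating data are from passing.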
