Similar Documents
20 similar documents found.
1.
This paper develops an asymptotic theory for time series binary choice models with nonstationary explanatory variables generated as integrated processes. Both logit and probit models are covered. The maximum likelihood (ML) estimator is consistent but a new phenomenon arises in its limit distribution theory. The estimator consists of a mixture of two components, one of which is parallel to and the other orthogonal to the direction of the true parameter vector, with the latter being the principal component. The ML estimator is shown to converge at a rate of n^{3/4} along its principal component but has the slower rate of n^{1/4} convergence in all other directions. This is the first instance known to the authors of multiple convergence rates in models where the regressors have the same (full rank) stochastic order and where the parameters appear in linear forms of these regressors. It is a consequence of the fact that the estimating equations involve nonlinear integrable transformations of linear forms of integrated processes as well as polynomials in these processes, and the asymptotic behavior of these elements is quite different. The limit distribution of the ML estimator is derived and is shown to be a mixture of two mixed normal distributions with mixing variates that are dependent upon Brownian local time as well as Brownian motion. It is further shown that the sample proportion of binary choices follows an arc sine law and therefore spends most of its time in the neighborhood of zero or unity. The result has implications for policy decision making that involves binary choices and where the decisions depend on economic fundamentals that involve stochastic trends. Our limit theory shows that, in such conditions, policy is likely to manifest streams of little intervention or intensive intervention.
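A minimal simulation sketch (not from the paper) illustrating the arc sine behavior described above: for a probit-type binary choice driven by a single integrated regressor, the sample proportion of ones piles up near zero or one. The sample size, number of replications, and parameter value below are arbitrary illustrative choices.

```python
import numpy as np

# Simulate a probit-type binary choice with an integrated (random walk) regressor
# and record the sample proportion of ones in each replication.
rng = np.random.default_rng(0)
n, reps, theta = 500, 2000, 1.0          # illustrative sample size, replications, parameter
proportions = np.empty(reps)
for r in range(reps):
    x = np.cumsum(rng.standard_normal(n))          # integrated regressor
    y = theta * x + rng.standard_normal(n) > 0     # binary choice indicator
    proportions[r] = y.mean()

# The mass piles up near 0 and 1 rather than near 1/2, as an arc sine law suggests.
hist, _ = np.histogram(proportions, bins=10, range=(0.0, 1.0))
print(hist)
```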

2.
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross section (n) and time series (T) observations. The limit theory allows for both sequential limits, wherein T → ∞ followed by n → ∞, and joint limits where T, n → ∞ simultaneously; and the relationship between these multidimensional limits is explored. The panel structures considered allow for no time series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near-homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long-run average parameters both within and between subgroups of the full population.

3.
We propose a novel statistic for conducting joint tests on all the structural parameters in instrumental variables regression. The statistic is straightforward to compute and equals a quadratic form of the score of the concentrated log-likelihood. It therefore attains its minimal value of zero at the maximum likelihood estimator. The statistic has a χ^2 limiting distribution with a degrees-of-freedom parameter equal to the number of structural parameters. The limiting distribution does not depend on nuisance parameters. The statistic overcomes the deficiencies of the Anderson–Rubin statistic, whose limiting distribution has a degrees-of-freedom parameter equal to the number of instruments, and of the likelihood-based Wald, likelihood ratio, and Lagrange multiplier statistics, whose limiting distributions depend on nuisance parameters. Size and power comparisons reveal that the statistic is an (asymptotically) size-corrected likelihood ratio statistic. We apply the statistic to the Angrist–Krueger (1991) data and find results similar to those of Staiger and Stock (1997).
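For context, the following is a rough sketch of the Anderson–Rubin statistic that the abstract uses as a benchmark (the proposed score-type statistic itself is not reproduced here); the linear IV design, instrument count, and parameter values are simulated assumptions.

```python
import numpy as np

def anderson_rubin(y, X, Z, beta0):
    """AR statistic: under H0 beta = beta0 (homoskedastic errors), k*AR -> chi^2(k)."""
    n, k = Z.shape
    e = y - X @ beta0                                   # residual under the null
    e_fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]    # projection of e on span(Z)
    ssr_fit = e @ e_fit
    ssr_res = e @ e - ssr_fit
    return (ssr_fit / k) / (ssr_res / (n - k))

# Simulated linear IV design (illustrative): one endogenous regressor, four instruments.
rng = np.random.default_rng(1)
n = 400
Z = rng.standard_normal((n, 4))
v = rng.standard_normal(n)
X = Z @ np.array([1.0, 0.5, 0.0, 0.0]) + v              # first stage
y = 0.5 * X + 0.8 * v + rng.standard_normal(n)          # structural equation, beta = 0.5
print(anderson_rubin(y, X[:, None], Z, np.array([0.5])))
```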

4.
We analyze under which conditions a given vector field can be disaggregated as a linear combination of gradients. This problem is typical of aggregation theory, as illustrated by the literature on the characterization of aggregate market demand and excess demand. We argue that exterior differential calculus provides very useful tools to address these problems. In particular, we show, using these techniques, that any analytic mapping in R^n satisfying Walras' Law can be locally decomposed as the sum of n individual, utility-maximizing market demand functions. In addition, we show that the result holds for arbitrary (price-dependent) income distributions, and that the decomposition can be chosen such that it varies continuously with the mapping. Finally, when the income distribution can be freely chosen, the decomposition requires only n/2 agents.

5.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross-section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high-dimensional nuisance parameter. We adopt a "fixed-effects" approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least-squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross-products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω either by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property. Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average, taken with respect to the prior distribution for (γ, δ, ρ), of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random-effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N → ∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

6.
We consider the bootstrap unit root tests based on finite-order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and is used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^{-1/4}) and O_p(n^{-1/2}), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^{-1/2}) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
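A rough sketch of a residual-based bootstrap for the Dickey–Fuller t-test in the spirit of the procedure described above, with the unit root imposed under the null; the lag order, sample size, and number of bootstrap replications are illustrative choices rather than the paper's settings.

```python
import numpy as np

def adf_tstat(y, p=1):
    """ADF regression dy_t = rho*y_{t-1} + sum_j a_j*dy_{t-j} + e_t; return (t(rho), a_hat, residuals)."""
    dy = np.diff(y)
    Y = dy[p:]
    X = np.column_stack([y[p:-1]] + [dy[p - j:-j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    s2 = e @ e / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se, beta[1:], e

rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(300))            # data with a true unit root
t_obs, a_hat, resid = adf_tstat(y, p=1)

t_boot = []
for _ in range(499):                               # resample residuals iid, unit root imposed
    e_star = rng.choice(resid, size=len(resid), replace=True)
    dy_star = np.zeros(len(e_star))
    for t in range(len(e_star)):                   # rebuild first differences under the null
        dy_star[t] = (a_hat[0] * dy_star[t - 1] if t > 0 else 0.0) + e_star[t]
    t_boot.append(adf_tstat(np.cumsum(dy_star), p=1)[0])

print(t_obs, np.quantile(t_boot, 0.05))            # observed statistic vs bootstrap 5% critical value
```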

7.
We propose an estimation method for models of conditional moment restrictions, which contain finite-dimensional unknown parameters (θ) and infinite-dimensional unknown functions (h). Our proposal is to approximate h with a sieve and to estimate θ and the sieve parameters jointly by applying the method of minimum distance. We show that: (i) the sieve estimator of h is consistent with a rate faster than n^{-1/4} under a certain metric; (ii) the estimator of θ is √n-consistent and asymptotically normally distributed; (iii) the estimator for the asymptotic covariance of the θ estimator is consistent and easy to compute; and (iv) the optimally weighted minimum distance estimator of θ attains the semiparametric efficiency bound. We illustrate our results with two examples: a partially linear regression with an endogenous nonparametric part, and a partially additive IV regression with a link function.
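A minimal sketch of the sieve minimum distance idea for the partially linear IV example mentioned above, using a polynomial sieve for h and an identity weighting (which reduces to 2SLS with an instrument basis); the sieve orders and the data-generating design are assumptions made for illustration.

```python
import numpy as np

# Model: Y = theta*X + h(Z) + e with E[e | W1, W2] = 0, h unknown, X endogenous.
rng = np.random.default_rng(3)
n = 2000
W1 = rng.standard_normal(n)                     # exogenous variable entering h
W2 = rng.standard_normal(n)                     # excluded instrument
v = rng.standard_normal(n)
X = 0.8 * W2 + v                                # endogenous regressor
Z = W1
Y = 1.0 * X + np.sin(Z) + 0.6 * v + rng.standard_normal(n)   # theta = 1, h(z) = sin(z)

kz = 5                                          # polynomial sieve order for h
Pz = np.column_stack([Z ** j for j in range(kz + 1)])
Bw = np.column_stack([Pz, W2, W2 ** 2])         # basis approximating the conditional moment
R = np.column_stack([X, Pz])                    # regressors: X plus sieve terms

# Identity-weighted minimum distance: project the regressors on the instrument
# basis, then run least squares of Y on the projected regressors.
R_hat = Bw @ np.linalg.lstsq(Bw, R, rcond=None)[0]
coef = np.linalg.lstsq(R_hat, Y, rcond=None)[0]
print(coef[0])                                  # estimate of theta, close to 1
```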

8.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed-form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root-n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged-in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite-sample properties of the estimator are investigated through Monte Carlo simulations.

9.
We present a polynomial-time perfect sampler for the Q-Ising with a vertex-independent noise. The Q-Ising, one of the generalized models of the Ising model, arose in the context of Bayesian image restoration in statistical mechanics. We study the distribution of the Q-Ising on a two-dimensional square lattice over n vertices, that is, we deal with a discrete state space {1,…,Q}^n for a positive integer Q. Employing the Q-Ising (having a parameter β) as a prior distribution, and assuming a Gaussian noise (having another parameter α), a posterior is obtained from Bayes' formula. Furthermore, we generalize the model: the noise distribution need not be Gaussian but may be any vertex-independent noise. We first present a Gibbs sampler for our posterior, and also present a perfect sampler by defining a coupling via a monotone update function. Then, we show an O(n log n) mixing time of the Gibbs sampler for the generalized model under the condition that β is sufficiently small (whatever the distribution of the noise is). In the case of Gaussian noise, we obtain another, more natural condition for rapid mixing, namely that α is sufficiently large relative to β. Thereby, we show that the expected running time of our sampler is O(n log n).
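A minimal single-site Gibbs sampler in the spirit of the sampler described above, for a Q-state lattice prior with a pairwise quadratic potential and Gaussian noise; the exact Q-Ising potential and the perfect-sampling coupling of the paper are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def gibbs_sweep(x, y, Q, beta, alpha, rng):
    """One sweep of single-site updates from the full conditionals (periodic boundary)."""
    L = x.shape[0]
    states = np.arange(1, Q + 1)
    for i in range(L):
        for j in range(L):
            nbrs = np.array([x[(i + 1) % L, j], x[(i - 1) % L, j],
                             x[i, (j + 1) % L], x[i, (j - 1) % L]])
            log_prior = -beta * np.sum((states[:, None] - nbrs[None, :]) ** 2, axis=1)
            log_lik = -alpha * (y[i, j] - states) ** 2      # Gaussian noise term
            logp = log_prior + log_lik
            p = np.exp(logp - logp.max())
            x[i, j] = rng.choice(states, p=p / p.sum())
    return x

rng = np.random.default_rng(4)
L, Q, beta, alpha = 16, 4, 0.3, 1.0
truth = rng.integers(1, Q + 1, size=(L, L))
y = truth + 0.5 * rng.standard_normal((L, L))               # noisy observation
x = rng.integers(1, Q + 1, size=(L, L)).astype(float)       # arbitrary initial state
for _ in range(50):
    x = gibbs_sweep(x, y, Q, beta, alpha, rng)
print(np.mean(x == truth))                                  # rough restoration accuracy
```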

10.
In this paper, we study the least squares (LS) estimator in a linear panel regression model with an unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross-sectional units jointly go to infinity. The main result of the paper is that under certain assumptions, the limiting distribution of the LS estimator is independent of the number of factors used in the estimation as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients, one does not necessarily need to estimate the number of interactive fixed effects consistently.
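A sketch of the least squares estimator with interactive fixed effects, computed by iterating between the regression coefficient and principal components of the residuals, with more factors used in estimation than the truth as in the abstract; the dimensions, factor strength, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, R_true, R_used, beta = 60, 60, 1, 3, 1.5
F = rng.standard_normal((T, R_true))                 # factors
Lam = rng.standard_normal((N, R_true))               # loadings
X = rng.standard_normal((N, T))
Y = beta * X + Lam @ F.T + rng.standard_normal((N, T))

b = 0.0
for _ in range(200):                                 # alternate until (roughly) converged
    U = Y - b * X
    w, v = np.linalg.eigh(U.T @ U)                   # principal components of the residuals
    F_hat = v[:, -R_used:]                           # T x R_used, orthonormal columns
    M = np.eye(T) - F_hat @ F_hat.T                  # project out the estimated factor space
    b = np.sum((X @ M) * Y) / np.sum((X @ M) * X)    # profiled least squares for beta
print(b)                                             # close to 1.5 despite R_used > R_true
```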

11.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotically tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

12.
We study the problem of separating sublinear time computations via approximating the diameter for a sequence S = p_1 p_2 ⋯ p_n of points in a metric space, in which any two consecutive points have the same distance. The computation is considered under deterministic, zero-error randomized, and bounded-error randomized models, respectively. We obtain a class of separations using various versions of the approximate diameter problem based on restrictions on the input data. We derive tight sublinear time separations for each of the three computation models by proving that computation with O(n^r) time is strictly more powerful than that with O(n^{r−ε}) time, where r and ε are arbitrary parameters in (0,1) and (0,r), respectively. We show that, for any parameter r ∈ (0,1), bounded-error randomized sublinear time computation in O(n^r) time cannot be simulated by any zero-error randomized sublinear time algorithm in o(n) time or queries; and the same is true for zero-error randomized computation versus deterministic computation.

13.
In this paper an O(n^2) mathematical formulation for in silico sequence selection in de novo protein design proposed by Klepeis et al. (2003, 2004), in which the number of additional variables and linear constraints scales with the square of the number of binary variables, is compared to three O(n) formulations. It is found that the O(n^2) formulation is superior to the O(n) formulations on most sequence search spaces. The superiority of the O(n^2) formulation is due to the reformulation-linearization techniques (RLTs), since the O(n^2) formulation without RLTs is found to be computationally less efficient than the O(n) formulations. In addition, new algorithmic enhancing components of RLTs with inequality constraints, triangle inequalities, and Dead-End Elimination (DEE) type preprocessing are added to the O(n^2) formulation. The current best O(n^2) formulation, which is the original formulation from Klepeis et al. (2003, 2004) plus DEE-type preprocessing, is proposed for in silico sequence search. For a test problem with a search space of 3.4×10^{45} sequences, this new improved model is able to reduce the required CPU time by 67%.
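As a generic illustration of why the number of auxiliary variables and constraints scales quadratically, the sketch below generates the standard linearization of binary products x_i·x_j; it reproduces only this generic RLT-style step, not the protein-design model of Klepeis et al.

```python
from itertools import combinations

def linearize_products(n_vars):
    """Linearize w_ij = x_i * x_j for binary x: w <= x_i, w <= x_j, w >= x_i + x_j - 1, w >= 0."""
    new_vars, constraints = [], []
    for i, j in combinations(range(n_vars), 2):
        w = f"w_{i}_{j}"
        new_vars.append(w)
        constraints += [
            ({w: 1, f"x_{i}": -1}, "<=", 0),
            ({w: 1, f"x_{j}": -1}, "<=", 0),
            ({w: 1, f"x_{i}": -1, f"x_{j}": -1}, ">=", -1),
            ({w: 1}, ">=", 0),
        ]
    return new_vars, constraints

vars_, cons = linearize_products(4)
print(len(vars_), len(cons))   # 6 product variables and 24 constraints: O(n^2) growth
```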

14.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and in vicinities of unity that are wider than O(n^{-1/3}). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.
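A sketch of a grid-bootstrap confidence interval for the largest autoregressive root in an AR(1), in the spirit of the Hansen (1999) procedure cited above; the grid, sample size, and bootstrap size are illustrative assumptions.

```python
import numpy as np

def tstat(y, rho0):
    """t-statistic for H0: rho = rho0 in y_t = rho*y_{t-1} + e_t (no intercept)."""
    x, z = y[:-1], y[1:]
    rho_hat = (x @ z) / (x @ x)
    e = z - rho_hat * x
    se = np.sqrt(e @ e / (len(z) - 1) / (x @ x))
    return (rho_hat - rho0) / se

rng = np.random.default_rng(6)
y = np.zeros(200)
for t in range(1, 200):                           # data generated with rho = 0.95
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()

ci = []
for rho0 in np.arange(0.80, 1.005, 0.01):         # grid of candidate roots
    t_obs = tstat(y, rho0)
    e_null = y[1:] - rho0 * y[:-1]                # residuals with rho0 imposed
    e_null = e_null - e_null.mean()
    t_boot = []
    for _ in range(199):                          # bootstrap the t-statistic under rho = rho0
        e_star = rng.choice(e_null, size=len(e_null), replace=True)
        ys = np.zeros(len(e_star) + 1)
        for t in range(1, len(ys)):
            ys[t] = rho0 * ys[t - 1] + e_star[t - 1]
        t_boot.append(tstat(ys, rho0))
    lo, hi = np.quantile(t_boot, [0.025, 0.975])
    if lo <= t_obs <= hi:
        ci.append(rho0)
print((min(ci), max(ci)) if ci else "empty grid CI")   # approximate 95% confidence interval
```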

15.
A valid Edgeworth expansion is established for the limit distribution of density-weighted semiparametric averaged derivative estimates of single index models. The leading term that corrects the normal limit varies in magnitude, depending on the choice of bandwidth and kernel order. In general this term has order larger than the n^{-1/2} that prevails in standard parametric problems, but we find circumstances in which it is O(n^{-1/2}), thereby extending the achievement of an n^{-1/2} Berry–Esseen bound in Robinson (1995a). A valid empirical Edgeworth expansion is also established. We also provide theoretical and empirical Edgeworth expansions for a studentized statistic, where some correction terms differ from those in the unstudentized case. We report a Monte Carlo study of finite sample performance.
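For concreteness, a sketch of the density-weighted averaged derivative estimator whose Edgeworth expansion is studied above, for a scalar regressor with a Gaussian kernel; the bandwidth and design are illustrative choices.

```python
import numpy as np

def avg_derivative(x, y, h):
    """delta_hat = -2/(n(n-1)) * sum_{i<j} K_h'(x_i - x_j) * (y_i - y_j), Gaussian kernel."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    kprime = -d * np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi) / h ** 2   # derivative of K_h
    total = np.sum(np.triu(kprime * (y[:, None] - y[None, :]), k=1))
    return -2.0 * total / (n * (n - 1))

rng = np.random.default_rng(7)
n = 1000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)           # single-index model with slope 2
print(avg_derivative(x, y, h=n ** (-1 / 7)))   # estimates the density-weighted derivative 2*E[f(X)]
```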

16.
We investigated the problem of constructing the maximum consensus tree from rooted triples. We showed the NP-hardness of the problem and developed exact and heuristic algorithms. The exact algorithm is based on the dynamic programming strategy and runs in O((m + n^2) 3^n) time and O(2^n) space. The heuristic algorithms run in polynomial time, and their performance is evaluated by comparison with the optimal solutions. In the tests, the worst and average relative error ratios are 1.200 and 1.072, respectively. We also implemented the two heuristic algorithms proposed by Gasieniec et al. The experimental results show that our heuristic algorithm is better than theirs in most of the tests.
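A small helper sketch for the objective described above: counting how many rooted triples ab|c a candidate rooted tree satisfies. This is only the scoring step, not the exact dynamic program or the heuristics of the paper; the tree encoding as a child-to-parent map is an assumption for illustration.

```python
def ancestors(parent, v):
    """Nodes on the path from v up to the root (inclusive)."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, u, v):
    anc_v = set(ancestors(parent, v))
    return next(a for a in ancestors(parent, u) if a in anc_v)

def satisfied(parent, triples):
    """Count triples (a, b, c), read as ab|c, that the rooted tree is consistent with."""
    count = 0
    for a, b, c in triples:
        x = lca(parent, a, b)
        if x != lca(parent, x, c):    # lca(a,b) strictly below lca(a,b,c): ab|c holds
            count += 1
    return count

# tiny example: the tree ((a,b),(c,d)) rooted at r, given as child -> parent
parent = {"a": "u", "b": "u", "c": "v", "d": "v", "u": "r", "v": "r"}
triples = [("a", "b", "c"), ("a", "c", "b"), ("c", "d", "a")]
print(satisfied(parent, triples))     # 2: ab|c and cd|a hold, ac|b does not
```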

17.
We study the problem of (off-line) broadcast scheduling in minimizing total flow time and propose a dynamic programming approach to compute an optimal broadcast schedule. Suppose the broadcast server has k pages and the last page request arrives at time n. The optimal schedule can be computed in O(k^3 (n+k)^{k−1}) time for the case that the server has a single broadcast channel. For the m-channel case, i.e., when the server can broadcast m different pages at a time with m < k, the optimal schedule can be computed in O(n^{km}) time when k and m are constants. Note that this broadcast scheduling problem is NP-hard when k is a variable, and the straightforward implementation of the dynamic programming approach takes O(n^{km+1}) time when k is fixed and m ≥ 1. A preliminary version of this paper appeared in the Proceedings of the 11th Annual International Computing and Combinatorics Conference as "Off-line Algorithms for Minimizing the Total Flow Time in Broadcast Scheduling".
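A brute-force sketch for the single-channel case on a tiny instance, enumerating broadcast sequences over a short horizon and picking one that minimizes total flow time; the service conventions below are assumptions for illustration, and the paper's dynamic program is not reproduced.

```python
from itertools import product

def total_flow_time(schedule, requests):
    """schedule[s] is the page broadcast in slot s; a request is served by the first
    broadcast of its page in a slot >= its arrival time."""
    total = 0
    for arrival, page in requests:
        served = next((s for s, p in enumerate(schedule) if p == page and s >= arrival), None)
        if served is None:
            return float("inf")                    # some request is never served
        total += served - arrival
    return total

requests = [(0, "A"), (0, "B"), (1, "A"), (2, "C")]   # (arrival time, requested page)
pages, horizon = ["A", "B", "C"], 5
best = min(product(pages, repeat=horizon),
           key=lambda sch: total_flow_time(sch, requests))
print(best, total_flow_time(best, requests))
```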

18.
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small, possibly as small as one. The well-known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n → ∞ with m fixed are provided using a subsampling-like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability.
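For reference, a sketch of the classical Chow (1960) predictive F test that the paper generalizes, for a linear regression with m end-of-sample observations; the simulated data and break pattern are illustrative.

```python
import numpy as np
from scipy import stats

def chow_predictive_f(X, y, m):
    """F = [(SSR_full - SSR_1)/m] / [SSR_1/(n-k)], distributed F(m, n-k) under the null."""
    n_total, k = X.shape
    n = n_total - m
    def ssr(Xs, ys):
        b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r = ys - Xs @ b
        return r @ r
    ssr1 = ssr(X[:n], y[:n])                        # fit on the stable first n observations
    F = ((ssr(X, y) - ssr1) / m) / (ssr1 / (n - k))
    return F, 1.0 - stats.f.cdf(F, m, n - k)

rng = np.random.default_rng(8)
n, m = 100, 5
X = np.column_stack([np.ones(n + m), rng.standard_normal(n + m)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n + m)
y[-m:] += 3.0                                       # instability in the last m observations
print(chow_predictive_f(X, y, m))                   # large F, small p-value
```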

19.
This paper examines the problem of testing and confidence set construction for one-dimensional functions of the coefficients in autoregressive (AR(p)) models with potentially persistent time series. The primary example concerns inference on impulse responses. A new asymptotic framework is suggested and some new theoretical properties of known procedures are demonstrated. I show that the likelihood ratio (LR) and LR± statistics for a linear hypothesis in an AR(p) can be uniformly approximated by a weighted average of local-to-unity and normal distributions. The corresponding weights depend on the weight placed on the largest root in the null hypothesis. The suggested approximation is uniform over the set of all linear hypotheses. The same family of distributions approximates the LR and LR± statistics for tests about impulse responses, and the approximation is uniform over the horizon of the impulse response. I establish the size properties of tests about impulse responses proposed by Inoue and Kilian (2002) and Gospodinov (2004), and theoretically explain some of the empirical findings of Pesavento and Rossi (2007). An adaptation of the grid bootstrap for impulse response functions is suggested and its properties are examined.
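A small helper illustrating the object of interest above: the impulse response function of an AR(p) computed by the standard moving-average recursion; the coefficients and horizon are arbitrary illustrative choices.

```python
import numpy as np

def ar_impulse_response(phi, horizon):
    """psi_0 = 1, psi_h = sum_{j=1..min(h,p)} phi_j * psi_{h-j} for y_t = sum_j phi_j y_{t-j} + e_t."""
    p = len(phi)
    psi = np.zeros(horizon + 1)
    psi[0] = 1.0
    for h in range(1, horizon + 1):
        psi[h] = sum(phi[j] * psi[h - 1 - j] for j in range(min(h, p)))
    return psi

print(ar_impulse_response(np.array([1.3, -0.35]), horizon=8))   # persistent AR(2) example
```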

20.
Sequence alignment is a central problem in bioinformatics. The classical dynamic programming algorithm aligns two sequences by optimizing over possible insertions, deletions, and substitutions. However, other evolutionary events can be observed, such as inversions, tandem duplications, or moves (transpositions). It has been established that the extension of the problem to move operations is NP-complete. Previous work has shown that an extension restricted to non-overlapping inversions can be solved in O(n^3) with a restricted scoring scheme. In this paper, we show that the alignment problem extended to non-overlapping moves can be solved in O(n^5) for general scoring schemes, O(n^4 log n) for concave scoring schemes, and O(n^4) for restricted scoring schemes. Furthermore, we show that the alignment problem extended to non-overlapping moves, inversions, and tandem duplications can be solved with the same time complexities. Finally, an example of an alignment with non-overlapping moves is provided. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS, vol. 4598, pp. 151–164.
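For reference, a sketch of the classical O(n^2) alignment dynamic program over insertions, deletions, and substitutions that the paper extends; the match/mismatch/gap scores are arbitrary illustrative choices.

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between strings a and b (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # match / substitution
                          D[i - 1][j] + gap,       # deletion
                          D[i][j - 1] + gap)       # insertion
    return D[n][m]

print(align("ACGTTGA", "ACGTGGA"))   # 5 = six matches and one mismatch
```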

