Similar Documents
1.
In this paper we study identification and estimation of a correlated random coefficients (CRC) panel data model. The outcome of interest varies linearly with a vector of endogenous regressors. The coefficients on these regressors are heterogeneous across units and may covary with them. We consider the average partial effect (APE) of a small change in the regressor vector on the outcome (cf. Chamberlain (1984), Wooldridge (2005a)). Chamberlain (1992) calculated the semiparametric efficiency bound for the APE in our model and proposed a √N‐consistent estimator. Nonsingularity of the APE's information bound, and hence the appropriateness of Chamberlain's (1992) estimator, requires (i) the time dimension of the panel (T) to strictly exceed the number of random coefficients (p) and (ii) strong conditions on the time series properties of the regressor vector. We demonstrate irregular identification of the APE when T = p and for more persistent regressor processes. Our approach exploits the different identifying content of the subpopulations of stayers—or units whose regressor values change little across periods—and movers—or units whose regressor values change substantially across periods. We propose a feasible estimator based on our identification result and characterize its large sample properties. While irregularity precludes our estimator from attaining parametric rates of convergence, its limiting distribution is normal and inference is straightforward to conduct. Standard software may be used to compute point estimates and standard errors. We use our methods to estimate the average elasticity of calorie consumption with respect to total outlay for a sample of poor Nicaraguan households.

2.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual‐specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual‐specific regressors by means of cross‐section averages such that asymptotically as the cross‐section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross‐sectional averages of the dependent variable and the individual‐specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
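The filtering step can be sketched in a few lines. The one-factor simulated design and all names below are illustrative, not from the paper: each unit's regression is augmented with cross-section averages of the dependent variable and the regressor, and the unit-level slopes are averaged into a mean group estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 50

# Illustrative DGP: one unobserved common factor f_t loads on both y and x,
# so pooled OLS omitting it would be biased.
f = rng.normal(size=T)
gamma = rng.normal(1.0, 0.5, size=N)       # heterogeneous factor loadings in y
beta = 1.0                                  # slope of interest
x = rng.normal(size=(N, T)) + 0.8 * f       # regressor correlated with the factor
y = beta * x + gamma[:, None] * f + 0.3 * rng.normal(size=(N, T))

# Cross-section averages proxy the common factor as N grows
ybar, xbar = y.mean(axis=0), x.mean(axis=0)

slopes = []
for i in range(N):
    # augment each unit's regression with the cross-section averages
    Z = np.column_stack([np.ones(T), x[i], ybar, xbar])
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    slopes.append(coef[1])                  # coefficient on the unit's own regressor

beta_ccemg = float(np.mean(slopes))         # CCE mean group estimate
print(round(beta_ccemg, 2))
```

With the factor filtered out by the averages, the mean group estimate lands near the true slope, whereas a regression of y on x alone would not.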

3.
This paper develops an asymptotic theory for time series binary choice models with nonstationary explanatory variables generated as integrated processes. Both logit and probit models are covered. The maximum likelihood (ML) estimator is consistent but a new phenomenon arises in its limit distribution theory. The estimator consists of a mixture of two components, one of which is parallel to and the other orthogonal to the direction of the true parameter vector, with the latter being the principal component. The ML estimator is shown to converge at a rate of n^{3/4} along its principal component but has the slower rate of n^{1/4} convergence in all other directions. This is the first instance known to the authors of multiple convergence rates in models where the regressors have the same (full rank) stochastic order and where the parameters appear in linear forms of these regressors. It is a consequence of the fact that the estimating equations involve nonlinear integrable transformations of linear forms of integrated processes as well as polynomials in these processes, and the asymptotic behavior of these elements is quite different. The limit distribution of the ML estimator is derived and is shown to be a mixture of two mixed normal distributions with mixing variates that are dependent upon Brownian local time as well as Brownian motion. It is further shown that the sample proportion of binary choices follows an arc sine law and therefore spends most of its time in the neighborhood of zero or unity. The result has implications for policy decision making that involves binary choices and where the decisions depend on economic fundamentals that involve stochastic trends. Our limit theory shows that, in such conditions, policy is likely to manifest streams of little intervention or intensive intervention.
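The arc sine behavior of the sample proportion is easy to reproduce by simulation. The probit-style design below is a sketch under assumed parameters, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
reps, n = 2000, 500

# Binary choices driven by a stochastic trend: y_t = 1{x_t + u_t > 0},
# with x_t a random walk. Under an arc sine law the sample proportion of
# ones should pile up near 0 or 1 rather than near 1/2.
props = np.empty(reps)
for r in range(reps):
    x = np.cumsum(rng.normal(size=n))       # integrated fundamental
    u = rng.normal(size=n)                  # idiosyncratic shock
    props[r] = np.mean(x + u > 0)

# fraction of samples whose choice proportion is near the boundaries
extreme = float(np.mean((props < 0.2) | (props > 0.8)))
print(round(extreme, 2))
```

Most simulated proportions fall near 0 or 1, matching the "little intervention or intensive intervention" interpretation in the abstract.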

4.
Noel D. Uri, Omega, 1977, 5(4): 463-472
It has recently been shown that the Box-Jenkins approach to forecasting time series is superior to an econometric approach over a relatively short horizon. The results here support this contention. A combination of the two approaches, however, proves to be clearly superior to either one separately. By taking account of changes in economic and weather related variables in a time series model, improved forecasts are obtained.
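One standard way to combine two forecasts, sketched here with stand-in naive and drift forecasters rather than the paper's Box-Jenkins and econometric models, is to weight them by inverse in-sample MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
t_grid = np.arange(200)
y = np.cumsum(rng.normal(size=200)) + 0.05 * t_grid   # illustrative series

def f_naive(hist):
    # forecast: last observed value
    return hist[-1]

def f_drift(hist):
    # forecast: last value plus average historical drift
    return hist[-1] + (hist[-1] - hist[0]) / (len(hist) - 1)

e1, e2, ec = [], [], []
for t in range(100, 199):
    hist = y[: t + 1]
    p1, p2 = f_naive(hist), f_drift(hist)
    # inverse-MSE combination weight (equal weights before any errors exist)
    m1 = np.mean(np.square(e1)) if e1 else 1.0
    m2 = np.mean(np.square(e2)) if e2 else 1.0
    w = (1 / m1) / (1 / m1 + 1 / m2)
    pc = w * p1 + (1 - w) * p2
    e1.append(y[t + 1] - p1)
    e2.append(y[t + 1] - p2)
    ec.append(y[t + 1] - pc)

mse = lambda e: float(np.mean(np.square(e)))
print(mse(e1), mse(e2), mse(ec))
```

Because the combined forecast is a convex combination at every step, its squared error is bounded by the weighted errors of the components, which is the intuition behind combinations beating either model separately.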

5.
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small—possibly as small as one. The well‐known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n→∞ with m fixed are provided using a subsampling‐like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability.

6.
In recent years, time series analysts have shifted their interest from univariate to multivariate forecasting approaches. Among them, the Box-Jenkins transfer function process and the state space method have received the most attention. This paper presents a simplified approach that embodies some desirable features of existing methods. It stresses empirical analysis, has a unified modeling structure, is easily applicable, and is adaptive to changes without necessitating prior information on the evolution of a system under study. The core of the method relies on the Carbone-Longini adaptive estimation procedure (AEP). Results of a comparative study based on the well-known Lydia E. Pinkham data and the Box-Jenkins sales/leading indicator data illustrate the merits of multivariate AEP in improving forecasting accuracy while simplifying the analysis process. Subject Area: Forecasting.

7.
We develop a √n‐consistent and asymptotically normal estimator of the parameters (regression coefficients and threshold points) of a semiparametric ordered response model under the assumption of independence of errors and regressors. The independence assumption implies shift restrictions allowing identification of threshold points up to location and scale. The estimator is useful in various applications, particularly in new product demand forecasting from survey data subject to systematic misreporting. We apply the estimator to assess exaggeration bias in survey data on demand for a new telecommunications service.

8.
This paper addresses aggregation in integer autoregressive moving average (INARMA) models. Although aggregation in continuous-valued time series has been widely discussed, the same is not true for integer-valued time series. Forecast horizon aggregation is addressed in this paper. It is shown that the overlapping forecast horizon aggregation of an INARMA process results in an INARMA process. The conditional expected value of the aggregated process is also derived for use in forecasting. A simulation experiment is conducted to assess the accuracy of the forecasts produced by the aggregation method and to compare it to the accuracy of cumulative h-step ahead forecasts over the forecasting horizon. The results of an empirical analysis are also provided.
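The simplest member of the INARMA family, an INAR(1) with binomial thinning, illustrates the conditional expectation used for forecasting. The parameters below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, lam, T = 0.5, 2.0, 10_000

# Simulate an INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is binomial
# thinning (each of the X_{t-1} counts survives with prob. alpha) and
# eps_t ~ Poisson(lam).
x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))            # start near the stationary mean
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

def cond_mean(x_t, h):
    """h-step-ahead conditional expectation of the INAR(1)."""
    return alpha**h * x_t + lam * (1 - alpha**h) / (1 - alpha)

# For a long horizon the conditional mean reverts to the stationary mean
print(round(cond_mean(x[-1], 50), 3), lam / (1 - alpha))
```

The same geometric-decay structure is what makes aggregated conditional expectations tractable for integer-valued processes.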

9.
In retailing operations, retailers face the challenge of incomplete demand information. We develop a new concept named K‐approximate convexity, which is shown to be a generalization of K‐convexity, to address this challenge. This idea is applied to obtain a base‐stock list‐price policy for the joint inventory and pricing control problem with incomplete demand information and even non‐concave revenue function. A worst‐case performance bound of the policy is established. In a numerical study where demand is driven from real sales data, we find that the average gap between the profits of our proposed policy and the optimal policy is 0.27%, and the maximum gap is 4.6%.

10.
This paper considers large N and large T panel data models with unobservable multiple interactive effects, which are correlated with the regressors. In earnings studies, for example, workers' motivation, persistence, and diligence combine to influence earnings in addition to the usual argument of innate ability. In macroeconomics, interactive effects represent unobservable common shocks and their heterogeneous impacts on cross sections. We consider identification, consistency, and the limiting distribution of the interactive‐effects estimator. Under both large N and large T, the estimator is shown to be consistent, which is valid in the presence of correlations and heteroskedasticities of unknown form in both dimensions. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. In addition, we consider identification and estimation of models in the presence of a grand mean, time‐invariant regressors, and common regressors. Given identification, the rate of convergence and limiting results continue to hold.
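The interactive-effects estimator can be sketched as alternating between least squares for the slope and principal components for the factors. This is a minimal single-regressor, one-factor version with an illustrative simulated design, not the paper's general treatment:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, r = 60, 60, 1
beta = 2.0

# Interactive effects lam_i * f_t, correlated with the regressor
f = rng.normal(size=(T, r))
lam = rng.normal(size=(N, r))
X = rng.normal(size=(N, T)) + lam @ f.T
Y = beta * X + lam @ f.T + rng.normal(size=(N, T))

b = float(np.sum(X * Y) / np.sum(X * X))     # start from (biased) pooled OLS
for _ in range(500):
    W = Y - b * X                            # N x T residual matrix
    # principal components: top-r eigenvectors of W'W give factors F (T x r)
    vals, vecs = np.linalg.eigh(W.T @ W)
    F = vecs[:, -r:] * np.sqrt(T)            # normalized so F'F/T = I
    L = W @ F / T                            # loadings, N x r
    common = L @ F.T                         # estimated interactive effects
    b_new = float(np.sum(X * (Y - common)) / np.sum(X * X))
    if abs(b_new - b) < 1e-10:
        b = b_new
        break
    b = b_new

print(round(b, 2))
```

Pooled OLS here is badly biased because the omitted component lam_i f_t is correlated with X; the iteration removes it and recovers a slope near the true value.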

11.
This paper proposes a new framework for determining whether a given relationship is nonlinear, what the nonlinearity looks like, and whether it is adequately described by a particular parametric model. The paper studies a regression or forecasting model of the form y_t = μ(x_t) + ε_t where the functional form of μ(⋅) is unknown. We propose viewing μ(⋅) itself as the outcome of a random process. The paper introduces a new stationary random field m(⋅) that generalizes finite‐differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for μ(⋅). We view the parameters that characterize the relation between a given realization of m(⋅) and the particular value of μ(⋅) for a given sample as population parameters to be estimated by maximum likelihood or Bayesian methods. We show that the resulting inference about the functional relation also yields consistent estimates for a broad class of deterministic functions μ(⋅). The paper further develops a new test of the null hypothesis of linearity based on the Lagrange multiplier principle and small‐sample confidence intervals based on numerical Bayesian methods. An empirical application suggests that properly accounting for the nonlinearity of the inflation‐unemployment trade‐off may explain the previously reported uneven empirical success of the Phillips Curve.

12.
Because complex time series often contain structural breaks and outliers, forecasting models trained on them frequently fit poorly and can produce extreme predicted values. To address this, this paper proposes a trimmed-average-based neural network ensemble forecasting method. The method first generates multiple training sets from the training data, then trains multiple neural network forecasting models, and finally combines the networks' predictions using a trimmed-average strategy. Compared with the simple-average strategy, the trimmed-average strategy is less sensitive to extreme values and yields a robust ensemble forecast. In the empirical study, this paper constructs two neural network ensemble forecasting models: the Trimmed Average based Bootstrap Neural Network Ensemble (TA-BNNE) and the Trimmed Average based Monte Carlo Neural Network Ensemble (TA-MCNNE), and applies them to the NN3 competition data set. The results show that on both the regular and the complex data sets, the trimmed-average strategy achieves better forecasting accuracy than the simple-average strategy. In addition, comparing the proposed ensemble models with the top ten NN3 models shows that both models beat the sixth-place model on the full data set and beat the first-place model on the complex data set, further verifying the effectiveness of the proposed method.
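The trimmed-average combination step is simple to sketch. Here the paper's neural networks are replaced by bootstrap AR(1) fits as stand-in base learners; everything in the block is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy series with one outlier, the kind of contamination that distorts
# individual base learners and motivates trimming.
y = np.sin(np.arange(120) / 6) + 0.1 * rng.normal(size=120)
y[60] += 8.0                                    # injected outlier

pairs = np.column_stack([y[:-1], y[1:]])        # (y_{t-1}, y_t) training pairs

def fit_ar1(sample):
    # least-squares AR(1) fit: y_t = a * y_{t-1} + b
    a, b = np.polyfit(sample[:, 0], sample[:, 1], 1)
    return a, b

preds = []
for _ in range(200):
    # bootstrap resample of the training pairs (stand-in for one ensemble member)
    boot = pairs[rng.integers(0, len(pairs), len(pairs))]
    a, b = fit_ar1(boot)
    preds.append(a * y[-1] + b)                 # one-step-ahead forecast
preds = np.sort(np.array(preds))

simple_avg = float(preds.mean())
k = int(0.1 * len(preds))                       # trim 10% from each tail
trimmed_avg = float(preds[k:-k].mean())
print(round(simple_avg, 3), round(trimmed_avg, 3))
```

The trimmed average discards the most extreme ensemble members before averaging, so a few wild forecasts cannot drag the combined prediction around.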

13.
We study the probabilistic model in the key tree management problem. Users have different behaviors. Normal users have probability p of issuing a join/leave request, while loyal users have probability zero. Given the numbers of such users, our objective is to construct a key tree with minimum expected updating cost. We observe that a single LUN (Loyal User Node) is enough to represent all loyal users. When 1−p≤0.57 we prove that the optimal tree that minimizes the cost is a star. When 1−p>0.57, we bound the size of the subtree rooted at every non-root node. Based on the size bound, we construct the optimal tree using a dynamic programming algorithm in O(nK+K^4) time, where K=min{4(log(1−p)^{−1})^{−1}, n} and n is the number of normal users.

14.
Application of a Hidden Markov Model with Influencing Factors to Economic Forecasting
Quantitative forecasting methods fall into causal methods and time-series methods. Causal methods forecast a variable by exploiting its causal relationships with other variables, while time-series methods infer future values from the structure of the variable's own historical data. Causal methods thus use only the causal relations between a variable and other variables and cannot describe the variable's own time-series structure; time-series methods describe only the variable's own series structure and ignore the influence of related factors. This paper therefore proposes a hidden Markov model (HMM) forecasting method based on observation-vector sequences, which accounts for both the variable's own series structure and the influence of related factors. The paper first reviews the basic theory of HMMs; then, building on model training and hidden-state sequence estimation, it develops a forecasting algorithm based on observation-vector sequences; finally, simulation experiments and an empirical study demonstrate the effectiveness of the method.
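A minimal sketch of HMM-based one-step-ahead prediction with the forward algorithm. The two-state chain and emission matrix below are illustrative, not the paper's economic model:

```python
import numpy as np

# Two hidden states, two observable symbols (all values illustrative)
A = np.array([[0.9, 0.1],       # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],       # emission probabilities: row = state, col = symbol
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])       # initial state distribution

obs = [0, 0, 1, 0, 0]           # observed symbol sequence

# Forward algorithm with per-step normalization: alpha is the filtered
# distribution P(state_t | obs_1..t)
alpha = pi * B[:, obs[0]]
alpha /= alpha.sum()
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    alpha /= alpha.sum()

next_state = alpha @ A          # predicted state distribution for t+1
next_obs = next_state @ B       # predicted observation distribution for t+1
print(np.round(next_obs, 3))
```

The predicted observation distribution is a proper probability vector, and after a run of mostly symbol-0 observations it assigns most mass to symbol 0, as expected from the transition and emission matrices chosen here.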

15.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set‐valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous‐move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted ΘI, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in ΘI. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

16.
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor‐augmented regressions are consistent and asymptotically normal if √T/N→0. The conditional mean predicted by the estimated factors is consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
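The two-step diffusion-index idea, extracting factors by principal components and then augmenting a regression with them, can be sketched as follows. The one-factor simulated panel and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 80, 200

# Panel of N predictor series driven by one common factor; the target
# variable loads on the same factor.
f = rng.normal(size=T)
loadings = rng.normal(size=N)
panel = np.outer(f, loadings) + rng.normal(size=(T, N))   # T x N predictors
y = 1.5 * f + 0.5 * rng.normal(size=T)                    # variable of interest

# Step 1: estimate the factor by principal components of the centered panel
panel_c = panel - panel.mean(axis=0)
U, S, Vt = np.linalg.svd(panel_c, full_matrices=False)
fhat = U[:, 0] * np.sqrt(T)                               # estimated factor

# Step 2: augment an (otherwise empty) regression with the estimated factor
Z = np.column_stack([np.ones(T), fhat])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

# The factor is identified only up to sign and scale, so assess fit via R^2
resid = y - Z @ coef
r2 = float(1 - resid.var() / y.var())
print(round(r2, 2))
```

With N large the estimated factor tracks the true one closely, so the augmented regression explains most of the variation the latent factor generates in y.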

17.
We study a joint capacity leasing and demand acceptance problem in intermodal transportation. The model features multiple sources of evolving supply and demand, and endogenizes the interplay of three levers—forecasting, leasing, and demand acceptance. We characterize the optimal policy, and show how dynamic forecasting coordinates leasing and acceptance. We find that (i) the value of dynamic forecasting depends critically on scarcity, stochasticity, and volatility; (ii) the traditional mean‐value equivalence approach performs poorly in a volatile intermodal context; and (iii) a mean‐value‐based forecast may outperform a stationary distribution‐based forecast. Our work enriches revenue management models and applications. It advances our understanding of when and how to use dynamic forecasting in intermodal revenue management.

18.
Omega, 1987, 15(2): 145-155
Stock market efficiency is a crucial concept when forecasting of future stock price behaviour is discussed. In the literature, a distinction is made between three potential levels of efficiency. Under a weak form of efficiency, information on historical price movements is of no value for predicting the future price development. Similarly, a semi-strong form of efficiency holds that no publicly available information can be successfully used in the prediction of prices. And finally, a strong form of efficiency means that the share prices fully reflect all relevant information including data not yet publicly available. Stock market efficiency has been extensively studied in different countries. On a thin security market, like in the Helsinki Stock Exchange, many anomalies and deviations from market efficiency have been obtained. This paper aims to contribute to that discussion. It is shown in the paper that both the monthly and quarterly stock market prices (the general stock market index) can be adequately forecasted using either univariate time-series analysis or multivariate econometric modelling. The univariate ARIMA-models seem to be slightly outperformed by the econometric models. It is further shown that the forecasting accuracy of the models can be improved when time-series and econometric forecasts are combined into a composite forecast. The empirical results obtained indicate an absence of efficiency on the Finnish security market.

19.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
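The full Fourier-based construction is beyond a short sketch, but the identifying power of the repeated observation already shows up in second moments: with x1 = x* + e1 and x2 = x* + e2 and mutually independent errors, Cov(x1, x2) = Var(x*) and E[x1] = E[x*]. A quick numerical check (the Gamma design for the latent regressor is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

xstar = rng.gamma(2.0, 1.0, size=n)         # unobserved "true" regressor
x1 = xstar + rng.normal(0, 0.5, size=n)     # first noisy measurement
x2 = xstar + rng.normal(0, 0.5, size=n)     # independent repeated measurement

# Moments of the latent variable recovered from the observed repeats:
# the error terms cancel out of the cross-covariance.
mean_hat = float(x1.mean())                 # estimates E[x*] = 2
var_hat = float(np.cov(x1, x2)[0, 1])       # estimates Var(x*) = 2
print(round(mean_hat, 2), round(var_hat, 2))
```

The paper's Fourier argument extends this idea from second moments to arbitrary moments of the unobserved regressor.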

20.
Cointegrated bivariate nonstationary time series are considered in a fractional context, without allowance for deterministic trends. Both the observable series and the cointegrating error can be fractional processes. The familiar situation in which the respective integration orders are 1 and 0 is nested, but these values have typically been assumed known. We allow one or more of them to be unknown real values, in which case Robinson and Marinucci (2001, 2003) have justified least squares estimates of the cointegrating vector, as well as narrow‐band frequency‐domain estimates, which may be less biased. While consistent, these estimates do not always have optimal convergence rates, and they have nonstandard limit distributional behavior. We consider estimates formulated in the frequency domain, that consequently allow for a wide variety of (parametric) autocorrelation in the short memory input series, as well as time‐domain estimates based on autoregressive transformation. Both can be interpreted as approximating generalized least squares and Gaussian maximum likelihood estimates. The estimates share the same limiting distribution, having mixed normal asymptotics (yielding Wald test statistics with χ² null limit distributions), irrespective of whether the integration orders are known or unknown, subject in the latter case to their estimation with adequate rates of convergence. The parameters describing the short memory stationary input series are √n‐consistently estimable, but the assumptions imposed on these series are much more general than ones of autoregressive moving average type. A Monte Carlo study of finite‐sample performance is included.
