Similar Literature (20 results)
1.
This paper considers regression models for cross‐section data that exhibit cross‐section dependence due to common shocks, such as macroeconomic shocks. The paper analyzes the properties of least squares (LS) estimators in this context. The results of the paper allow for any form of cross‐section dependence and heterogeneity across population units. The probability limits of the LS estimators are determined, and necessary and sufficient conditions are given for consistency. The asymptotic distributions of the estimators are found to be mixed normal after recentering and scaling. The t, Wald, and F statistics are found to have asymptotic standard normal, χ², and scaled χ² distributions, respectively, under the null hypothesis when the conditions required for consistency of the parameter under test hold. However, the absolute values of t, Wald, and F statistics are found to diverge to infinity under the null hypothesis when these conditions fail. Confidence intervals exhibit similarly dichotomous behavior. Hence, common shocks are found to be innocuous in some circumstances, but quite problematic in others. Models with factor structures for errors and regressors are considered. Using the general results, conditions are determined under which consistency of the LS estimators holds and fails in models with factor structures. The results are extended to cover heterogeneous and functional factor structures in which common factors have different impacts on different population units.

2.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual‐specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual‐specific regressors by means of cross‐section averages such that asymptotically as the cross‐section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross‐sectional averages of the dependent variable and the individual‐specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
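The filtering idea is easy to sketch. Below is a minimal simulation of our own devising (a one-factor panel with heterogeneous loadings; all names and parameter values are illustrative, not the paper's design), with the mean-group CCE estimator computed by augmenting each unit's regression with cross-section averages:

```python
import numpy as np

# Hypothetical one-factor panel: y_it = beta * x_it + gamma_i * f_t + e_it,
# where x_it also loads on the common factor f_t.
rng = np.random.default_rng(0)
N, T, beta = 50, 40, 1.5
f = rng.standard_normal(T)                         # unobserved common factor
gamma = 1.0 + rng.standard_normal(N)               # heterogeneous loadings in y
gx = 1.0 + rng.standard_normal(N)                  # heterogeneous loadings in x
x = rng.standard_normal((N, T)) + np.outer(gx, f)
y = beta * x + np.outer(gamma, f) + 0.5 * rng.standard_normal((N, T))

# CCE: augment each unit-level regression with cross-section averages of
# the dependent variable and the regressor, which proxy for the factor.
ybar, xbar = y.mean(axis=0), x.mean(axis=0)
betas = []
for i in range(N):
    Z = np.column_stack([x[i], np.ones(T), ybar, xbar])
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    betas.append(coef[0])
beta_cce_mg = float(np.mean(betas))                # mean-group CCE estimator
print(round(beta_cce_mg, 2))
```

The unit-level slope estimates are close to the true beta because the averages (ybar, xbar) asymptotically span the factor space, eliminating the differential factor effects.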

3.
Heretofore, the Poisson and the Laplace distributions have been used to model demand during lead time for slow-moving items. In this paper, we present a Poisson-like distribution called the Hermite. The advantage of the Hermite is that it is as simple to use as the Poisson and the Laplace are. Moreover, the Hermite is the exact distribution of demand during lead time when unit demand is Poisson, P(λ), and lead time is normally distributed, N(μ, σ²), so long as μ/σ² ≥ λ. Thus, the Hermite can enhance the accuracy of analysis as well as add to the tools available to the analyst.
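A short numerical sketch of that claim. One standard parameterization of the Hermite (X = Y1 + 2·Y2 with Y1 ~ Poisson(a1), Y2 ~ Poisson(a2)) matches the compound model's generating function with a1 = λμ − λ²σ² and a2 = λ²σ²/2, which is nonnegative exactly when μ/σ² ≥ λ; the recursion below for the probabilities is the usual one and is our illustration, not the paper's code:

```python
import numpy as np

def hermite_pmf(nmax, lam, mu, sigma2):
    # Hermite parameters implied by Poisson(lam) unit demand and
    # N(mu, sigma2) lead time; requires mu/sigma2 >= lam.
    a1 = lam * mu - lam**2 * sigma2
    a2 = lam**2 * sigma2 / 2.0
    assert a1 >= 0, "requires mu/sigma2 >= lam"
    p = np.zeros(nmax + 1)
    p[0] = np.exp(-(a1 + a2))
    for n in range(1, nmax + 1):
        # standard recursion: n*p(n) = a1*p(n-1) + 2*a2*p(n-2)
        p[n] = (a1 * p[n - 1] + (2 * a2 * p[n - 2] if n >= 2 else 0.0)) / n
    return p

p = hermite_pmf(60, lam=2.0, mu=5.0, sigma2=1.5)
mean = float((np.arange(61) * p).sum())
print(round(mean, 2))   # mean of lead-time demand equals lam*mu
```

The recovered mean equals λμ and the variance equals λμ + λ²σ², the moments of total demand over a random lead time.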

4.
This study presents a new robust estimation method that can produce a regression median hyper-plane for any data set. The robust method starts with dual variables obtained by least absolute value estimation. It then utilizes two specially designed goal programming models to obtain regression median estimators that are less sensitive to a small sample size and a skewed error distribution than least absolute value estimators. The superiority of new robust estimators over least absolute value estimators is confirmed by two illustrative data sets and a Monte Carlo simulation study.

5.
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}‐consistent in general and describe conditions under which matching estimators do attain N^{1/2}‐consistency. Second, we show that even in settings where matching estimators are N^{1/2}‐consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
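A stripped-down version of matching with replacement and a single match (M = 1) on one covariate makes the estimator concrete; the simulated data and names below are our own illustration, not the article's software:

```python
import numpy as np

# Simulated data with selection on a scalar covariate x; true effect = 2.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
treat = rng.random(n) < 1.0 / (1.0 + np.exp(-x))      # selection on x
y = 2.0 * treat + x + 0.1 * rng.standard_normal(n)

xt, yt = x[treat], y[treat]
xc, yc = x[~treat], y[~treat]
# For each treated unit, find its single nearest control on x
# (matching with replacement: controls can be reused).
idx = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)
att = float(np.mean(yt - yc[idx]))                    # effect on the treated
print(round(att, 2))
```

With a single covariate the matching discrepancy is negligible; the article's point is that with higher-dimensional covariates the discrepancy term slows the rate, which is exactly what breaks N^{1/2}-consistency in general.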

6.
Previous research has indicated that minimum absolute deviations (MAD) estimators tend to be more efficient than ordinary least squares (OLS) estimators in the presence of large disturbances. Via Monte Carlo sampling this study investigates cases in which disturbances are normally distributed with constant variance except for one or more outliers whose disturbances are taken from a normal distribution with a much larger variance. It is found that MAD estimation retains its advantage over OLS through a wide range of conditions, including variations in outlier variance, number of regressors, number of observations, design matrix configuration, and number of outliers. When no outliers are present, the efficiency of MAD estimators relative to OLS exhibits remarkably slight variation.
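A tiny Monte Carlo in the spirit of that design (our own parameter choices, not the study's): normal errors except for a few outliers drawn with ten times the standard deviation, with the MAD fit approximated by iteratively reweighted least squares, a common numerical shortcut:

```python
import numpy as np

rng = np.random.default_rng(2)
reps, n, beta = 200, 50, 1.0
mse_ols = mse_mad = 0.0
for _ in range(reps):
    x = rng.standard_normal(n)
    e = rng.standard_normal(n)
    e[:3] *= 10.0                                   # three outliers, 10x error sd
    y = beta * x + e
    X = np.column_stack([np.ones(n), x])
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b = b_ols.copy()
    for _ in range(50):                             # IRLS approximation to MAD/LAD
        w = 1.0 / np.clip(np.abs(y - X @ b), 1e-6, None)
        b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    mse_ols += (b_ols[1] - beta) ** 2 / reps
    mse_mad += (b[1] - beta) ** 2 / reps
print(mse_mad < mse_ols)                            # MAD wins with outliers present
```

With the contaminated errors the MAD slope has markedly lower mean squared error than OLS, matching the study's qualitative finding.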

7.
ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy-tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.

8.
The Lp-min increment fit and Lp-min increment ultrametric fit problems are two popular optimization problems arising from distance methods for reconstructing phylogenetic trees. This paper proves:
1. An O(n^2) algorithm approximates the L∞-min increment fit within ratio 3.
2. The Lp-min increment ultrametric fit admits a ratio-O(n^{1/p}) polynomial-time approximation.
3. The neighbor-joining algorithm can correctly reconstruct a phylogenetic tree T when increment errors are small enough under the L∞-norm.

9.
We develop results for the use of Lasso and post‐Lasso methods to form first‐stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post‐Lasso in the first stage is root‐n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well‐approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic “beta‐min” conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso‐based IV estimator with a data‐driven penalty performs well compared to recently advocated many‐instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso‐based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post‐Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non‐Gaussian, heteroscedastic disturbances that uses a data‐weighted ℓ1‐penalty function. By innovatively using moderate deviation theory for self‐normalized sums, we provide convergence rates for the resulting Lasso and post‐Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data‐driven method for choosing the penalty level that must be specified in obtaining Lasso and post‐Lasso estimates and establish its asymptotic validity under non‐Gaussian, heteroscedastic disturbances.
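The basic pipeline can be sketched in a few lines. This is a hedged illustration, not the paper's procedure: a plain sklearn Lasso with a fixed penalty stands in for the paper's data-driven, heteroscedasticity-robust penalty, and the simulated approximately sparse first stage is our own:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated IV design: many instruments, approximately sparse first stage.
rng = np.random.default_rng(3)
n, p = 500, 100
Z = rng.standard_normal((n, p))
pi = np.zeros(p)
pi[:5] = 1.0                                     # only 5 instruments matter
u = rng.standard_normal(n)                       # source of endogeneity
d = Z @ pi + u                                   # endogenous regressor
y = 1.0 * d + u + 0.5 * rng.standard_normal(n)   # true coefficient = 1

# First stage by Lasso (fixed alpha here; the paper uses a data-driven penalty),
# then IV using the fitted values as the constructed optimal instrument.
d_hat = Lasso(alpha=0.1).fit(Z, d).predict(Z)
beta_iv = float((d_hat @ y) / (d_hat @ d))
print(round(beta_iv, 2))
```

Because the fitted first stage depends on Z alone (up to selection noise), the constructed instrument is approximately valid, and the IV estimate lands near the true coefficient despite p being of the same order as n.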

10.
To address the excessive volatility of fitted curves under traditional term-structure models, this paper constructs a dual-objective optimization model over absolute pricing error and the curvature of the fitted curve. Via a dual geometric program, the problem is converted into an absolute-distance minimization over a linearly constrained region, and the model parameters are solved using negative-exponential smoothing cubic L1 splines together with computational-geometry approximation algorithms. Comparing in-sample fit and out-of-sample forecasting performance across the negative-exponential cubic L1 spline, the NSS model, and B-splines confirms that the negative-exponential cubic L1 smoothing spline has clear advantages in pricing accuracy, structural fit, and out-of-sample prediction of term-structure volatility, enriching the theory and methods for studying term-structure dynamics and pricing in the Treasury bond market.

11.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K-stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade-off between finite sample precision and computational cost in the sequence of policy iteration estimators.

12.
Physiological daily inhalation rates reported in our previous study for normal‐weight subjects 2.6–96 years old were compared to inhalation data determined in free‐living overweight/obese individuals (n = 661) aged 5–96 years. Inhalation rates were also calculated in normal‐weight (n = 408), overweight (n = 225), and obese classes 1, 2, and 3 adults (n = 134) aged 20–96 years. These inhalation values were based on published indirect calorimetry measurements (n = 1,069) and disappearance rates of oral doses of water isotopes (i.e., ²H₂O and H₂¹⁸O) monitored by gas isotope ratio mass spectrometry usually in urine samples for an aggregate period of over 16,000 days. Ventilatory equivalents for overweight/obese subjects at rest and during their aggregate daytime activities (28.99 ± 6.03 L to 34.82 ± 8.22 L of air inhaled/L of oxygen consumed; mean ± SD) were determined and used for calculations of inhalation rates. The interindividual variability factor calculated as the ratio of the highest 99th percentile to the lowest 1st percentile of daily inhalation rates is higher for absolute data expressed in m3/day (26.7) compared to those of data in m3/kg‐day (12.2) and m3/m2‐day (5.9). Higher absolute rates generally found in overweight/obese individuals compared to their normal‐weight counterparts suggest higher intakes of air pollutants (in μg/day) for the former compared to the latter during identical exposure concentrations and conditions. Highest absolute mean (24.57 m3/day) and 99th percentile (55.55 m3/day) values were found in obese class 2 adults. They inhale on average 8.21 m3 more air per day than normal‐weight adults.

13.
The problem of estimating steady state absorption probabilities for first order stationary Markov chains having a finite state space is examined. As model parameters, these probabilities are analytic functions of transition probabilities Q and R, and they can be represented as P = (I − Q)^{-1}R. Estimators may be obtained by replacing the transition probabilities by their maximum likelihood estimators Q̂ and R̂ under multinomial theory. Using large sample multivariate normal theory, one can derive the asymptotic distribution of these estimators and can obtain large sample confidence intervals. Finally, an application related to estimating loss reserves for an installment loan portfolio assumed to satisfy a Markov chain is discussed.
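The plug-in estimator P̂ = (I − Q̂)^{-1}R̂ is a one-liner given transition counts. The small chain and the counts below are made up for illustration (two transient and two absorbing states), not taken from the paper's loan-portfolio application:

```python
import numpy as np

# Observed transition counts out of the transient states; column order is
# [transient 0, transient 1, absorbing 2, absorbing 3].
counts = np.array([[40, 30, 20, 10],     # transitions out of state 0
                   [25, 35, 15, 25]])    # transitions out of state 1
T_hat = counts / counts.sum(axis=1, keepdims=True)   # row-wise MLE (multinomial)
Q_hat = T_hat[:, :2]                     # transient -> transient block
R_hat = T_hat[:, 2:]                     # transient -> absorbing block
# Absorption probabilities: solve (I - Q_hat) P_hat = R_hat.
P_hat = np.linalg.solve(np.eye(2) - Q_hat, R_hat)
print(np.round(P_hat, 4))
```

Each row of P̂ sums to one, since starting from any transient state the chain is eventually absorbed somewhere; this is a useful sanity check on the estimate.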

14.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.

15.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long‐memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long‐memory signal and a white noise. We consider periodogram‐based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d* ∈ (0,1). We also show that the local Whittle estimator is asymptotically normal for d* ∈ (0,3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short‐memory component of the signal is sufficiently smooth, a convergence rate of n^{2/5−δ} for d* ∈ (0,3/4) can be attained, where n is the sample size and δ > 0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d* = 0. This yields a test for long memory in volatility.
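For concreteness, here is the generic local Whittle criterion (not this paper's extended version with the signal-plus-noise correction): minimize R(d) = log((1/m)Σ λ_j^{2d} I(λ_j)) − 2d(1/m)Σ log λ_j over the first m Fourier frequencies. The grid search and the white-noise test case (true d = 0) below are our own illustration:

```python
import numpy as np

def local_whittle(x, m):
    """Estimate the memory parameter d by grid-minimizing the local
    Whittle objective over the first m Fourier frequencies."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    grid = np.linspace(-0.49, 0.99, 297)
    return float(grid[np.argmin([R(d) for d in grid])])

rng = np.random.default_rng(4)
d_hat = local_whittle(rng.standard_normal(4096), m=256)  # white noise: d = 0
print(round(d_hat, 2))
```

On white noise the estimate is near zero with standard deviation roughly 1/(2√m), which is the usual asymptotic-normality scale for this estimator.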

16.
The problem of estimating delays experienced by customers with different priorities, and the determination of the appropriate delay announcement to these customers, in a multi‐class call center with time varying parameters, abandonments, and retrials is considered. The system is approximately modeled as an M(t)/M/s(t) queue with priorities, thus ignoring some of the real features like abandonments and retrials. Two delay estimators are proposed and tested in a series of simulation experiments. Making use of actual state‐dependent waiting time data from this call center, the delay announcements from the estimated delay distributions that minimize a newsvendor‐like cost function are considered. The performance of these announcements is also compared to announcing the mean delay. We find that an Erlang distribution‐based estimator performs well for a range of different under‐announcement penalty to over‐announcement penalty ratios.

17.
Given a population of cardinality q^r that contains a positive subset P of cardinality p, we give a trivial two-stage method whose first-stage pools each contain q^{r−2} objects. We assume that errors occur in the first stage. We give an algorithm that uses the results of the first stage to generate a set CP of candidate positives with |CP| ≤ (r + 1)q. We give the expected value of |CP \ P|. At most (r + 1)q trivial second-stage tests are needed to identify all the positives in CP. We assume that the second-stage tests are error free.

18.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as, of leverage effects, high frequency betas, and semivariance.

19.
We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model, based on the Gaussian likelihood conditional on initial values. We give conditions on the parameters such that the process Xt is fractional of order d and cofractional of order d−b; that is, there exist vectors β for which β′Xt is fractional of order d−b and no other fractionality order is possible. For b=1, the model nests the I(d−1) vector autoregressive model. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy 0 ≤ d0−b0 < 1/2 and b0 ≠ 1/2, for which β0′Xt is (asymptotically) a stationary process. Our main technical contribution is the proof of consistency of the maximum likelihood estimators. To this end, we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are independent and identically distributed with suitable moment conditions and initial values are bounded. Because the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of the maximum likelihood estimator of β is mixed Gaussian, while for the remaining parameters it is Gaussian. The limit distribution of the likelihood ratio test for cointegration rank is a functional of fractional Brownian motion of type II. If b0 < 1/2, all limit distributions are Gaussian or chi‐squared. We derive similar results for the model with d = b, allowing for a constant term.

20.
Fixed effects estimators of panel models can be severely biased because of the well‐known incidental parameters problem. We show that this bias can be reduced by using a panel jackknife or an analytical bias correction motivated by large T. We give bias corrections for averages over the fixed effects, as well as model parameters. We find large bias reductions from using these approaches in examples. We consider asymptotics where T grows with n, as an approximation to the properties of the estimators in econometric applications. We show that if T grows at the same rate as n, the fixed effects estimator is asymptotically biased, so that asymptotic confidence intervals are incorrect, but that they are correct for the panel jackknife. We show T growing faster than n^{1/3} suffices for correctness of the analytic correction, a property we also conjecture for the jackknife.
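The delete-one-period panel jackknife is simple to illustrate on a toy model of our own choosing where the incidental parameters bias is exact: estimating the error variance in y_it = α_i + ε_it, where the fixed-effects plug-in estimator has expectation σ²(T−1)/T:

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, sigma2 = 5000, 5, 1.0
# Panel with unit fixed effects alpha_i and i.i.d. errors.
y = np.sqrt(sigma2) * rng.standard_normal((n, T)) + rng.standard_normal((n, 1))

def sigma2_hat(y):
    # Fixed-effects plug-in estimator; biased by a factor (T-1)/T.
    return float(np.mean((y - y.mean(axis=1, keepdims=True)) ** 2))

theta = sigma2_hat(y)
# Panel jackknife: average the T delete-one-period estimates, then combine.
loo = float(np.mean([sigma2_hat(np.delete(y, t, axis=1)) for t in range(T)]))
theta_jack = T * theta - (T - 1) * loo
print(round(theta, 2), round(theta_jack, 2))
```

Here the correction is exact in expectation: T·σ²(T−1)/T − (T−1)·σ²(T−2)/(T−1) = σ², so the jackknifed estimate sits near 1 while the raw fixed-effects estimate sits near (T−1)/T = 0.8. In the nonlinear models the paper studies, the jackknife removes the leading O(1/T) bias term rather than the whole bias.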
