Similar Documents
3 similar documents found (search time: 0 ms)
1.
Scanner data are increasingly being used in the calculation of price indexes such as the CPI. The preeminent approach is the RYGEKS method (Ivancic, L., Diewert, W.E., and Fox, K.J. (2011), "Scanner Data, Time Aggregation and the Construction of Price Indexes," Journal of Econometrics, 161, 24–35). This uses multilateral methods to construct price parities across a rolling year and then links these to construct a nonrevisable index. While this approach performs well, some issues remain unresolved, in particular the optimal window length and the linking method. In this note, these questions are addressed. A novel linking method is proposed, along with the use of weighted GEKS as opposed to a fixed window. These approaches are illustrated empirically on a large scanner dataset and perform well.
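A minimal sketch of the rolling year GEKS idea described above, under simplifying assumptions: a balanced panel of matched items, bilateral Törnqvist parities, a 13-month window, and a movement splice for linking. The array layout and the names `tornqvist`, `geks`, and `rygeks` are illustrative; the note's novel linking method and its weighted GEKS variant are not reproduced here.

```python
import numpy as np

def tornqvist(prices, shares, s, t):
    # Bilateral Tornqvist index between periods s and t: geometric mean of
    # price relatives weighted by the average expenditure shares of s and t.
    w = 0.5 * (shares[s] + shares[t])
    return np.exp(np.sum(w * np.log(prices[t] / prices[s])))

def geks(prices, shares, window):
    # GEKS parities within a window: each parity from the window's first
    # period to period t is the geometric mean, over all link periods l,
    # of the chained bilateral comparison P(0,l) * P(l,t).
    T = len(window)
    P = np.ones(T)
    for j, t in enumerate(window):
        links = [tornqvist(prices, shares, window[0], l) *
                 tornqvist(prices, shares, l, t) for l in window]
        P[j] = np.prod(links) ** (1.0 / T)
    return P

def rygeks(prices, shares, window_len=13):
    # Rolling-window GEKS: recompute GEKS on each 13-month window and splice
    # on the movement of the last period, so earlier index values are never
    # revised. prices and shares are (periods x items) arrays of matched items.
    T = prices.shape[0]
    index = list(geks(prices, shares, list(range(window_len))))
    for end in range(window_len, T):
        window = list(range(end - window_len + 1, end + 1))
        P = geks(prices, shares, window)
        index.append(index[-1] * P[-1] / P[-2])
    return np.array(index)
```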

2.
The recently developed rolling year GEKS procedure makes maximum use of all matches in the data to construct nonrevisable price indexes that are approximately free from chain drift. A potential weakness is that unmatched items are ignored. In this article we use imputation Törnqvist price indexes as inputs into the rolling year GEKS procedure. These indexes account for quality changes by imputing the "missing prices" associated with new and disappearing items. Three imputation methods are discussed. The first method makes explicit imputations using a hedonic regression model which is estimated for each time period. The other two methods make implicit imputations; they are based on time dummy hedonic and time-product dummy regression models and are estimated on bilateral pooled data. We present empirical evidence for New Zealand from scanner data on eight consumer electronics products and find that accounting for quality change can make a substantial difference.
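The explicit imputation variant can be pictured roughly as follows: fit a log-price hedonic regression in each period, impute prices for items missing from that period, and compute a bilateral Törnqvist index over the union of items. The DataFrame layout, column names, and the use of scikit-learn are assumptions made for illustration; the time-dummy and time-product-dummy variants are not shown.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def impute_prices(df_s, df_t, characteristics):
    # Fit a log-price hedonic regression in each period, then impute the
    # "missing price" of items that were sold only in the other period.
    out = {}
    for name, df_own, df_other in [("s", df_s, df_t), ("t", df_t, df_s)]:
        model = LinearRegression().fit(df_own[characteristics],
                                       np.log(df_own["price"]))
        missing = df_other[~df_other["item"].isin(df_own["item"])]
        imputed = pd.Series(np.exp(model.predict(missing[characteristics])),
                            index=missing["item"])
        observed = df_own.set_index("item")["price"]
        out[name] = pd.concat([observed, imputed])
    return out["s"], out["t"]

def imputation_tornqvist(df_s, df_t, characteristics):
    # Bilateral Tornqvist over the union of items: items absent from a period
    # get a zero expenditure share there and an imputed (hedonic) price.
    p_s, p_t = impute_prices(df_s, df_t, characteristics)
    items = p_s.index.union(p_t.index)
    w_s = df_s.set_index("item")["share"].reindex(items, fill_value=0.0)
    w_t = df_t.set_index("item")["share"].reindex(items, fill_value=0.0)
    w = 0.5 * (w_s + w_t)
    return float(np.exp(np.sum(w * np.log(p_t[items] / p_s[items]))))
```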

3.
This article proposes and evaluates two new methods of reweighting preliminary data to obtain estimates more closely approximating those derived from the final data set. In our motivating example, the preliminary data are an early sample of tax returns, and the final data set is the sample after all tax returns have been processed. The new methods estimate a predicted propensity for late filing for each return in the advance sample and then poststratify based on these propensity scores. Using advance and complete sample data for 1982, we demonstrate that the new methods produce advance estimates generally much closer to the final estimates than those derived from the current advance estimation techniques. The results demonstrate the value of propensity modeling, a general-purpose methodology that can be applied to a wide range of problems, including adjustment for unit nonresponse and frame undercoverage as well as statistical matching.
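A rough sketch of the propensity-based poststratification idea: estimate each return's propensity to be filed late, cut the advance sample into propensity-score strata, and rescale weights so each stratum's weighted total matches the corresponding total from a complete reference file. The logistic model, quantile strata, and variable names are illustrative assumptions, not the article's exact specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def poststratify_by_propensity(advance, reference, covariates,
                               late_col="late", weight_col="weight",
                               n_strata=5):
    # Fit the late-filing propensity model on a complete reference file
    # (e.g. a prior year's final sample, where late filing is observed).
    model = LogisticRegression(max_iter=1000)
    model.fit(reference[covariates], reference[late_col])

    # Score both the advance sample and the reference file.
    adv = advance.copy()
    adv["p_late"] = model.predict_proba(adv[covariates])[:, 1]
    ref_p = model.predict_proba(reference[covariates])[:, 1]

    # Form strata from quantiles of the reference propensity scores.
    cuts = np.quantile(ref_p, np.linspace(0, 1, n_strata + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    adv["stratum"] = pd.cut(adv["p_late"], cuts, labels=False)
    ref_stratum = pd.cut(ref_p, cuts, labels=False)

    # Poststratification: scale advance weights so each stratum's weighted
    # total matches the corresponding total in the reference file.
    ref_totals = reference[weight_col].groupby(ref_stratum).sum()
    adv_totals = adv.groupby("stratum")[weight_col].sum()
    adv[weight_col] *= adv["stratum"].map(ref_totals / adv_totals)
    return adv
```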
