Similar Articles
20 similar articles found (search time: 312 ms)
1.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.

2.
Reply     
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.

4.
Personal consumption expenditures (PCE) in the National Income and Product Accounts are often used to investigate whether the time series properties of consumption are consistent with the permanent-income/life-cycle hypotheses. In this article, I address the issue of the general quality of the PCE data and its definitional consistency with the typical model of the intertemporal allocation of consumption. I find that, in terms of the population coverage and the consumption concept, the raw PCE data are unsuitable for the analysis of the permanent-income/life-cycle hypotheses. More fundamentally, adjustments to the data to provide greater consistency with the theory alter critical conclusions concerning the time series properties of consumption.

5.
Summary.  The paper investigates the life-cycle relationship of work and family life in Britain based on the British Household Panel Survey. Using hazard regression techniques we estimate a five-equation model, which includes birth events, union formation, union dissolution, employment and non-employment events. We find that transitions in and out of employment for men are relatively independent of other transitions. In contrast, there are strong links between employment of females, having children and union formation. By undertaking a detailed microsimulation analysis, we show that different levels of labour force participation by females do not necessarily lead to large changes in fertility events. Changes in union formation and fertility events, in contrast, have larger effects on employment.

6.
杭斌 《统计研究》2007,24(2):38-43
Abstract: The standard life-cycle theory of consumption assumes that consumers are able to solve complex dynamic optimisation problems, an assumption that does not hold, at least in China. Starting from Chinese conditions, this paper proposes a basic hypothesis about the consumption behaviour of urban Chinese households: because of credit constraints and peaks in consumption expenditure, the rational intertemporal choice for urban Chinese consumers is to smooth consumption across periods as far as possible while avoiding future liquidity constraints. Correspondingly, the paper assumes that urban households base their intertemporal consumption decisions mainly on a wealth target and permanent income, and builds an econometric model on this basis. The main empirical conclusions are: (1) since 1990, as the wealth target of urban households has risen, the marginal propensity to consume out of permanent income has shown a continuous downward trend; and (2) the consumption behaviour of urban Chinese households indeed involves a process of learning and adaptation.

7.
于洪霞 《统计研究》2015,32(5):56-63
Many studies using long panel data have noted that current income does not move in parallel with lifetime income, suggesting that returns to education may differ across stages of the life cycle. Understanding this heterogeneity is a basis for effective policy design and a question of considerable research interest, yet few studies have examined how returns to education vary over the life cycle. Using panel data from the China Health and Nutrition Survey and multilevel modelling, this study traces the trajectory of returns to education over the life cycle and analyses gender differences. The findings are: returns to education follow an inverted-U shape over the life cycle, rising and then falling, and are negative in the earliest period; in the early part of the life cycle women's returns to education exceed men's, while later men's exceed women's; and the higher the level of education, the longer the period over which income continues to grow.

8.
In this paper, we consider an inspection policy problem for a one-shot system with two types of units over a finite time span and want to determine inspection intervals optimally with given replacement points of Type 2 units. The interval availability and life-cycle cost are used as optimization criteria and estimated by simulation. Two optimization models are proposed to find the optimal inspection intervals for the exponential and general distributions. A heuristic method and a genetic algorithm are proposed to find near-optimal inspection intervals that satisfy the target interval availability and minimize the life-cycle cost. We study numerical examples to compare the heuristic method with the genetic algorithm and investigate the effect of the model parameters on the optimal solutions.
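The interval availability optimized above is estimated by simulation in the paper; as a rough illustration of the quantity involved, under the textbook simplification of a single unit with an exponential lifetime, inspected every T time units and restored to as-good-as-new at each inspection (not the paper's two-unit, finite-horizon model), the cycle-average availability has a closed form:

```python
import math

def cycle_average_availability(failure_rate, inspection_interval):
    """Average availability over one inspection cycle for a unit with an
    exponential lifetime: a failure stays hidden until the next inspection,
    at which the unit is restored to as-good-as-new, so
    A(T) = (1 - exp(-lambda * T)) / (lambda * T)."""
    lam_t = failure_rate * inspection_interval
    return (1.0 - math.exp(-lam_t)) / lam_t
```

Shortening the interval raises availability at the cost of more inspections, which is the trade-off the heuristic and the genetic algorithm search over.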

9.
Summary. Political partisanship is often claimed to be influenced by generational and life-cycle processes, with both being cited as the factor responsible for higher levels of Conservative identifications among older voters. Given the existence of over-time change, it is difficult to assess the validity of these claims, as even with repeated survey data any model is underidentified. This paper uses smoothed additive models to isolate and examine the non-linear component of the generational effect. Some identifying assumptions are presented to try to assess the extent to which linear aging or generational processes are responsible for the increased Conservatism of the elderly. The advantage of the smoothed additive models, however, is their ability to highlight non-linear effects, and this paper shows that, regardless of linear trends, people who entered the electorate during Conservative Parliaments are more likely to be Conservative partisan identifiers many years later. The introduction of a multiplicative term linking age to period effects supports this hypothesis by showing that younger people are more susceptible to the influence of period effects.

10.
Measuring life-cycle carbon emission coefficients for China's coal-power energy chain   Cited by: 10 (self-citations: 0, other citations: 10)
Against the background of heavy pollution in China's coal-fired power industry and the demand for low-carbon development of the electricity sector, this paper applies life-cycle analysis to build an overall accounting model, and sub-models for each stage, of carbon emissions along China's coal-power energy chain. Detailed calculation yields the CO2-equivalent emissions of each stage per unit of electricity generated by China's coal-fired plants and the total CO2-equivalent emissions of the coal-power energy chain. Comparison shows that coal-fired generation itself is the main source of greenhouse-gas emissions in the chain. Finally, the emission results for each stage are comprehensively evaluated and interpreted. The study helps to clarify the sources and magnitudes of greenhouse-gas emissions in each unit process of China's coal-power energy chain, to identify the priorities for emission-reduction policy, and to support low-carbon development of China's electricity sector.

11.
Crude oil and natural gas depletion may be modelled by a diffusion process based upon a constrained life-cycle. Here we consider the Generalized Bass Model. The choice is motivated by the realistic assumption that there is a self-evident link between oil and gas extraction and the spread of modern technologies in wide areas such as transport, heating, cooling, chemistry and hydrocarbon fuel consumption. Such a model may include deterministic or semi-deterministic regulatory interventions. Statistical analysis is based upon nonlinear methodologies and a more flexible autoregressive structure of residuals. The technical aim of this paper is to outline the meaningful hierarchy existing among the components of such diffusion models. The statistical effort in residual component analysis may be read as a significant confirmation of a well-founded diffusion process under rare but strong deterministic shocks. Applications of these ideas are proposed with reference to world oil and gas production data and to particular regions such as the mainland U.S.A., the U.K., Norway and Alaska. The main results give new evidence on the location of time peaks and on residual times to depletion.
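A minimal sketch of the diffusion life-cycle underlying this approach, using the standard Bass model, i.e. the special case of the Generalized Bass Model with intervention function x(t) ≡ 1; the parameter values used below are hypothetical:

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions (here: cumulative extraction) under the standard
    Bass model, the special case of the Generalized Bass Model with the
    intervention function x(t) = 1.
    m: ultimate potential (e.g. recoverable resource),
    p: innovation coefficient, q: imitation coefficient."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def bass_peak_time(p, q):
    """Time of the peak extraction rate: t* = ln(q / p) / (p + q)."""
    return math.log(q / p) / (p + q)
```

The peak time t* is the "time-peak location" quantity the abstract refers to: cumulative extraction rises along a logistic-like curve towards m, with the extraction rate peaking at t*.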

12.
Data in many experiments arise as curves, so it is natural to take a curve as the basic unit of analysis; this is the viewpoint of functional data analysis (FDA). Functional curves are encountered when units are observed over time. Although the whole curve itself is not observed, a sufficiently large number of evaluations, as is common with modern recording equipment, is assumed to be available. In this article, we consider statistical inference for the mean functions in the two-sample problem for functional data: assuming the two groups of curves are observed without noise, we test whether they have the same mean function. L2-norm-based and bootstrap-based test statistics are proposed. It is shown that the proposed methodology is flexible. A simulation study and real-data examples are used to illustrate our techniques.
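A minimal sketch of an L2-norm-based two-sample statistic of this kind, assuming all curves are evaluated on a common uniform grid, with the null distribution calibrated by resampling the pooled curves (one simple calibration choice; the authors' bootstrap may differ):

```python
import numpy as np

def l2_two_sample_stat(curves_a, curves_b, grid):
    """Integrated squared difference between the two sample mean curves.

    curves_a, curves_b: arrays of shape (n_curves, n_points), the two groups
    of curves evaluated on the common grid (assumed uniform)."""
    diff = curves_a.mean(axis=0) - curves_b.mean(axis=0)
    dx = grid[1] - grid[0]  # uniform-grid assumption
    return float(np.sum(diff ** 2) * dx)

def resampling_p_value(curves_a, curves_b, grid, n_resamples=200, seed=0):
    """Calibrate the statistic by randomly re-splitting the pooled curves,
    a permutation-style approximation of the null distribution."""
    rng = np.random.default_rng(seed)
    observed = l2_two_sample_stat(curves_a, curves_b, grid)
    pooled = np.vstack([curves_a, curves_b])
    n_a = curves_a.shape[0]
    exceed = 0
    for _ in range(n_resamples):
        idx = rng.permutation(pooled.shape[0])
        if l2_two_sample_stat(pooled[idx[:n_a]], pooled[idx[n_a:]], grid) >= observed:
            exceed += 1
    return (exceed + 1) / (n_resamples + 1)
```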

13.
Suppose that data are generated according to the model f(y | x; θ)g(x), where y is a response and x are covariates. We derive and compare semiparametric likelihood and pseudolikelihood methods for estimating θ in situations where units are not fully observed and where it is impossible or undesirable to model the covariate distribution. The probability that a unit is fully observed may depend on y, and there may be a subset of covariates which is observed only for a subsample of individuals. Our key assumptions are that the probability that a unit has missing data depends only on which of a finite number of strata (y, x) belongs to, and that stratum membership is observed for every unit. Applications include case–control studies in epidemiology, field reliability studies and broad classes of missing data and measurement error problems. Our results make fully efficient estimation of θ feasible, and they generalize and provide insight into a variety of methods that have been proposed for specific problems.

14.
We use a Bayesian multivariate time series model for the analysis of the dynamics of carbon monoxide atmospheric concentrations. The data are observed at four sites. It is assumed that the logarithm of the observed process can be represented as the sum of unobservable components: a trend, a daily periodicity, a stationary autoregressive signal and an erratic term. Bayesian analysis is performed via Gibbs sampling. In particular, we consider the problem of joint temporal prediction when data are observed at a few sites and it is not possible to fit a complex space–time model. A retrospective analysis of the trend component is also given, which is important in that it explains the evolution of the variability in the observed process.

15.
ABSTRACT

This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are “marginally significant”) correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests.

The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects.
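A back-of-envelope version of the two-sided bound: for a normal test statistic, the likelihood ratio most favourable to the alternative places the alternative mean at the observed z, giving exp(z²/2) ≈ 6.8 at |z| = 1.96, consistent with the stated bound of about 7 (the article's integrated likelihood ratios additionally average over a prior on the alternative):

```python
import math

def max_likelihood_ratio(z):
    """Upper bound on the likelihood ratio favouring the alternative for a
    normal test statistic: placing the alternative mean at the observed z
    maximises N(z; mean=z, 1) / N(z; mean=0, 1) = exp(z**2 / 2)."""
    return math.exp(z * z / 2.0)

# |z| = 1.96 corresponds to a two-sided p-value of about 0.05
bound = max_likelihood_ratio(1.96)  # roughly 6.8, in line with "approximately 7"
```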

16.
A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.
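The decision rule behind such displays can be sketched as follows: give each mean an interval of half-width LSD/2, so that two intervals fail to overlap exactly when the means differ by more than the least significant difference. The MSE, group size and t critical value below are hypothetical illustration values:

```python
import math

def lsd_intervals(means, mse, n_per_group, t_crit):
    """Intervals of half-width LSD/2 around each treatment mean.

    Two means differ significantly at the chosen level exactly when their
    intervals do not overlap: non-overlap means |m_i - m_j| > LSD, with
    LSD = t_crit * sqrt(2 * MSE / n)."""
    half_width = t_crit * math.sqrt(2.0 * mse / n_per_group) / 2.0
    return [(m - half_width, m + half_width) for m in means]

# Hypothetical example: three treatment means, n = 10 per group, MSE from the
# ANOVA table, and t_crit taken from t tables for the error degrees of freedom.
intervals = lsd_intervals([10.0, 11.5, 14.0], mse=4.0, n_per_group=10, t_crit=2.05)
```

With these numbers, the first two intervals overlap (no significant difference) while the second and third do not, and the plot reader can see that at a glance.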

17.
Summary.  Previous research has proposed a design-based analysis procedure for experiments that are embedded in complex sampling designs, in which the ultimate sampling units of an on-going sample survey are randomized over different treatments according to completely randomized designs or randomized block designs. Design-based Wald and t-statistics are applied to test whether sample means that are observed under various survey implementations are significantly different. This approach is generalized to experimental designs in which clusters of sampling units are randomized over the different treatments. Furthermore, test statistics are derived to test differences between ratios of two sample estimates that are observed under alternative survey implementations. The methods are illustrated with a simulation study and real-life applications of experiments that are embedded in the Dutch Labour Force Survey. The functionality of a software package that was developed to conduct these analyses is described.

18.
In dealing with ties in failure time data, the mechanism by which the data are observed should be considered. If the data are discrete, the process is relatively simple and is determined by what is actually observed. With continuous data, ties are not supposed to occur, but they do because the data are grouped into intervals (even if only rounding intervals). In this case there is actually a non-identifiability problem which can only be resolved by modelling the process. Various reasonable modelling assumptions are investigated in this paper. They lead to better ways of dealing with ties between observed failure times and censoring times of different individuals. The current practice is to assume that the censoring times occur after all the failures with which they are tied.
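The convention described in the last sentence can be made concrete: when a censoring time ties with failure times, it is ordered after them. A minimal sketch:

```python
def order_events(times, is_failure):
    """Order event times for analysis, placing a censoring time after any
    failure times it is tied with (failures sort first within a tie),
    which is the convention described above."""
    return sorted(zip(times, is_failure), key=lambda e: (e[0], 0 if e[1] else 1))
```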

19.
A multitype epidemic model is analysed assuming proportionate mixing between types. Estimation procedures for the susceptibilities and infectivities are derived for three sets of data: complete data, meaning that the whole epidemic process is observed continuously; the removal processes are observed continuously; only the final state is observed. Under the assumption of a major outbreak in a population of size n it is shown that, for all three data sets, the susceptibility estimators are always efficient, i.e. consistent with a √n rate of convergence. The infectivity estimators are 'in most cases' respectively efficient, efficient and unidentifiable. However, if some susceptibilities are equal then the corresponding infectivity estimators are respectively barely consistent (√log(n) rate of convergence), not consistent and unidentifiable. The estimators are applied to simulated data.

20.
When data are missing, analyzing only the records that are completely observed may cause bias or inefficiency. Existing approaches to handling missing data include likelihood, imputation and inverse probability weighting. In this paper, we propose three estimators inspired by deleting some completely observed data in the regression setting. First, we generate artificial observation indicators that are independent of the outcome given the observed data and draw inferences conditioning on the artificial observation indicators. Second, we propose a closely related weighting method. The proposed weighting method has more stable weights than those of the inverse probability weighting method (Zhao, L., Lipsitz, S., 1992. Designs and analysis of two-stage studies. Statistics in Medicine 11, 769–782). Third, we improve the efficiency of the proposed weighting estimator by subtracting the projection of the estimating function onto the nuisance tangent space. When data are missing completely at random, we show that the proposed estimators have asymptotic variances smaller than or equal to the variance of the estimator obtained from using completely observed records only. Asymptotic relative efficiency computations and simulation studies indicate that the proposed weighting estimators are more efficient than the inverse probability weighting estimators under a wide range of practical situations, especially when the missingness proportion is large.
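For reference, a minimal sketch of the baseline inverse-probability-weighted (Hájek-type) mean that such proposals are compared against, assuming the observation probabilities are known; the paper's regression setting and its more stable weighting scheme are not reproduced here:

```python
import numpy as np

def ipw_mean(y, observed, prob_observed):
    """Hajek-type inverse-probability-weighted mean of y.

    y: outcomes (may be np.nan where missing), observed: 0/1 indicators,
    prob_observed: probability of being observed for each unit (assumed known).
    Up-weighting complete cases by 1/probability corrects the bias a naive
    complete-case mean has when missingness is uneven across units."""
    w = observed / prob_observed
    vals = np.where(observed == 1, y, 0.0)  # missing entries contribute zero
    return float(np.sum(w * vals) / np.sum(w))
```

Note that small observation probabilities produce large, unstable weights, which is exactly the drawback the abstract's proposed weighting method is designed to mitigate.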

Copyright © 北京勤云科技发展有限公司  京ICP备09084417号