Similar Documents
20 similar documents found (search time: 21 ms)
1.
Erik Hernæs  Steinar Strøm 《LABOUR》1996,10(2):269-296
ABSTRACT: Various unemployment duration models are estimated on a large Norwegian dataset covering labour market history 1.1.1989-31.12.1992 for all persons who became unemployed during October 1990. As many unemployed leave the unemployment register without going directly to a job, two alternative definitions of unemployment are used — register unemployment and joblessness. The problem of heterogeneity is addressed both by partitioning the individuals into four categories by previous unemployment history, and by including a random term in the job hazard. Observed as well as unobserved heterogeneity affects the estimates of expected duration to a great extent. When gamma-distributed unobserved heterogeneity is accounted for, the estimates of duration dependence become more positive relative to models where unobserved heterogeneity is ignored. Among persons who are entitled to unemployment benefit, the duration dependence appears to be significantly positive. Alternative specifications of the baseline hazard hardly affect estimates of the effects of the covariates on duration.
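The mechanics behind the gamma-heterogeneity result can be illustrated with a small simulation. The sketch below fits a Weibull hazard with and without a gamma-distributed frailty term; the data-generating process and parameter values are illustrative, not the paper's dataset or estimator.

```python
# A minimal sketch: Weibull hazard with gamma frailty, illustrating how
# ignoring unobserved heterogeneity biases duration dependence downward.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # one observed covariate
beta, alpha, theta = 0.5, 1.2, 1.0        # true values; theta is the frailty variance
v = rng.gamma(1 / theta, theta, size=n)   # gamma frailty with mean 1, variance theta
u = rng.uniform(size=n)
# Invert S(t | x, v) = exp(-v * exp(x*beta) * t**alpha) to draw durations.
t = (-np.log(u) / (v * np.exp(x * beta))) ** (1 / alpha)

def negloglik(params, frailty):
    b, a = params[0], np.exp(params[1])
    cumhaz = np.exp(x * b) * t ** a                    # integrated hazard
    loghaz = np.log(a) + (a - 1) * np.log(t) + x * b   # log hazard at exit time
    if frailty:
        th = np.exp(params[2])
        # Gamma-mixed log-density: log h(t) + log S(t) after integrating out v.
        return -np.sum(loghaz - (1 / th + 1) * np.log1p(th * cumhaz))
    return -np.sum(loghaz - cumhaz)

fit0 = minimize(negloglik, [0.0, 0.0], args=(False,), method="BFGS")
fit1 = minimize(negloglik, [0.0, 0.0, 0.0], args=(True,), method="BFGS")
print("alpha ignoring heterogeneity:", np.exp(fit0.x[1]))  # biased below 1.2
print("alpha with gamma frailty:   ", np.exp(fit1.x[1]))   # close to 1.2
```

Because high-frailty individuals exit first, the survivors look increasingly low-risk, which masquerades as negative duration dependence unless the frailty is modelled.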

2.
Instrumental variables are widely used in applied econometrics to achieve identification and carry out estimation and inference in models that contain endogenous explanatory variables. In most applications, the function of interest (e.g., an Engel curve or demand function) is assumed to be known up to finitely many parameters (e.g., a linear model), and instrumental variables are used to identify and estimate these parameters. However, linear and other finite-dimensional parametric models make strong assumptions about the population being modeled that are rarely if ever justified by economic theory or other a priori reasoning and can lead to seriously erroneous conclusions if they are incorrect. This paper explores what can be learned when the function of interest is identified through an instrumental variable but is not assumed to be known up to finitely many parameters. The paper explains the differences between parametric and nonparametric estimators that are important for applied research, describes an easily implemented nonparametric instrumental variables estimator, and presents empirical examples in which nonparametric methods lead to substantive conclusions that are quite different from those obtained using standard, parametric estimators.
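A series (sieve) implementation conveys the basic idea: approximate the unknown function by basis terms and instrument them with a richer basis in the instrument. The sketch below uses an illustrative data-generating process and arbitrary polynomial bases; a serious application would choose the basis dimensions with care, since the problem is ill-posed, and this is not the specific estimator described in the paper.

```python
# A minimal sketch of a series nonparametric IV estimator via 2SLS on bases.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.uniform(-1, 1, n)                          # instrument
e = rng.normal(size=n)                             # structural error
x = 0.8 * z + 0.4 * e + 0.2 * rng.normal(size=n)   # endogenous regressor
y = np.sin(np.pi * x) + e                          # true g(x) = sin(pi*x)

P = np.vander(x, 5, increasing=True)               # basis for g: 1, x, ..., x^4
Q = np.vander(z, 7, increasing=True)               # richer instrument basis in z

# 2SLS on the bases: project P onto Q, then regress y on the projection.
Pz = Q @ np.linalg.lstsq(Q, P, rcond=None)[0]      # first stage, column by column
b = np.linalg.lstsq(Pz, y, rcond=None)[0]          # second stage

g_hat = lambda xg: np.vander(np.atleast_1d(xg), 5, increasing=True) @ b
for xv in (-0.5, 0.0, 0.5):
    print(xv, float(g_hat(xv)[0]), np.sin(np.pi * xv))   # estimate vs truth
```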

3.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual-specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual-specific regressors by means of cross-section averages such that asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
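The augmentation idea is simple to demonstrate. The sketch below (illustrative one-factor data-generating process, not the paper's Monte Carlo design) computes a pooled CCE estimate by partialling the cross-sectional averages of y and x out of each unit's data and then running pooled least squares.

```python
# A minimal sketch of the pooled CCE (CCEP) estimator on a one-factor panel.
import numpy as np

rng = np.random.default_rng(2)
N, T, beta = 100, 50, 1.0
f = rng.normal(size=T)                        # unobserved common factor
gam = rng.normal(1.0, 0.5, N)                 # heterogeneous loadings in y
lam = rng.normal(1.0, 0.5, N)                 # heterogeneous loadings in x
x = lam[:, None] * f + rng.normal(size=(N, T))
y = beta * x + gam[:, None] * f + rng.normal(size=(N, T))

# Partial a constant and the cross-sectional averages out of each unit's data;
# asymptotically this filters out the common factor.
Z = np.column_stack([np.ones(T), y.mean(axis=0), x.mean(axis=0)])
M = np.eye(T) - Z @ np.linalg.pinv(Z)         # annihilator of the augmentation terms
yt, xt = (M @ y.T).T, (M @ x.T).T

beta_ccep = (xt * yt).sum() / (xt * xt).sum() # pooled CCE slope
print("pooled CCE estimate of beta:", beta_ccep)   # close to 1.0
```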

4.
We present estimators for nonparametric functions that are nonadditive in unobservable random terms. The distributions of the unobservable random terms are assumed to be unknown. We show that when a nonadditive, nonparametric function is strictly monotone in an unobservable random term, and it satisfies some other properties that may be implied by economic theory, such as homogeneity of degree one or separability, the function and the distribution of the unobservable random term are identified. We also present convenient normalizations, to use when the properties of the function, other than strict monotonicity in the unobservable random term, are unknown. The estimators for the nonparametric function and for the distribution of the unobservable random term are shown to be consistent and asymptotically normal. We extend the results to functions that depend on a multivariate random term. The results of a limited simulation study are presented.
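The core identification logic is quantile-based: if Y = m(X, U) with m strictly increasing in U and U independent of X, then normalizing U ~ Uniform(0, 1) makes m(x, τ) the τ-th conditional quantile of Y given X = x. A minimal sketch, with an illustrative structural function and a crude nearest-neighbour conditional quantile (not the paper's estimator):

```python
# A minimal sketch: recover a nonadditive m(x, u) from conditional quantiles.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(0, 1, n)
u = rng.uniform(0, 1, n)                 # unobservable, independent of x
m = lambda xv, uv: (1 + xv) * uv ** 2    # true nonadditive structural function
y = m(x, u)

def m_hat(x0, tau, k=500):
    """tau-th conditional quantile of y among the k nearest neighbours of x0."""
    idx = np.argsort(np.abs(x - x0))[:k]
    return np.quantile(y[idx], tau)

for x0, tau in [(0.5, 0.5), (0.8, 0.9)]:
    print(m_hat(x0, tau), m(x0, tau))    # estimate vs m(x0, tau)
```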

5.
This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models generalize the class of incomplete instrumental variable (IV) models in which unobserved variables are single-valued functions of observed variables. Thus the models are referred to as generalized IV (GIV) models, but there are important cases in which instrumental variable restrictions play no significant role. Building on a definition of observational equivalence for incomplete models, the development uses results from random set theory that guarantee that the characterizations deliver sharp bounds, thereby dispensing with the need for case-by-case proofs of sharpness. The use of random sets defined on the space of unobserved variables allows identification analysis under mean and quantile independence restrictions on the distributions of unobserved variables conditional on exogenous variables as well as under a full independence restriction. The results are used to develop sharp bounds on the distribution of valuations in an incomplete model of English auctions, improving on the pointwise bounds available until now. Application of many of the results of the paper requires no familiarity with random set theory.

6.
We adapt the expectation–maximization algorithm to incorporate unobserved heterogeneity into conditional choice probability (CCP) estimators of dynamic discrete choice problems. The unobserved heterogeneity can be time-invariant or follow a Markov chain. By developing a class of problems where the difference in future value terms depends on a few conditional choice probabilities, we extend the class of dynamic optimization problems where CCP estimators provide a computationally cheap alternative to full solution methods. Monte Carlo results confirm that our algorithms perform quite well, both in terms of computational time and in the precision of the parameter estimates.

7.
This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement.

8.
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
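The separability point is easiest to see in the classical normal-mixture EM, where the E-step weights make each component's M-step a stand-alone weighted problem. A minimal sketch (two-component Gaussian mixture with illustrative data; the paper's estimators cover a much broader class):

```python
# A minimal sketch of EM for a finite mixture: the weighted complete-data
# log-likelihood is additively separable, so each component updates on its own.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(-1, 1, 600), rng.normal(2, 0.5, 400)])

pi, mu, sd = 0.5, np.array([-0.5, 0.5]), np.array([1.0, 1.0])
for _ in range(200):
    # E step: posterior probability that each observation is from component 1.
    d0 = (1 - pi) * norm.pdf(y, mu[0], sd[0])
    d1 = pi * norm.pdf(y, mu[1], sd[1])
    w = d1 / (d0 + d1)
    # M step: separable weighted updates, one component at a time.
    pi = w.mean()
    mu = np.array([np.average(y, weights=1 - w), np.average(y, weights=w)])
    sd = np.sqrt(np.array([np.average((y - mu[0]) ** 2, weights=1 - w),
                           np.average((y - mu[1]) ** 2, weights=w)]))
print(pi, mu, sd)    # roughly 0.4, (-1, 2), (1, 0.5)
```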

9.
Dario Pozzoli 《LABOUR》2009,23(1):131-169
This study focuses on the transition from university to first job, taking into account the graduates' characteristics and effects related to degree subject. A large data set from a survey on job opportunities for 1998 Italian graduates is used. The paper uses a non-parametric discrete-time single-risk model to study the employment hazard. Alternative mixing distributions have also been used to account for unobserved heterogeneity. The results indicate evidence of positive duration dependence after a short initial period of negative duration dependence. In addition, a competing-risk model has been estimated to characterize transitions out of unemployment.
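A discrete-time hazard with an unrestricted baseline reduces to a binary regression on person-period data. The sketch below (illustrative data-generating process, logit link, duration-interval dummies) shows the basic construction; the paper's model is richer, including mixing distributions for unobserved heterogeneity.

```python
# A minimal sketch of a discrete-time single-risk hazard: expand spells into
# person-period rows and fit a logit with duration dummies.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
n, K = 3000, 6                                 # spells, maximum observed periods
base = np.array([-2.5, -2.0, -1.6, -1.4, -1.3, -1.2])   # true baseline logits
female = rng.integers(0, 2, n)

durs, covs, exits = [], [], []
for i in range(n):
    for t in range(K):                         # spells surviving K periods are censored
        h = expit(base[t] + 0.3 * female[i])   # period-t exit hazard
        exit_now = rng.uniform() < h
        durs.append(t); covs.append(female[i]); exits.append(exit_now)
        if exit_now:
            break

D = np.eye(K)[np.array(durs)]                  # duration-interval dummies
X = np.column_stack([D, np.array(covs)])
yv = np.array(exits, dtype=float)

nll = lambda b: -np.sum(yv * np.log(expit(X @ b)) + (1 - yv) * np.log(expit(-X @ b)))
fit = minimize(nll, np.zeros(K + 1), method="BFGS")
print("baseline hazard logits:", fit.x[:K])    # compare with `base`
print("covariate coefficient: ", fit.x[K])     # roughly 0.3
```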

10.
Michele Lalla 《LABOUR》1995,9(3):481-506
ABSTRACT: This paper examines the procedure used to analyse a data set that includes only censored, incomplete spells. First, the distributions of incomplete spell durations are analysed without explanatory variables (such as age and gender), assuming that the unobserved completed spells have a Weibull distribution. The relationships between the mean of the incomplete spells and the mean of the completed spells are reported for first-job seekers, unemployed, employed, and self-employed workers. Given that the unobserved completed spells are Weibull-distributed, unobserved heterogeneity is introduced through the scale parameter of the Weibull. The heterogeneity, treated as a random variable, is analysed under a binomial or Weibull distribution. Because the beginning of a spell is reported retrospectively, recall errors are modelled, including the heaping effect. Using proportional hazards models, the methodology for studying the influence of explanatory variables on spell distributions is then described, again including both the heterogeneity and the heaping effect. On this basis, the lengths of on-going spells of unemployment for first-job seekers and unemployed workers are modelled, as well as the current job tenures of employed and self-employed workers.

11.
This paper presents point and interval estimators of both long-run and single-period target quantities in a simple cost-volume-profit (C-V-P) model. This model is a stochastic version of the “accountant's break-even chart” where the major component is a semivariable cost function. Although these features suggest obvious possibilities for practical application, a major purpose of this paper is to examine the statistical properties of target quantity estimators in C-V-P analysis. It is shown that point estimators of target quantity are biased and possess no moments of positive order, but are consistent. These properties are also shared by previous break-even models, even when all parameters are assumed known with certainty. After a test for positive variable margins, Fieller's [6] method is used to obtain interval estimators of relevant target quantities. This procedure therefore minimizes possible ambiguities in stochastic break-even analysis (noted by Ekern [3]).
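Fieller's method handles exactly the difficulty that the break-even quantity is a ratio of estimates (fixed cost over unit contribution margin) and therefore has no finite moments. A minimal sketch with illustrative placeholder inputs — the estimates, variances, covariance, and degrees of freedom below are made up, and this is generic Fieller rather than the paper's full C-V-P model:

```python
# A minimal sketch of a Fieller interval for the break-even quantity Q = F / m.
import numpy as np
from scipy.stats import t as tdist

F_hat, m_hat = 50_000.0, 12.5          # estimated fixed cost and unit margin
var_F, var_m, cov_Fm, dof = 4e6, 1.0, 0.0, 58
tcrit = tdist.ppf(0.975, dof)

# Fieller's set {q : (F_hat - q*m_hat)^2 <= t^2 * Var(F_hat - q*m_hat)} is a
# bounded interval only when the margin is significantly positive.
g = tcrit ** 2 * var_m / m_hat ** 2
if g >= 1:
    raise ValueError("margin not significantly positive: Fieller set unbounded")
ratio = F_hat / m_hat
center = (ratio - g * cov_Fm / var_m) / (1 - g)
half = (tcrit / (m_hat * (1 - g))) * np.sqrt(
    var_F - 2 * ratio * cov_Fm + ratio ** 2 * var_m
    - g * (var_F - cov_Fm ** 2 / var_m)
)
print("break-even point estimate:", ratio)
print("Fieller 95% CI:", (center - half, center + half))
```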

12.
As applied to the behavior of homeowners with mortgages, option theory predicts that mortgage prepayment or default will be exercised if the call or put option is ‘in the money’ by some specific amount. Our analysis tests the extent to which the option approach can explain default and prepayment behavior, evaluates the practical importance of modeling both options simultaneously, and models the unobserved heterogeneity of borrowers in the home mortgage market. The paper presents a unified model of the competing risks of mortgage termination by prepayment and default, considering the two hazards as dependent competing risks that are estimated jointly. It also accounts for the unobserved heterogeneity among borrowers, and estimates the unobserved heterogeneity simultaneously with the parameters and baseline hazards associated with the prepayment and default functions. Our results show that the option model, in its most straightforward version, does a good job of explaining default and prepayment, but it is not enough by itself. The simultaneity of the options is very important empirically in explaining behavior. The results also show that there is significant heterogeneity among mortgage borrowers. Ignoring this heterogeneity results in serious errors in estimating the prepayment behavior of homeowners.

13.
We study the asymptotic distribution of three-step estimators of a finite-dimensional parameter vector where the second step consists of one or more nonparametric regressions on a regressor that is estimated in the first step. The first-step estimator is either parametric or nonparametric. Using Newey's (1994) path-derivative method, we derive the contribution of the first-step estimator to the influence function. In this derivation, it is important to account for the dual role that the first-step estimator plays in the second-step nonparametric regression, that is, that of conditioning variable and that of argument.

14.
Estimating the unknown minimum (location) of a random variable has received some attention in the statistical literature, but not enough in the area of decision sciences. This is surprising, given that such estimation needs exist often in simulation and global optimization. This study explores the characteristics of two previously used simple percentile estimators of location. The study also identifies a new percentile estimator of the location parameter for the gamma, Weibull, and log-normal distributions with a smaller bias than the other two estimators. The performance of the new estimator, the minimum-bias percentile (MBP) estimator, and the other two percentile estimators are compared using Monte-Carlo simulation. The results indicate that, of the three estimators, the MBP estimator developed in this study provides, in most cases, the estimate with the lowest bias and smallest mean square error of the location for populations drawn from log-normal and gamma or Weibull (but not exponential) distributions. A decision diagram is provided for location estimator selection, based on the value of the coefficient of variation, when the statistical distribution is known or unknown.
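The kind of Monte Carlo comparison the study performs is easy to reproduce in outline. The sketch below contrasts the bias of the sample minimum with that of a low sample percentile as estimators of the location (shift) parameter of a shifted log-normal; the settings and estimator forms are illustrative and do not reproduce the paper's MBP construction.

```python
# A minimal sketch: Monte Carlo bias of two simple location estimators.
import numpy as np

rng = np.random.default_rng(5)
loc, n, reps = 10.0, 50, 5000            # true location, sample size, replications
est_min = np.empty(reps)
est_pct = np.empty(reps)
for r in range(reps):
    x = loc + rng.lognormal(mean=0.0, sigma=0.5, size=n)
    est_min[r] = x.min()                 # naive estimator: sample minimum
    est_pct[r] = np.percentile(x, 1.0)   # simple low-percentile estimator
print("bias of sample minimum:", est_min.mean() - loc)
print("bias of 1st percentile:", est_pct.mean() - loc)
```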

15.
We consider empirical measurement of equivalent variation (EV) and compensating variation (CV) resulting from price change of a discrete good using individual-level data when there is unobserved heterogeneity in preferences. We show that for binary and unordered multinomial choice, the marginal distributions of EV and CV can be expressed as simple closed-form functionals of conditional choice probabilities under essentially unrestricted preference distributions. These results hold even when the distribution and dimension of unobserved heterogeneity are neither known nor identified, and utilities are neither quasilinear nor parametrically specified. The welfare distributions take simple forms that are easy to compute in applications. In particular, average EV for a price rise equals the change in average Marshallian consumer surplus and is smaller than average CV for a normal good. These nonparametric point-identification results fail for ordered choice if the unit price is identical for all alternatives, thereby providing a connection to Hausman–Newey's (2014) partial identification results for the limiting case of continuous choice.
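The familiar quasilinear logit benchmark, which the paper generalizes away from, makes the consumer-surplus connection concrete: average surplus is the log-sum formula, and its change under a price rise is the average welfare loss (here EV = CV because utility is quasilinear). A minimal sketch with illustrative numbers, not the paper's nonparametric functionals:

```python
# A minimal sketch: average welfare loss from a price rise in a binary logit.
import numpy as np

alpha = 1.0                              # marginal utility of money
v0, v1 = 0.0, 2.0                        # outside option and inside good utilities

def avg_cs(p):
    """Average Marshallian surplus: the log-sum formula for a binary logit."""
    return np.log(np.exp(v0) + np.exp(v1 - alpha * p)) / alpha

p_old, p_new = 1.0, 1.5                  # unit price rises from 1.0 to 1.5
print("average EV (= change in average surplus):", avg_cs(p_old) - avg_cs(p_new))
```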

16.
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressively unbiased Monte Carlo estimators are derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost does not grow with the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance.
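The single-sample idea can be illustrated with importance reweighting: one Monte Carlo sample drawn under the original input distribution is reweighted to estimate what the output variance would be under a reduced input variance, with no re-simulation. The sketch below uses an illustrative two-input model, not the article's estimators or test cases.

```python
# A minimal sketch: output variance under a shrunken input variance, estimated
# by reweighting a single Monte Carlo sample.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, s0 = 200_000, 1.0
x1 = rng.normal(0, s0, n)
x2 = rng.normal(0, s0, n)
y = x1 ** 2 + 0.5 * x2                   # illustrative nonlinear model

def out_var(s1):
    """Output variance if sd(x1) were s1; weights are bounded when s1 <= s0."""
    w = norm.pdf(x1, 0, s1) / norm.pdf(x1, 0, s0)   # importance weights
    m = np.average(y, weights=w)
    return np.average((y - m) ** 2, weights=w)

for s1 in (1.0, 0.8, 0.6):
    print(f"sd(x1)={s1}: output variance approx {out_var(s1):.3f}")
    # analytic check: 2*s1**4 + 0.25
```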

17.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent-level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent-level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity {A_i0 : i = 1, …, N} as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.

18.
This paper studies the estimation of dynamic discrete games of incomplete information. Two main econometric issues appear in the estimation of these models: the indeterminacy problem associated with the existence of multiple equilibria and the computational burden in the solution of the game. We propose a class of pseudo maximum likelihood (PML) estimators that deals with these problems, and we study the asymptotic and finite sample properties of several estimators in this class. We first focus on two-step PML estimators, which, although they are attractive for their computational simplicity, have some important limitations: they are seriously biased in small samples; they require consistent nonparametric estimators of players' choice probabilities in the first step, which are not always available; and they are asymptotically inefficient. Second, we show that a recursive extension of the two-step PML, which we call nested pseudo likelihood (NPL), addresses those drawbacks at a relatively small additional computational cost. The NPL estimator is particularly useful in applications where consistent nonparametric estimates of choice probabilities either are not available or are very imprecise, e.g., models with permanent unobserved heterogeneity. Finally, we illustrate these methods in Monte Carlo experiments and in an empirical application to a model of firm entry and exit in oligopoly markets using Chilean data from several retail industries.
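The NPL recursion is compact enough to sketch for a static two-player entry game of incomplete information: alternate a pseudo likelihood step that takes opponents' entry probabilities as given with an update of those probabilities through the best-response mapping. Everything below (data-generating process, payoff specification, starting values) is illustrative, not the paper's dynamic oligopoly application.

```python
# A minimal sketch of the nested pseudo likelihood (NPL) loop for a static
# entry game: estimate payoffs given beliefs, then update beliefs, and repeat.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
M = 5000
th = np.array([0.5, 1.0, -1.5])          # true (constant, covariate, strategic effect)
x = rng.normal(size=M)

p = np.full(M, 0.5)                      # equilibrium CCPs solve p = L(th0 + th1*x + th2*p)
for _ in range(200):
    p = expit(th[0] + th[1] * x + th[2] * p)
d = (rng.uniform(size=(2, M)) < p).astype(float)   # both firms' entry decisions

def pseudo_nll(t, belief):
    """Pseudo log-likelihood treating opponents' entry probabilities as given."""
    q = np.clip(expit(t[0] + t[1] * x + t[2] * belief), 1e-10, 1 - 1e-10)
    return -np.sum(d * np.log(q) + (1 - d) * np.log(1 - q))

# First-stage CCPs: a reduced-form logit of entry on x alone.
r = minimize(lambda t: pseudo_nll(np.array([t[0], t[1], 0.0]), np.zeros(M)),
             np.zeros(2), method="BFGS").x
phat = expit(r[0] + r[1] * x)

t_hat = np.zeros(3)
for _ in range(20):                      # NPL iterations: estimate, then update CCPs
    t_hat = minimize(pseudo_nll, t_hat, args=(phat,), method="BFGS").x
    phat = expit(t_hat[0] + t_hat[1] * x + t_hat[2] * phat)
print("NPL estimate:", t_hat)            # roughly (0.5, 1.0, -1.5)
```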

19.
Hickey GL  Craig PS 《Risk analysis》2012,32(7):1232-1243
A species sensitivity distribution (SSD) models data on toxicity of a specific toxicant to species in a defined assemblage. SSDs are typically assumed to be parametric, despite noteworthy criticism, with a standard proposal being the log-normal distribution. Recently, and confusingly, there have emerged different statistical methods in the ecotoxicological risk assessment literature, independent of the distributional assumption, for fitting SSDs to toxicity data with the overall aim of estimating the concentration of the toxicant that is hazardous to p% of the biological assemblage (usually with p small). We analyze two such estimators derived from simple linear regression applied to the ordered log-transformed toxicity data values and probit-transformed rank-based plotting positions. These are compared to the more intuitive and statistically defensible confidence limit-based estimator. We conclude, based on a large-scale simulation study, that the latter estimator should be used in typical assessments where a pointwise value of the hazardous concentration is required.
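The confidence limit-based estimator is straightforward under the log-normal assumption: fit a normal to the log toxicity values and take a one-sided lower confidence limit on the 5th percentile (HC5) via the noncentral t distribution. A minimal sketch with illustrative placeholder toxicity values, not the paper's simulation design:

```python
# A minimal sketch of a log-normal SSD with a lower confidence limit on HC5.
import numpy as np
from scipy.stats import nct, norm

tox = np.array([1.2, 3.5, 4.1, 7.9, 12.0, 15.5, 21.0, 44.0])  # placeholder values
logx = np.log(tox)
n, xbar, s = len(logx), logx.mean(), logx.std(ddof=1)

z95 = norm.ppf(0.95)
hc5_point = np.exp(xbar - z95 * s)                 # point estimate of HC5
# One-sided lower tolerance bound on the 5th percentile via the noncentral t.
k = nct.ppf(0.95, df=n - 1, nc=z95 * np.sqrt(n)) / np.sqrt(n)
hc5_lower = np.exp(xbar - k * s)                   # 95% lower confidence limit
print(f"HC5 point estimate: {hc5_point:.3f}, 95% lower limit: {hc5_lower:.3f}")
```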

20.
This paper considers random coefficients binary choice models. The main goal is to estimate the density of the random coefficients nonparametrically. This is an ill-posed inverse problem characterized by an integral transform. A new density estimator for the random coefficients is developed, utilizing Fourier–Laplace series on spheres. This approach offers a clear insight on the identification problem. More importantly, it leads to a closed form estimator formula that yields a simple plug-in procedure requiring no numerical optimization. The new estimator, therefore, is easy to implement in empirical applications, while being flexible about the treatment of unobserved heterogeneity. Extensions including treatments of nonrandom coefficients and models with endogeneity are discussed.
