Similar Literature

20 similar documents found.
1.
In this paper, I analyze a decentralized search and matching economy with transferable utility composed of heterogeneous agents. I explore whether Becker's assortative matching result generalizes to an economy where agents engage in costly search. In an economy with explicit additive search costs, complementarities in joint production (supermodularity of the joint production function) lead to assortative matching. This is in contrast to previous literature, which had shown that in a search economy with discounting, assortative matching may fail even when the joint production function is supermodular.

2.
In Becker's (1973) neoclassical marriage market model, matching is positively assortative if types are complements: i.e., match output f(x, y) is supermodular in x and y. We reprise this famous result assuming time‐intensive partner search and transferable output. We prove existence of a search equilibrium with a continuum of types, and then characterize matching. After showing that Becker's conditions on match output no longer suffice for assortative matching, we find sufficient conditions valid for any search frictions and type distribution: supermodularity not only of output f, but also of log fx and log fxy. Symmetric submodularity conditions imply negatively assortative matching. Examples show these conditions are necessary.
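Becker's logic can be checked numerically: with a supermodular match output function, the surplus-maximizing assignment pairs high types together, while a submodular one pairs them negatively assortatively. The following brute-force sketch (my illustration, not from either paper) verifies this on a three-agent-per-side market:

```python
from itertools import permutations

def best_matching(xs, ys, f):
    """Return the assignment of xs to ys maximizing total output sum f(x, y)."""
    best, best_val = None, float("-inf")
    for perm in permutations(range(len(ys))):
        val = sum(f(xs[i], ys[j]) for i, j in enumerate(perm))
        if val > best_val:
            best, best_val = perm, val
    return best

xs = ys = [1.0, 2.0, 3.0]
# Supermodular f(x, y) = x * y: cross-partial > 0, so positive assortative
# matching (i-th ranked x with i-th ranked y) maximizes total output.
pam = best_matching(xs, ys, lambda x, y: x * y)
# Submodular f(x, y) = x + y - x * y: cross-partial < 0, so the optimal
# matching is negatively assortative (highest x with lowest y).
nam = best_matching(xs, ys, lambda x, y: x + y - x * y)
print(pam, nam)  # (0, 1, 2) (2, 1, 0)
```

This frictionless benchmark is exactly what the search-frictions results above qualify: with discounting, supermodularity of f alone no longer guarantees the assortative outcome.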

3.
We study matching and coalition formation environments allowing complementarities and peer effects. Agents have preferences over coalitions, and these preferences vary with an underlying, and commonly known, state of nature. Assuming that there is substantial variability of preferences across states of nature, we show that there exists a core stable coalition structure in every state if and only if agents' preferences are pairwise‐aligned in every state. This implies that there is a stable coalition structure if agents' preferences are generated by Nash bargaining over coalitional outputs. We further show that all stability‐inducing rules for sharing outputs can be represented by a profile of agents' bargaining functions and that agents match assortatively with respect to these bargaining functions. This framework allows us to show how complementarities and peer effects overturn well known comparative statics of many‐to‐one matching.

4.
We develop a search model of marriage where men and women draw utility from private consumption and leisure, and from a non‐market good that is produced in the home using time resources. We condition individual decisions on wages, education, and an index of family attitudes. A match‐specific, stochastic bliss shock induces variation in matching given wages, education, and family values, and triggers renegotiation and divorce. Using BHPS (1991–2008) data, we take as given changes in wages, education, and family values by gender, and study their impact on marriage decisions and intrahousehold resource allocation. The model allows us to evaluate how much of the observed gender differences in labor supply results from wages, education, and family attitudes. We find that family attitudes are a strong determinant of comparative advantages in home production of men and women, whereas education complementarities induce assortative mating through preferences.

5.
We study markets in which agents first make investments and are then matched into potentially productive partnerships. Equilibrium investments and the equilibrium matching will be efficient if agents can simultaneously negotiate investments and matches, but we focus on markets in which agents must first sink their investments before matching. Additional equilibria may arise in this sunk‐investment setting, even though our matching market is competitive. These equilibria exhibit inefficiencies that we can interpret as coordination failures. All allocations satisfying a constrained efficiency property are equilibria, and the converse holds if preferences satisfy a separability condition. We identify sufficient conditions (most notably, quasiconcave utilities) for the investments of matched agents to satisfy an exchange efficiency property as well as sufficient conditions (most notably, a single crossing property) for agents to be matched positive assortatively, with these conditions then forming the core of sufficient conditions for the efficiency of equilibrium allocations.

6.
This paper brings together the microeconomic‐labor and the macroeconomic‐equilibrium views of matching in labor markets. We nest a job matching model à la Jovanovic (1984) into a Mortensen and Pissarides (1994)‐type equilibrium search environment. The resulting framework preserves the implications of job matching theory for worker turnover and wage dynamics, and it also allows for aggregation and general equilibrium analysis. We obtain two new equilibrium implications of job matching and search frictions for wage inequality. First, learning about match quality and worker turnover map Gaussian output noise into an ergodic wage distribution of empirically accurate shape: unimodal, skewed, with a Paretian right tail. Second, high idiosyncratic productivity risk hinders learning and sorting, and reduces wage inequality. The equilibrium solutions for the wage distribution and for the aggregate worker flows—quits to unemployment and to other jobs, displacements, hires—provide the likelihood function of the model in closed form.

7.
We develop results for the use of Lasso and post‐Lasso methods to form first‐stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post‐Lasso in the first stage is root‐n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well‐approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta‐min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso‐based IV estimator with a data‐driven penalty performs well compared to recently advocated many‐instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso‐based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post‐Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non‐Gaussian, heteroscedastic disturbances that uses a data‐weighted ℓ1‐penalty function. By innovatively using moderate deviation theory for self‐normalized sums, we provide convergence rates for the resulting Lasso and post‐Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data‐driven method for choosing the penalty level that must be specified in obtaining Lasso and post‐Lasso estimates and establish its asymptotic validity under non‐Gaussian, heteroscedastic disturbances.
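The Lasso-then-IV pipeline can be sketched in a few lines: run a Lasso regression of the endogenous regressor on the many instruments, refit by OLS on the selected instruments (post-Lasso), and use the fitted value as the instrument. The toy below is a minimal NumPy sketch under assumed data-generating choices (3 relevant out of 50 instruments, a hand-picked fixed penalty rather than the paper's data-driven one, plain ISTA for the Lasso solve):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, beta_true = 2000, 50, 1.0
Z = rng.normal(size=(n, p))
pi = np.zeros(p); pi[:3] = [1.0, 0.7, 0.5]        # approximately sparse first stage
v = rng.normal(size=n)
x = Z @ pi + v                                    # endogenous regressor
y = beta_true * x + 0.8 * v + rng.normal(size=n)  # structural error correlated with v

# First stage: Lasso via proximal gradient (ISTA) with a fixed penalty lam.
lam, L = 0.1, np.linalg.eigvalsh(Z.T @ Z / n).max()
b = np.zeros(p)
for _ in range(500):
    g = Z.T @ (Z @ b - x) / n                     # gradient of the least-squares loss
    b = np.sign(b - g / L) * np.maximum(np.abs(b - g / L) - lam / L, 0.0)

S = np.flatnonzero(np.abs(b) > 1e-8)              # selected instruments
# Post-Lasso: OLS refit on the selected set, then IV with the fitted instrument.
x_hat = Z[:, S] @ np.linalg.lstsq(Z[:, S], x, rcond=None)[0]
beta_iv = (x_hat @ y) / (x_hat @ x)
print(S, beta_iv)  # selection should include the strong instruments; beta near 1
```

Because the fitted first stage screens out the 47 irrelevant instruments, the second-stage estimate avoids the many-instrument bias that plain 2SLS with all 50 instruments would incur.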

8.
We study how intermediation and asset prices in over‐the‐counter markets are affected by illiquidity associated with search and bargaining. We compute explicitly the prices at which investors trade with each other, as well as marketmakers' bid and ask prices, in a dynamic model with strategic agents. Bid–ask spreads are lower if investors can more easily find other investors or have easier access to multiple marketmakers. With a monopolistic marketmaker, bid–ask spreads are higher if investors have easier access to the marketmaker. We characterize endogenous search and welfare, and discuss empirical implications.

9.
This paper analyzes the conditions under which consistent estimation can be achieved in instrumental variables (IV) regression when the available instruments are weak and the number of instruments, Kn, goes to infinity with the sample size. We show that consistent estimation depends importantly on the strength of the instruments as measured by rn, the rate of growth of the so‐called concentration parameter, and also on Kn. In particular, when Kn→∞, the concentration parameter can grow, even if each individual instrument is only weakly correlated with the endogenous explanatory variables, and consistency of certain estimators can be established under weaker conditions than have previously been assumed in the literature. Hence, the use of many weak instruments may actually improve the performance of certain point estimators. More specifically, we find that the limited information maximum likelihood (LIML) estimator and the bias‐corrected two‐stage least squares (B2SLS) estimator are consistent when √Kn/rn→0, while the two‐stage least squares (2SLS) estimator is consistent only if Kn/rn→0 as n→∞. These consistency results suggest that LIML and B2SLS are more robust to instrument weakness than 2SLS.
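The mechanism is easy to see in a back-of-the-envelope calculation. Under the simplifying assumption E[Z'Z] = nI with a common first-stage coefficient on each instrument (my stylization, not the paper's general setup), the concentration parameter is n times the sum of squared first-stage coefficients over the error variance, so it can grow with Kn even when each coefficient shrinks at rate 1/√n:

```python
def concentration_parameter(n, K_n, pi_j, sigma2=1.0):
    """Expected concentration parameter mu^2 = n * sum_j pi_j^2 / sigma2
    under the stylized assumption E[Z'Z] = n * I with equal coefficients."""
    return n * K_n * pi_j ** 2 / sigma2

# Each instrument is individually weak: its coefficient shrinks like 1/sqrt(n),
# yet adding more such instruments makes the concentration parameter grow.
n = 10_000
growth = [concentration_parameter(n, K, n ** -0.5) for K in (1, 100, 10_000)]
print(growth)  # [1.0, 100.0, 10000.0] — mu^2 equals K_n in this design
```

With pi_j = 1/√n, a single instrument gives a concentration parameter of 1 (hopelessly weak), but accumulating Kn such instruments gives rn on the order of Kn, which is exactly the regime where LIML and B2SLS can remain consistent while 2SLS fails.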

10.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
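The simplest instance of what a repeated observation buys is the second moment: if x1 = x* + e1 and x2 = x* + e2 with mean-zero errors independent of each other and of x*, then E[x1 x2] = E[x*²] even though neither measurement alone identifies it. A minimal simulation (my illustration of this one moment identity, not the paper's full Fourier-based estimator) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x_true = rng.gamma(2.0, 1.0, n)       # unobserved regressor: E[x*] = 2, E[x*^2] = 6
x1 = x_true + rng.normal(0, 1.0, n)   # first error-contaminated measurement
x2 = x_true + rng.normal(0, 1.0, n)   # independent repeated measurement

# Cross-moment of the two measurements: the error terms are independent and
# mean zero, so E[x1 * x2] = E[x*^2] while E[x1^2] = E[x*^2] + Var(e1) is biased.
m2_naive = np.mean(x1 ** 2)
m2_hat = np.mean(x1 * x2)
print(m2_naive, m2_hat)  # naive moment near 7 (biased); cross-moment near 6
```

Higher moments require disentangling the error and signal distributions, which is where the paper's Fourier-transform argument (turning the convolution into algebraic equations in characteristic functions) does the real work.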

11.
We analyze a general search model with on‐the‐job search (OJS) and sorting of heterogeneous workers into heterogeneous jobs. For given values of nonmarket time, the relative efficiency of OJS, and the amount of search frictions, we derive a simple relationship between the unemployment rate, mismatch, and wage dispersion. We estimate the latter two from standard micro data. Our methodology accounts for measurement error, which is crucial to distinguish true from spurious mismatch and wage dispersion. We find that without frictions, output would be about 9.5% higher if firms could commit to paying wages as a function of match quality and 15.5% higher if they could not. Noncommitment leads to a business‐stealing externality which causes a 5.5% drop in output.

12.
A finite number of sellers (n) compete in schedules to supply an elastic demand. The cost of each seller is random, with common and private value components, and the seller receives a private signal about it. A Bayesian supply function equilibrium is characterized: The equilibrium is privately revealing and the incentives to rely on private signals are preserved. Supply functions are steeper with higher correlation among the cost parameters. For high (positive) correlation, supply functions are downward sloping, price is above the Cournot level, and as we approach the common value case, price tends to the collusive level. As correlation becomes maximally negative, we approach the competitive outcome. With positive correlation, private information coupled with strategic behavior induces additional distortionary market power above full information levels. Efficiency can be restored with appropriate subsidy schemes or with a precise enough public signal about the common value component. As the market grows large with the number of sellers, the equilibrium becomes price‐taking, bid shading is on the order of 1/n, and the order of magnitude of welfare losses is 1/n2. The results extend to inelastic demand, demand uncertainty, and demand schedule competition. A range of applications in product and financial markets is presented.

13.
This paper extends the conditional logit approach (Rasch, Andersen, Chamberlain) used in panel data models of binary variables with correlated fixed effects and strictly exogenous regressors. In a two‐period two‐state model, necessary and sufficient conditions on the joint distribution function of the individual‐and‐period specific shocks are given such that the sum of individual binary variables across time is a sufficient statistic for the individual effect. By extending a result of Chamberlain, it is shown that root‐n consistent regular estimators can be constructed in panel binary models if and only if the property of sufficiency holds. In applied work, the estimation method amounts to quasi‐differencing the binary variables as if they were continuous variables and transforming a panel data model into a cross‐section model. Semiparametric approaches can then be readily applied.
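In the two-period logit case, conditioning on the sufficient statistic y1 + y2 = 1 reduces estimation to a logistic regression of the second-period outcome on the differenced regressors, with the fixed effect eliminated. A minimal simulation sketch (my own toy design with a scalar coefficient and a one-dimensional Newton solver, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 20_000, 1.0
alpha = rng.normal(0, 1, n)                       # individual fixed effects
x = rng.normal(0, 1, (n, 2)) + alpha[:, None]     # regressors correlated with alpha
u = rng.logistic(0, 1, (n, 2))                    # logistic shocks
y = (beta_true * x + alpha[:, None] + u > 0).astype(int)

# Condition on switchers (y1 + y2 = 1): the fixed effect cancels, and
# P(y2 = 1 | switcher) = logistic(beta * (x2 - x1)).
switch = y.sum(axis=1) == 1
d, dx = y[switch, 1], x[switch, 1] - x[switch, 0]

b = 0.0
for _ in range(20):                               # Newton on the concave conditional log-likelihood
    p = 1.0 / (1.0 + np.exp(-b * dx))
    grad = np.sum((d - p) * dx)
    hess = -np.sum(p * (1.0 - p) * dx ** 2)
    b -= grad / hess
print(b)  # close to beta_true despite the unmodeled, correlated fixed effects
```

A pooled logit of y on x without the conditioning would be biased here, because x is built to be correlated with the unobserved alpha; the conditional estimator is immune by construction.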

14.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed‐critical‐value method and size‐correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post‐conservative model selection estimator.

15.
We consider an assemble‐to‐order (ATO) system with multiple products, multiple components which may be demanded in different quantities by different products, possible batch ordering of components, random lead times, and lost sales. We model the system as an infinite‐horizon Markov decision process under the average cost criterion. A control policy specifies when a batch of components should be produced, and whether an arriving demand for each product should be satisfied. Previous work has shown that a lattice‐dependent base‐stock and lattice‐dependent rationing (LBLR) policy is an optimal stationary policy for a special case of the ATO model presented here (the generalized M‐system). In this study, we conduct numerical experiments to evaluate the use of an LBLR policy for our general ATO model as a heuristic, comparing it to two other heuristics from the literature: a state‐dependent base‐stock and state‐dependent rationing (SBSR) policy, and a fixed base‐stock and fixed rationing (FBFR) policy. Remarkably, LBLR yields the globally optimal cost in each of more than 22,500 instances of the general problem, outperforming SBSR and FBFR with respect to both objective value (by up to 2.6% and 4.8%, respectively) and computation time (by up to three orders and one order of magnitude, respectively) in 350 of these instances (those on which we compare the heuristics). LBLR and SBSR perform significantly better than FBFR when replenishment batch sizes imperfectly match the component requirements of the most valuable or most highly demanded product. In addition, LBLR substantially outperforms SBSR if it is crucial to hold a significant amount of inventory that must be rationed.
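The structure of the simplest of these policies, FBFR, is easy to state in code: produce when component inventory falls below a fixed base-stock level, and satisfy an arriving demand only when inventory exceeds that product's fixed rationing level, so low-priority products are rejected first as stock runs down. The sketch below is a hypothetical single-component rendering of that logic (the thresholds, product labels, and function name are my own, not the paper's formulation):

```python
def fbfr_policy(inventory, base_stock, rationing, product):
    """FBFR-style decision for one component: (produce_batch?, satisfy_demand?).
    Produce when inventory is below the fixed base-stock level; satisfy a
    product's demand only when inventory exceeds its fixed rationing level."""
    produce = inventory < base_stock
    satisfy = inventory > rationing[product]
    return produce, satisfy

# Two products; product 0 is the more valuable one, so it gets the lower
# rationing level and is the last to be cut off as inventory falls.
rationing = {0: 0, 1: 3}
print(fbfr_policy(2, 5, rationing, 0))  # (True, True): replenish, serve product 0
print(fbfr_policy(2, 5, rationing, 1))  # (True, False): reserve stock, reject product 1
```

The richer SBSR and LBLR policies replace the fixed thresholds with thresholds that depend on the full inventory state (or on a lattice over it), which is what lets them track imperfectly matched batch sizes.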

16.
We study the monotonicity of the equilibrium bid with respect to the number of bidders n in affiliated private‐value models of first‐price sealed‐bid auctions and prove the existence of a large class of such models in which the equilibrium bid function is not increasing in n. We moreover decompose the effect of a change in n on the bid level into a competition effect and an affiliation effect. The latter suggests to the winner of the auction that competition is less intense than she had thought before the auction. Since the affiliation effect can occur in both private‐ and common‐value models, a negative relationship between the bid level and n does not allow one to distinguish between the two models and is also not necessarily (only) due to bidders taking account of the winner's curse.  
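The benchmark against which this non-monotonicity is surprising is the independent private values case, where only the competition effect operates and bids do rise with n. For values drawn Uniform[0, 1], the textbook symmetric equilibrium bid is b(v) = (n − 1)/n · v, a standard result I use here for contrast (it is not the affiliated model of the paper):

```python
def ipv_uniform_bid(v, n):
    """Symmetric first-price equilibrium bid with n bidders and independent
    private values drawn Uniform[0, 1]: b(v) = (n - 1) / n * v."""
    return (n - 1) / n * v

# With independence, more bidders means more shading is competed away,
# so the bid is monotonically increasing in n.
bids = [ipv_uniform_bid(0.8, n) for n in (2, 5, 10)]
print(bids)  # [0.4, 0.64, 0.72]
```

Under affiliation, the countervailing affiliation effect (winning signals that rivals' values are lower than feared) can dominate this competition effect, which is exactly how the bid function can fail to be increasing in n.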

17.
Moment‐matching discrete distributions were developed by Miller and Rice (1983) as a method to translate continuous probability distributions into discrete distributions for use in decision and risk analysis. Using Gaussian quadrature, they showed that an n‐point discrete distribution can be constructed that exactly matches the first 2n ‐ 1 moments of the underlying distribution. These moment‐matching discrete distributions offer several theoretical advantages over the typical discrete approximations as shown in Smith (1993), but they also pose practical problems. In particular, how does the analyst estimate the moments given only the subjective assessments of the continuous probability distribution? Smith suggests that the moments can be estimated by fitting a distribution to the assessments. This research note shows that the quality of the moment estimates cannot be judged solely by how close the fitted distribution is to the true distribution. Examples are used to show that the relative errors in higher order moment estimates can be greater than 100%, even though the cumulative distribution function is estimated within a Kolmogorov‐Smirnov distance less than 1%.
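The 2n − 1 moment-matching property is a direct consequence of Gaussian quadrature and is easy to verify numerically. For a standard normal, Gauss–Hermite nodes and weights (rescaled for the normal density rather than the raw e^{−x²} weight) give a 3-point distribution that reproduces the first five moments exactly; this is a standard quadrature fact, shown here as an illustration of the Miller–Rice construction:

```python
import numpy as np

# 3-point Gauss-Hermite rule, rescaled so it integrates against the N(0, 1)
# density: x = sqrt(2) * t and weights divided by sqrt(pi).
n = 3
h_nodes, h_weights = np.polynomial.hermite.hermgauss(n)
nodes = np.sqrt(2.0) * h_nodes
weights = h_weights / np.sqrt(np.pi)

# The n-point discrete distribution matches the first 2n - 1 = 5 moments
# of N(0, 1) exactly: 0, 1, 0, 3, 0.
moments = [float(np.sum(weights * nodes ** k)) for k in range(1, 2 * n)]
print(np.round(moments, 10))  # ≈ [0, 1, 0, 3, 0]
```

The note's warning is about the inputs to this machinery, not the machinery itself: if the assessed moments fed into the quadrature are off (as high-order moments of a fitted distribution easily are), the resulting discrete distribution inherits those errors despite the exact matching property.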

18.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent‐level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent‐level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity parameters Ai0, i = 1, …, N, as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.

19.
We study how matchmakers use prices to sort heterogeneous participants into competing matching markets and how equilibrium outcomes compare with monopoly in terms of prices, matching market structure, and sorting efficiency under the assumption of complementarity in the match value function. The role of prices to facilitate sorting is compromised by the need to survive price competition. We show that price competition leads to a high‐quality market that is insufficiently exclusive. As a result, the duopolistic outcome can be less efficient in sorting than the monopoly outcome in terms of total match value in spite of servicing more participants. (JEL: C7, D4)

20.
We consider the bootstrap unit root tests based on finite order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second‐order terms in their expansions are of stochastic orders Op(n−1/4) and Op(n−1/2), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second‐order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second‐order terms, and the errors in rejection probabilities are of order o(n−1/2) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
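The resampling scheme being refined works as follows: estimate the autoregression, re-impose the unit root, resample the centered residuals, rebuild bootstrap random walks, and take quantiles of the bootstrap Dickey–Fuller statistics as critical values. A minimal sketch for the simplest case (AR(1), no constant or trend, 499 bootstrap draws; my stripped-down illustration, not the paper's general setup):

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from the regression dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))     # observed series; here the null is true

# Residual bootstrap under the imposed unit root: resample centered residuals,
# rebuild random walks, and collect the bootstrap distribution of the t-stat.
dy, ylag = np.diff(y), y[:-1]
resid = dy - (ylag @ dy / (ylag @ ylag)) * ylag
resid -= resid.mean()
stats = []
for _ in range(499):
    e = rng.choice(resid, size=len(resid), replace=True)
    y_star = np.concatenate(([y[0]], y[0] + np.cumsum(e)))
    stats.append(df_tstat(y_star))
crit5 = np.quantile(stats, 0.05)
print(crit5)  # a bootstrap 5% critical value, near the asymptotic one (about -1.95)
```

The paper's result is that rejecting when the sample Dickey–Fuller statistic falls below crit5, rather than below the first-order asymptotic critical value, removes the second-order error in the rejection probability.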

