Similar Literature
20 similar documents found.
1.
This paper analyzes a sequential search model with adverse selection. We study information aggregation by the price—how close the equilibrium prices are to the full‐information prices—when search frictions are small. We identify circumstances under which prices fail to aggregate information well even when search frictions are small. We trace this to a strong form of the winner's curse that is present in the sequential search model. The failure of information aggregation may result in inefficient allocations.

2.
This paper concerns the two‐stage game introduced in Nash (1953). It formalizes a suggestion made (but not pursued) by Nash regarding equilibrium selection in that game, and hence offers an arguably more solid foundation for the “Nash bargaining with endogenous threats” solution. Analogous reasoning is then applied to an infinite horizon game to provide equilibrium selection in two‐person repeated games with contracts. In this setting, issues about enforcement of threats are much less problematic than in Nash's static setting. The analysis can be extended to stochastic games with contracts.

3.
This paper studies regulated health insurance markets known as exchanges, motivated by the increasingly important role they play in both public and private insurance provision. We develop a framework that combines data on health outcomes and insurance plan choices for a population of insured individuals with a model of a competitive insurance exchange to predict outcomes under different exchange designs. We apply this framework to examine the effects of regulations that govern insurers' ability to use health status information in pricing. We investigate the welfare implications of these regulations with an emphasis on two potential sources of inefficiency: (i) adverse selection and (ii) premium reclassification risk. We find substantial adverse selection leading to full unraveling of our simulated exchange, even when age can be priced. While the welfare cost of adverse selection is substantial when health status cannot be priced, that of reclassification risk is five times larger when insurers can price based on some health status information. We investigate several extensions including (i) contract design regulation, (ii) self‐insurance through saving and borrowing, and (iii) insurer risk adjustment transfers.

4.
We propose a method to correct for sample selection in quantile regression models. Selection is modeled via the cumulative distribution function, or copula, of the percentile error in the outcome equation and the error in the participation decision. Copula parameters are estimated by minimizing a method‐of‐moments criterion. Given these parameter estimates, the percentile levels of the outcome are readjusted to correct for selection, and quantile parameters are estimated by minimizing a rotated “check” function. We apply the method to correct wage percentiles for selection into employment, using data for the UK for the period 1978–2000. We also extend the method to account for the presence of equilibrium effects when performing counterfactual exercises.
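As a loose illustration of the "check" function at the heart of this estimator, the sketch below recovers a quantile by minimizing the Koenker–Bassett pinball loss. The selection-corrected ("rotated") version in the paper evaluates this loss at copula-adjusted percentile levels, which is not reproduced here; all names and numbers below are illustrative.

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett 'check' (pinball) loss: u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

def quantile_via_check(y, tau):
    """Estimate the tau-th quantile of y by minimizing total check loss
    over candidate values (here, a grid over the sample points)."""
    losses = [check_loss(y - c, tau).sum() for c in y]
    return y[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.normal(size=2001)
# At tau = 0.5 the check-loss minimizer is the sample median.
q_med = quantile_via_check(y, 0.5)
```

In the paper's procedure the percentile `tau` fed into this loss would itself be readjusted using the estimated copula before the quantile coefficients are computed.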

5.
This paper examines how sales force impacts competition and equilibrium prices in the context of a privatized pension market. We use detailed administrative data on fund manager choices and worker characteristics at the inception of Mexico's privatized social security system, where fund managers had to set prices (management fees) at the national level, but could select sales force levels by local geographic areas. We develop and estimate a model of fund manager choice where sales force can increase or decrease customer price sensitivity. We find exposure to sales force lowered price sensitivity, leading to inelastic demand and high equilibrium fees. We simulate oft‐proposed policy solutions: a supply‐side policy with a competitive government player and a demand‐side policy that increases price elasticity. We find that demand‐side policies are necessary to foster competition in social safety net markets with large segments of inelastic consumers.

6.
There is a widely held view within the general public that large corporations should act in the interests of a broader group of agents than just their shareholders (the stakeholder view). This paper presents a framework where this idea can be justified. The point of departure is the observation that a large firm typically faces endogenous risks that may have a significant impact on the workers it employs and the consumers it serves. These risks generate externalities on these stakeholders which are not internalized by shareholders. As a result, in the competitive equilibrium, there is under‐investment in the prevention of these risks. We suggest that this under‐investment problem can be alleviated if firms are instructed to maximize the total welfare of their stakeholders rather than shareholder value alone (stakeholder equilibrium). The stakeholder equilibrium can be implemented by introducing new property rights (employee rights and consumer rights) and instructing managers to maximize the total value of the firm (the value of these rights plus shareholder value). If there is only one firm, the stakeholder equilibrium is Pareto optimal. However, this is not true with more than one firm and/or heterogeneous agents, which illustrates some of the limits of the stakeholder model.

7.
We develop an equilibrium framework that relaxes the standard assumption that people have a correctly specified view of their environment. Each player is characterized by a (possibly misspecified) subjective model, which describes the set of feasible beliefs over payoff‐relevant consequences as a function of actions. We introduce the notion of a Berk–Nash equilibrium: Each player follows a strategy that is optimal given her belief, and her belief is restricted to be the best fit among the set of beliefs she considers possible. The notion of best fit is formalized in terms of minimizing the Kullback–Leibler divergence, which is endogenous and depends on the equilibrium strategy profile. Standard solution concepts such as Nash equilibrium and self‐confirming equilibrium constitute special cases where players have correctly specified models. We provide a learning foundation for Berk–Nash equilibrium by extending and combining results from the statistics literature on misspecified learning and the economics literature on learning in games.
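The "best fit among feasible beliefs" step can be made concrete with a toy discrete example: when the truth lies outside the player's subjective model, minimizing Kullback–Leibler divergence selects the closest feasible belief rather than the truth. The distributions and the restricted parameter set below are invented for illustration.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# True distribution over two outcomes (induced by the equilibrium strategy).
p_true = [0.7, 0.3]
# A misspecified family the player entertains: q(theta) = [theta, 1 - theta],
# with theta restricted to {0.5, 0.6, 0.8} -- the truth, 0.7, is excluded.
candidates = [0.5, 0.6, 0.8]
best = min(candidates, key=lambda th: kl(p_true, [th, 1 - th]))
# The 'best fit' belief is the feasible theta closest in KL, here 0.6.
```

In the equilibrium concept itself, `p_true` is endogenous because it depends on the strategy profile, so belief and strategy must be mutually consistent.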

8.
We consider a large market where auctioneers with private reservation values compete for bidders by announcing cheap‐talk messages. If auctioneers run efficient first‐price auctions, then there always exists an equilibrium in which each auctioneer truthfully reveals her type. The equilibrium is constrained efficient, assigning more bidders to auctioneers with larger gains from trade. The choice of the trading mechanism is crucial for the result. Most notably, the use of second‐price auctions (equivalently, ex post bidding) leads to the nonexistence of any informative equilibrium. We examine the robustness of our finding in various dimensions, including finite markets and equilibrium selection.

9.
Kutsal Dogan, Decision Sciences, 2010, 41(4): 755-785
Consumers need to exert effort to use the incentives provided in a promotion campaign. This effort is critical in the consumers' decision process and for the success of the campaign. We develop a model of consumer redemption effort that is general in nature and is applicable to coupons, rebates, and other price‐discrimination devices. We find that redemption effort has an intricate impact on a firm's profit and on consumer surplus. We find that there are cases where a firm would like to operate in a low redemption cost environment while consumers would be better off with higher costs. We identify cases where price can remain the same with or without the promotion. In these cases, it is possible that the surplus for each individual consumer is higher when a firm price discriminates and improves its profit. Our results indicate that a firm would rather have variation in consumer redemption costs than variation in consumer valuations. However, in a market with low valuation variability, consumer redemption cost variability is essential for an efficient promotion campaign. Therefore, the markets that naturally have a lot of variability in consumer valuations should be the ones targeted for online promotion programs that reduce consumer effort levels, not the markets with low variability.

10.
We provide general conditions under which principal‐agent problems with either one or multiple agents admit mechanisms that are optimal for the principal. Our results cover as special cases pure moral hazard and pure adverse selection. We allow multidimensional types, actions, and signals, as well as both financial and non‐financial rewards. Our results extend to situations in which there are ex ante or interim restrictions on the mechanism, and allow the principal to have decisions in addition to choosing the agent's contract. Beyond measurability, we require no a priori restrictions on the space of mechanisms. It is not unusual for randomization to be necessary for optimality and so it (should be and) is permitted. Randomization also plays an essential role in our proof. We also provide conditions under which some forms of randomization are unnecessary.

11.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
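A stripped-down sketch of the local-polynomial idea behind sharp RD: fit a polynomial on each side of the cutoff within a bandwidth and take the jump in fitted values at the cutoff. This toy uses a uniform kernel and no bias correction or robust standard errors, so it is not the authors' procedure; the data-generating process and all numbers are invented.

```python
import numpy as np

def rd_estimate(x, y, cutoff, h, order):
    """Sharp-RD point estimate: fit a polynomial of the given order
    separately on each side of the cutoff within bandwidth h, then take
    the difference of the two fitted values at the cutoff."""
    left = (x >= cutoff - h) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + h)
    fit = lambda m: np.polyval(np.polyfit(x[m] - cutoff, y[m], order), 0.0)
    return fit(right) - fit(left)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)
tau = 0.5                                               # true jump at the cutoff
y = 1.0 + 0.8 * x + tau * (x >= 0) + rng.normal(0, 0.1, x.size)
est_lin = rd_estimate(x, y, cutoff=0.0, h=0.3, order=1)   # conventional local linear
est_quad = rd_estimate(x, y, cutoff=0.0, h=0.3, order=2)  # higher-order analogue
```

The special case mentioned in the abstract corresponds to moving from `order=1` to `order=2` while keeping the same bandwidth.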

12.
Empirical studies have delivered mixed conclusions on whether the widely acclaimed assertions of lower electronic retail (e‐tail) prices are true and to what extent these prices impact conventional retail prices, profits, and consumer welfare. For goods that require little in‐person pre‐ or postsales support such as CDs, DVDs, and books, we extend Balasubramanian's e‐tailer‐in‐the‐center, spatial, circular market model to examine the impact of a multichannel e‐tailer's presence on retailers' decisions to relocate, on retail prices and profits, and consumer welfare. We demonstrate several counter‐intuitive results. For example, when the disutility of buying online and shipping costs are relatively low, retailers are better off by not relocating in response to an e‐tailer's entry into the retail channel. In addition, such an entry—a multichannel strategy—may lead to increased retail prices and increased profits across the industry. Finally, consumers can be better off with less channel competition. The underlying message is that inferences regarding prices, profits, and consumer welfare critically depend on specifications of the good, disutility and shipping costs versus transportation costs (or more generally, positioning), and competition.

13.
Intention theories, such as the Theory of Reasoned Action, the Theory of Planned Behavior, and the Technology Acceptance Model (TAM), have been widely adopted to explain information system usage. These theories, however, do not explicitly consider the availability of alternative systems that users may have access to and may have a preference for. Recent calls for advancing knowledge in technology acceptance have included the examination of selection among competing channels and extending the investigation beyond adoption of a single technology. In this study, we provide a theoretical extension to the TAM by integrating preferential decision knowledge to its constructs. The concepts of Attitude‐Based Preference and Attribute‐Based Preference are introduced to produce a new intention model, namely the Model of Technology Preference (MTP). MTP was validated in the context of alternative behaviors in adopting two service channels: one a technology‐based online store and the other a traditional brick‐and‐mortar store. A sample of 320 responses was used to run a structural equation model. Empirical results show that MTP is a powerful predictor of alternative behaviors. Furthermore, in the context of service channel selection, incorporating preferential decision knowledge into intention models can be used to develop successful business strategies.

14.
We study from both a theoretical and an empirical perspective how a network of military alliances and enmities affects the intensity of a conflict. The model combines elements from network theory and from the politico‐economic theory of conflict. We obtain a closed‐form characterization of the Nash equilibrium. Using the equilibrium conditions, we perform an empirical analysis using data on the Second Congo War, a conflict that involves many groups in a complex network of informal alliances and rivalries. The estimates of the fighting externalities are then used to infer the extent to which the conflict intensity can be reduced through (i) dismantling specific fighting groups involved in the conflict; (ii) weapon embargoes; (iii) interventions aimed at pacifying animosity among groups. Finally, with the aid of a random utility model, we study how policy shocks can induce a reshaping of the network structure.

15.
This paper examines how prices, markups, and marginal costs respond to trade liberalization. We develop a framework to estimate markups from production data with multi‐product firms. This approach does not require assumptions on the market structure or demand curves faced by firms, nor assumptions on how firms allocate their inputs across products. We exploit quantity and price information to disentangle markups from quantity‐based productivity, and then compute marginal costs by dividing observed prices by the estimated markups. We use India's trade liberalization episode to examine how firms adjust these performance measures. Not surprisingly, we find that trade liberalization lowers factory‐gate prices and that output tariff declines have the expected pro‐competitive effects. However, the price declines are small relative to the declines in marginal costs, which fall predominantly because of the input tariff liberalization. The reason for this incomplete cost pass‐through to prices is that firms offset their reductions in marginal costs by raising markups. Our results demonstrate substantial heterogeneity and variability in markups across firms and time and suggest that producers benefited relative to consumers, at least immediately after the reforms.
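The final step described above, backing out marginal cost by dividing an observed price by an estimated multiplicative markup, is simple arithmetic; the numbers below are hypothetical, not the paper's estimates.

```python
def marginal_cost(price, markup):
    """Back out marginal cost from an observed price and an estimated
    multiplicative markup (price = markup * marginal cost)."""
    return price / markup

# Hypothetical pre/post-liberalization figures for one product:
mc_pre = marginal_cost(price=100.0, markup=1.25)   # marginal cost of 80
mc_post = marginal_cost(price=95.0, markup=1.35)
# Price fell 5%, but marginal cost fell by roughly 12% (80 -> ~70.4):
# the firm offset part of its cost decline by raising its markup,
# which is the incomplete pass-through pattern the paper documents.
```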

16.
We document abrupt increases in retail beer prices just after the consummation of the MillerCoors joint venture, both for MillerCoors and its major competitor, Anheuser‐Busch. Within the context of a differentiated‐products pricing model, we test and reject the hypothesis that the price increases can be explained by movement from one Nash–Bertrand equilibrium to another. Counterfactual simulations imply that prices after the joint venture are 6%–8% higher than they would have been with Nash–Bertrand competition, and that markups are 17%–18% higher. We relate the results to documentary evidence that the joint venture may have facilitated price coordination.
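As a toy illustration of the Nash–Bertrand benchmark that the counterfactuals are measured against, the sketch below solves a two-firm differentiated-products Bertrand game with linear demand by best-response iteration. The demand system and all parameters are invented, not the paper's estimated model.

```python
def bertrand_prices(c1, c2, a=10.0, b=1.0, d=0.5, iters=200):
    """Nash-Bertrand prices for two single-product firms with linear
    differentiated demand q_i = a - b*p_i + d*p_j. The first-order
    condition gives the best response p_i = (a + d*p_j + b*c_i) / (2b),
    iterated to its fixed point (a contraction when d < 2b)."""
    p1 = p2 = 0.0
    for _ in range(iters):
        p1, p2 = (a + d * p2 + b * c1) / (2 * b), (a + d * p1 + b * c2) / (2 * b)
    return p1, p2

# Symmetric marginal costs of 2: both firms price at 8 in equilibrium.
p1, p2 = bertrand_prices(2.0, 2.0)
```

Prices observed above this benchmark, as in the paper's post-merger data, are what motivate the coordination interpretation.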

17.
Our paper provides a complete characterization of leverage and default in binomial economies with financial assets serving as collateral. Our Binomial No‐Default Theorem states that any equilibrium is equivalent (in real allocations and prices) to another equilibrium in which there is no default. Thus actual default is irrelevant, though the potential for default drives the equilibrium and limits borrowing. This result is valid with arbitrary preferences and endowments, contingent or noncontingent promises, many assets and consumption goods, production, and multiple periods. We also show that only no‐default equilibria would be selected if there were the slightest cost of using collateral or handling default. Our Binomial Leverage Theorem shows that equilibrium Loan to Value (LTV) for noncontingent debt contracts is the ratio of the worst‐case return of the asset to the riskless gross rate of interest. In binomial economies, leverage is determined by down risk and not by volatility.
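The Binomial Leverage Theorem as stated yields a one-line computation: LTV equals the asset's worst-case gross return divided by the riskless gross rate. The numbers below are hypothetical.

```python
def binomial_ltv(price, payoff_down, gross_riskless_rate):
    """Binomial Leverage Theorem (as stated in the abstract): equilibrium
    loan-to-value for a noncontingent debt contract is the worst-case
    gross return of the asset divided by the riskless gross rate."""
    worst_case_return = payoff_down / price
    return worst_case_return / gross_riskless_rate

# Hypothetical numbers: the asset costs 100 today and pays 80 in the
# down state; the riskless gross rate is 1.05. The largest default-free
# promise repays 80, so the loan today is 80/1.05 and LTV = (80/100)/1.05.
ltv = binomial_ltv(price=100.0, payoff_down=80.0, gross_riskless_rate=1.05)
```

Note that the up-state payoff never enters: as the abstract puts it, leverage is determined by down risk, not by volatility.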

18.
Online markets such as eBay and Amazon rely on electronic reputation or feedback systems to curtail adverse selection and moral hazard risks and promote trust among participants in the marketplace. These systems are based on the idea that providing information about a trader's past behavior (performance on previous market transactions) allows market participants to form judgments regarding the trustworthiness of potential interlocutors in the marketplace. It is often assumed, however, that traders correctly process the data presented by these systems when updating their initial beliefs. In this article, we demonstrate that this assumption does not hold. Using a controlled laboratory experiment simulating an online auction site with 127 participants acting as buyers, we find that participants interpret seller feedback information in a biased (non‐Bayesian) fashion, overemphasizing the compositional strength (i.e., the proportion of positive ratings) of the reputational information and underemphasizing the weight (predictive validity) of the evidence as represented by the total number of transactions rated. Significantly, we also find that the degree to which buyers misweigh seller feedback information is moderated by the presentation format of the feedback system as well as attitudinal and psychological attributes of the buyer. Specifically, we find that buyers process feedback data presented in an Amazon‐like format—a format that more prominently emphasizes the strength dimension of feedback information—in a more biased (less‐Bayesian) manner than identical ratings data presented using an eBay‐like format. We further find that participants with greater institution‐based trust (i.e., structural assurance) and prior online shopping experience interpreted feedback data in a more biased (less‐Bayesian) manner. The implications of these findings for both research and practice are discussed.
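The Bayesian benchmark against which the buyers are judged can be illustrated with Beta-Binomial updating, which separates the strength of feedback (proportion positive) from its weight (number of ratings). The prior and the rating counts below are illustrative, not taken from the experiment.

```python
def posterior_mean_positive(pos, total, a=1.0, b=1.0):
    """Beta(a, b)-Binomial posterior mean of a seller's probability of a
    positive transaction, after observing pos positives out of total."""
    return (a + pos) / (a + b + total)

# Same strength (90% positive), very different weight of evidence:
small = posterior_mean_positive(9, 10)      # 9 of 10 positive
large = posterior_mean_positive(900, 1000)  # 900 of 1000 positive
# A Bayesian trusts the larger sample more: its posterior sits close to
# the observed 0.9, while the small sample is shrunk noticeably toward
# the uniform prior's mean of 0.5. The experiment's finding is that
# buyers largely ignore this distinction and react mainly to the 90%.
```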

19.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. 
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
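One standard orthogonal-moment construction of the kind emphasized above is the doubly robust (AIPW) score for the average treatment effect. The minimal sketch below feeds it the true propensity score together with deliberately wrong outcome models to illustrate the robustness property; the data-generating process and all numbers are invented.

```python
import numpy as np

def aipw_ate(y, d, ps, mu1, mu0):
    """Augmented inverse-propensity-weighted (doubly robust / orthogonal
    moment) ATE estimator: consistent if either the propensity score ps
    or the outcome regressions mu1, mu0 are correctly specified."""
    term1 = mu1 + d * (y - mu1) / ps
    term0 = mu0 + (1 - d) * (y - mu0) / (1 - ps)
    return float(np.mean(term1 - term0))

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)
ps_true = 1 / (1 + np.exp(-x))                 # treatment probability given x
d = (rng.uniform(size=n) < ps_true).astype(float)
y = 1.0 + 2.0 * d + x + rng.normal(size=n)     # true ATE = 2
# Correct propensity score, deliberately wrong outcome models (all zero):
est = aipw_ate(y, d, ps_true, mu1=np.zeros(n), mu0=np.zeros(n))
```

In the paper's setting, `ps`, `mu1`, and `mu0` would be estimated by regularized or machine-learning methods, and the orthogonality of this moment is what makes the resulting inference uniformly valid after such first-stage estimation.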

20.
This paper develops a theory of optimal provision of commitment devices to people who value both commitment and flexibility and whose preferences differ in the degree of time inconsistency. If time inconsistency is observable, both a planner and a monopolist provide devices that help each person commit to the efficient level of flexibility. However, the combination of unobservable time inconsistency and preference for flexibility causes an adverse‐selection problem. To solve this problem, the monopolist and (possibly) the planner curtail flexibility in the device for a more inconsistent person at both ends of the efficient choice range; moreover, they may have to add unused options to the device for a less inconsistent person and also distort his actual choices. This theory has normative and positive implications for private and public provision of commitment devices.
