Similar Articles
20 similar articles found
1.
Organizational decision making is dominated by teams. When an important decision is required, a team is often formed to make it or to advise the individual decision maker, because a team has more resources, knowledge, and political insight than any one individual working alone. As teams have become geographically distributed, collaboration technology has come to play an important role in such collective decision making efforts. Instant messaging (IM) is an increasingly prevalent workplace collaboration technology that enables near‐synchronous text exchanges on a variety of devices. We examined the use of IM during face‐to‐face, telephone, and computer‐mediated team meetings, a practice we call “invisible whispering.” We introduce Goffman's characterization of social interaction as dramatic performance, differentiable into “front stage” and “backstage” exchanges, to analyze how invisible whispering alters the socio‐spatial and temporal boundaries of team decision making. Using IM, workers were able to influence front stage decision making through backstage conversations, often participating in multiple backstage conversations simultaneously. This type of interaction would be either physically impossible or socially constrained without the use of IM. We examine how invisible whispering changes the processes of collaborative decision making and how these new processes may affect the efficiency and effectiveness of collaborative decision making, as well as participation, satisfaction, relationships among team members, and individual attention.

2.
This paper studies regulated health insurance markets known as exchanges, motivated by the increasingly important role they play in both public and private insurance provision. We develop a framework that combines data on health outcomes and insurance plan choices for a population of insured individuals with a model of a competitive insurance exchange to predict outcomes under different exchange designs. We apply this framework to examine the effects of regulations that govern insurers' ability to use health status information in pricing. We investigate the welfare implications of these regulations with an emphasis on two potential sources of inefficiency: (i) adverse selection and (ii) premium reclassification risk. We find substantial adverse selection leading to full unraveling of our simulated exchange, even when age can be priced. While the welfare cost of adverse selection is substantial when health status cannot be priced, that of reclassification risk is five times larger when insurers can price based on some health status information. We investigate several extensions including (i) contract design regulation, (ii) self‐insurance through saving and borrowing, and (iii) insurer risk adjustment transfers.

3.
We introduce the class of conditional linear combination tests, which reject null hypotheses concerning model parameters when a data‐dependent convex combination of two identification‐robust statistics is large. These tests control size under weak identification and have a number of optimality properties in a conditional problem. We show that the conditional likelihood ratio test of Moreira (2003) is a conditional linear combination test in models with one endogenous regressor, and that the class of conditional linear combination tests is equivalent to a class of quasi‐conditional likelihood ratio tests. We suggest using minimax regret conditional linear combination tests and propose a computationally tractable class of tests that plug in an estimator for a nuisance parameter. These plug‐in tests perform well in simulation and have optimal power in many strongly identified models, thus allowing powerful identification‐robust inference in a wide range of linear and nonlinear models without sacrificing efficiency if identification is strong.
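The core construction can be illustrated with a minimal sketch: reject when a data‐dependent convex combination of two identification‐robust statistics exceeds a (conditional) critical value. The statistic names, the fixed weight, and the critical value below are placeholders, not the paper's actual minimax‐regret weighting or conditioning procedure.

```python
import numpy as np

def linear_combination_stat(s_stat, k_stat, weight):
    """Convex combination of two identification-robust statistics.

    s_stat, k_stat: e.g., values of two robust test statistics
    (hypothetical inputs here); weight: a weight in [0, 1] that in
    the paper is chosen in a data-dependent, conditional way.
    """
    assert 0.0 <= weight <= 1.0
    return weight * s_stat + (1.0 - weight) * k_stat

def reject(s_stat, k_stat, weight, critical_value):
    # Reject the null when the combined statistic exceeds the
    # (conditional) critical value.
    return linear_combination_stat(s_stat, k_stat, weight) > critical_value
```

With weight 0 or 1 this nests each statistic's own test; the paper's contribution lies in how the weight and critical value are chosen, which this sketch deliberately leaves abstract.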

4.
In order to rescue information technology (IT) projects when they go awry, it is critical to understand the factors that affect bad news reporting. Whistleblowing theory holds promise in this regard and a number of salient factors that may influence whistleblowing intentions have been identified. However, an integrative theory that explains how they influence whistleblowing intentions has been conspicuously absent. In this research, we introduce and test a middle‐range theory of whistleblowing that can explain how and why a variety of factors may influence an individual's whistleblowing intentions. Drawing on the social information processing perspective, we propose that individuals holistically weigh the perceived “benefit‐to‐cost differential” and that this mediates the relationship between whistleblowing factors and whistleblowing intentions. Tests using data collected from 159 experienced IT project managers largely support our theoretical perspective, in which the central explanatory variable (benefit‐to‐cost differential) significantly mediates a majority of the proposed relationships. Implications of these findings for research and practice are discussed.

5.
This paper considers mechanism design problems in environments with ambiguity‐sensitive individuals. The novel idea is to introduce ambiguity in mechanisms so as to exploit the ambiguity sensitivity of individuals. Deliberate engineering of ambiguity, through ambiguous mediated communication, can allow (partial) implementation of social choice functions that are not incentive compatible with respect to prior beliefs. We provide a complete characterization of social choice functions partially implementable by ambiguous mechanisms.

6.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one‐dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two‐step estimators.
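For contrast, here is a minimal sketch of the standard nonparametric bootstrap the paper seeks to sidestep: every replication re‐solves the full estimation problem, which is exactly what makes the procedure slow for complicated models. The estimator callback and replication count are illustrative, not part of the paper's method.

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=200, seed=0):
    """Standard nonparametric bootstrap standard error.

    `estimator` maps a dataset (1-D array) to a scalar estimate; the
    bootstrap re-runs this full estimation once per resample, which is
    the computational cost the paper's alternative avoids.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        draws.append(estimator(data[idx])) # full re-estimation each time
    return float(np.std(np.asarray(draws), ddof=1))
```

For a cheap estimator like a sample mean this loop is trivial; the paper's point is that when each `estimator` call is itself a heavy optimization, the loop becomes the bottleneck.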

7.
Both aristocratic privileges and constitutional constraints in traditional monarchies can be derived from a ruler's incentive to minimize expected costs of moral‐hazard rents for high officials. We consider a dynamic moral‐hazard model of governors serving a sovereign prince, who must deter them from rebellion and hidden corruption which could cause costly crises. To minimize costs, a governor's rewards for good performance should be deferred up to the maximal credit that the prince can be trusted to pay. In the long run, we find that high officials can become an entrenched aristocracy with low turnover and large claims on the ruler. Dismissals for bad performance should be randomized to avoid inciting rebellions, but the prince can profit from reselling vacant offices, and so his decisions to dismiss high officials require institutionalized monitoring. A soft budget constraint that forgives losses for low‐credit governors can become efficient when costs of corruption are low.

8.
We develop a continuum player timing game that subsumes standard wars of attrition and pre‐emption games, and introduces a new rushes phenomenon. Payoffs are continuous and single‐peaked functions of the stopping time and stopping quantile. We show that if payoffs are hump‐shaped in the quantile, then a sudden “rush” of players stops in any Nash or subgame perfect equilibrium. Fear relaxes the first mover advantage in pre‐emption games, asking that the least quantile beat the average; greed relaxes the last mover advantage in wars of attrition, asking just that the last quantile payoff exceed the average. With greed, play is inefficiently late: an accelerating war of attrition starting at the optimal time, followed by a rush. With fear, play is inefficiently early: a slowing pre‐emption game, ending at the optimal time, preceded by a rush. The theory predicts the length, duration, and intensity of stopping, and the size and timing of rushes, and offers insights for many common timing games.

9.
We develop an extension of Luce's random choice model to study violations of the weak axiom of revealed preference. We introduce the notion of a stochastic preference and show that it implies the Luce model. Then, to address well‐known difficulties of the Luce model, we define the attribute rule and establish that the existence of a well‐defined stochastic preference over attributes characterizes it. We prove that the set of attribute rules and random utility maximizers are essentially the same. Finally, we show that both the Luce and attribute rules have a unique consistent extension to dynamic problems.
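Luce's rule itself is simple to state: each alternative is chosen with probability proportional to a positive utility weight. A minimal sketch, with made‐up weights:

```python
def luce_choice_probs(utilities):
    """Luce's random choice rule: P(a | A) = u(a) / sum over b in A of u(b),
    for strictly positive utility weights u. `utilities` maps each
    alternative in the menu A to its weight."""
    total = sum(utilities.values())
    assert total > 0, "Luce weights must be strictly positive"
    return {a: u / total for a, u in utilities.items()}
```

One well‐known difficulty this makes visible is independence of irrelevant alternatives: the ratio of any two choice probabilities is fixed by their weights alone, regardless of what else is in the menu, which is the kind of restriction the paper's attribute rule is designed to relax.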

10.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

11.
EXcess Idle Time
We introduce a novel economic indicator, named excess idle time (EXIT), measuring the extent of sluggishness in financial prices. Under a null and an alternative hypothesis grounded in no‐arbitrage (the null) and market microstructure (the alternative) theories of price determination, we derive a limit theory for EXIT leading to formal tests for staleness in the price adjustments. Empirical implementation of the theory indicates that financial prices are often more sluggish than implied by the (ubiquitous, in frictionless continuous‐time asset pricing) semimartingale assumption. EXIT is interpretable as an illiquidity proxy and is easily implementable, for each trading day, using transaction prices only. By using EXIT, we show how to estimate structurally market microstructure models with asymmetric information.
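As rough intuition only (not the paper's formal estimator or limit theory), staleness can be proxied by the share of observation intervals in which the transaction price does not move; the tolerance parameter below is a made‐up simplification:

```python
import numpy as np

def idle_time_share(prices, tol=0.0):
    """Share of observation intervals with (essentially) no price change.

    A crude staleness proxy in the spirit of EXIT: `prices` is a
    sequence of intraday transaction prices, and an interval counts as
    idle when the absolute price move is at most `tol`. The paper's
    actual indicator and its asymptotic theory are more involved.
    """
    p = np.asarray(prices, dtype=float)
    moves = np.abs(np.diff(p))
    return float(np.mean(moves <= tol))
```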

12.
13.
Even though it is widely acknowledged that collaboration underlies much of the decision‐making efforts in contemporary organizations, and that organizational groups are increasingly making decisions that have ethical implications, few studies have examined group ethical decision‐making processes and outcomes. In addition, while there is increasing evidence that groups often collaborate/communicate using different mediating technologies, few studies have examined the effect of the characteristics of the media in group ethical decision‐making contexts. Finally, there is a clear paucity of studies that have investigated group decision making pertaining to information technology (IT)‐related ethical dilemmas, an area of rising importance for information systems (IS) and decision science researchers. This article seeks to address the gaps described above through an experimental study where groups collaborating either in a face‐to‐face context or in a computer‐mediated context (using NetMeeting or Wiki) were required to make a decision with respect to a scenario with an IT‐related ethical dilemma. Results indicate that media characteristics (e.g., anonymity, immediacy of feedback, parallelism) do not have an effect on whether groups make ethical (or unethical) decisions. However, several media characteristics were found to play a significant role on downstream variables, such as the quality of a follow‐up task (i.e., creation of a decision justification document), and overall process satisfaction of the group members.

14.
We study the identification through instruments of a nonseparable function that relates a continuous outcome to a continuous endogenous variable. Using group and dynamical systems theories, we show that full identification can be achieved under strong exogeneity of the instrument and a dual monotonicity condition, even if the instrument is discrete. When identified, the model is also testable. Our results therefore highlight the identifying power of strong exogeneity when combined with monotonicity restrictions.

15.
We develop an equilibrium framework that relaxes the standard assumption that people have a correctly specified view of their environment. Each player is characterized by a (possibly misspecified) subjective model, which describes the set of feasible beliefs over payoff‐relevant consequences as a function of actions. We introduce the notion of a Berk–Nash equilibrium: Each player follows a strategy that is optimal given her belief, and her belief is restricted to be the best fit among the set of beliefs she considers possible. The notion of best fit is formalized in terms of minimizing the Kullback–Leibler divergence, which is endogenous and depends on the equilibrium strategy profile. Standard solution concepts such as Nash equilibrium and self‐confirming equilibrium constitute special cases where players have correctly specified models. We provide a learning foundation for Berk–Nash equilibrium by extending and combining results from the statistics literature on misspecified learning and the economics literature on learning in games.
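The best‐fit restriction can be made concrete: among the consequence distributions a player's subjective model allows, the equilibrium belief must minimize Kullback–Leibler divergence from the true distribution induced by the strategy profile. A minimal finite sketch with hypothetical inputs:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, with q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def best_fit_belief(true_dist, candidate_dists):
    """Index of the candidate (subjective-model) consequence distribution
    closest in KL divergence to the true one -- the 'best fit' that
    restricts beliefs in a Berk-Nash equilibrium. In the paper the true
    distribution itself depends on the equilibrium strategy profile."""
    divs = [kl_divergence(true_dist, q) for q in candidate_dists]
    return int(np.argmin(divs))
```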

16.
We propose a novel technique to boost the power of testing a high‐dimensional vector H₀ : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high‐dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component,” which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing the factor pricing models and validating the cross‐sectional independence in panel data models.
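The screening construction can be sketched as follows: add to a pivotal statistic J0 a component J1 that sums squared standardized estimates only over components exceeding a high threshold, so that J1 vanishes with high probability under the null but diverges under sparse alternatives. The inputs below are illustrative, not the paper's exact thresholding rule:

```python
import numpy as np

def power_enhanced_stat(theta_hat, se, pivotal_stat, delta):
    """Power-enhanced statistic J = J0 + J1 in the spirit of the
    screening construction: J0 (`pivotal_stat`) is an asymptotically
    pivotal statistic, and J1 screens in only components whose
    standardized estimates exceed a high threshold `delta`."""
    t = np.asarray(theta_hat, float) / np.asarray(se, float)
    j1 = float(np.sum(t[np.abs(t) > delta] ** 2))  # 0 w.h.p. under the null
    return pivotal_stat + j1
```

Because J1 is zero with high probability under the null, critical values can be taken from the pivotal statistic alone, which is the source of the claim that the null distribution is completely determined by J0.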

17.
The theory of continuous time games (Simon and Stinchcombe (1989), Bergin and MacLeod (1993)) shows that continuous time interactions can generate very different equilibrium behavior than conventional discrete time interactions. We introduce new laboratory methods that allow us to eliminate natural inertia in subjects' decisions in continuous time experiments, thereby satisfying critical premises of the theory and enabling a first‐time direct test. Applying these new methods to a simple timing game, we find strikingly large gaps in behavior between discrete and continuous time as the theory suggests. Reintroducing natural inertia into these games causes continuous time behavior to collapse to discrete time‐like levels in some settings as predicted by subgame perfect Nash equilibrium. However, contra this prediction, the strength of this effect is fundamentally shaped by the severity of inertia: behavior tends towards discrete time benchmarks as inertia grows large and perfectly continuous time benchmarks as it falls towards zero. We provide evidence that these results are due to changes in the nature of strategic uncertainty as inertia approaches the continuous limit.

18.
The past forty years have seen a rapid rise in top income inequality in the United States. While there are many existing theories of the Pareto tail of the long‐run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence” that may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high‐growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
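The "type dependence" deviation can be illustrated with a toy random‐growth simulation in which a small fraction of agents draws a higher mean growth rate; all parameter values below are made up and the exercise is purely illustrative, not the paper's calibration:

```python
import numpy as np

def simulate_top_share(n=100_000, periods=30, p_high=0.01,
                       mu_low=0.02, mu_high=0.10, sigma=0.2, seed=0):
    """Toy random-growth model with 'type dependence': a fraction
    p_high of agents has a higher mean log-income growth rate.
    Returns the income share of the top 1% after `periods` steps.
    Illustrative only; all parameter values are invented."""
    rng = np.random.default_rng(seed)
    # Assign each agent a growth type once, up front.
    mu = np.where(rng.random(n) < p_high, mu_high, mu_low)
    log_income = np.zeros(n)
    for _ in range(periods):
        log_income += mu + sigma * rng.standard_normal(n)
    income = np.exp(log_income)
    cutoff = np.quantile(income, 0.99)
    return float(income[income >= cutoff].sum() / income.sum())
```

Raising `p_high` or `mu_high` fattens the upper tail faster, which is the mechanism's appeal: it can move top shares quickly, unlike the slow transition dynamics of the canonical random‐growth model.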

19.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. 
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
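The orthogonal‐moment idea behind these results can be sketched in its simplest partialling‐out form: residualize both the outcome and the treatment on the controls with any first‐stage learner, then regress residual on residual. The sketch below takes the first‐stage fitted values as given and omits the cross‐fitting used in practice:

```python
import numpy as np

def partialling_out_effect(y, d, x_fit_y, x_fit_d):
    """Orthogonalized ('doubly robust' flavored) estimate of a linear
    treatment effect: subtract first-stage fitted values of the outcome
    y and treatment d on the controls (from any ML learner), then
    regress the outcome residual on the treatment residual. The moment
    condition is insensitive to small first-stage errors, which is the
    key to honest post-regularization inference."""
    ry = np.asarray(y, float) - np.asarray(x_fit_y, float)
    rd = np.asarray(d, float) - np.asarray(x_fit_d, float)
    return float(ry @ rd / (rd @ rd))
```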

20.
A firm's distribution channels represent a key portfolio of resources that can be leveraged for competitive advantage. One approach to this portfolio that has become increasingly important in recent years is multichannel distribution (MCD). While this strategy has important benefits in terms of market coverage and firm performance, the use of multiple channels seriously affects downstream channel roles such as service delivery, as the financial rewards to channel members and the services they offer are separated. A channel member who offers poor or no service can free‐ride on the services offered to the same customer from a different channel. We draw on agency theory to explain these negative consequences. Additionally, the resource‐based view of the firm along with capabilities theory provides two key means of alleviating these consequences: channel tracking capabilities and reward alignment capabilities. The study, conducted in an industry facing serious MCD issues (the outdoor sporting goods industry), used key informant data matched to secondary data. Our results show that managers can reap the performance rewards of MCD strategies while minimizing its negative consequences. In particular, monitoring practices such as frequent site visits and phone contact with customers develop the firm's channel tracking capabilities, allowing managers to better monitor downstream activities. This becomes particularly important as the complexity from having multiple channels increases. Likewise, reward alignment capabilities such as retail price maintenance agreements and cooperative advertising enable the manager to minimize conflict among channel participants by ensuring sufficient profitability for all channel members.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号