Similar Articles
20 similar articles retrieved.
1.
赵子夜  杨庆  杨楠 《管理科学》2019,22(3):53-70
Boilerplate (templated) reporting has been widely used across times and countries, but it presents reporters with a dilemma: on the one hand, templating helps avoid disclosure risk; on the other hand, it impedes the transmission of inside information. How, then, do investors evaluate the degree of templating in Chinese listed companies' reports? Using the text of the Management Discussion and Analysis (MD&A) of Chinese listed companies as the sample, we measure the level of templating by the vertical textual similarity between a firm's period-t and period-(t-1) reports and by the average horizontal similarity between the firm's report and other firms' reports in the same period, and we examine the economic consequences. The results show that the economic consequences of vertical templating are contingent: when a firm's financial risk is high (a loss, marginal profit, or a non-standard audit opinion), the information effect dominates and templated reports trigger a negative market reaction, whereas when financial risk is low, the risk effect dominates and templated reports are received favorably. Horizontal templating, by contrast, draws a negative market reaction overall. As for moderating effects, the economic consequences of vertical templating are affected by firm innovation, idiosyncratic information, chairman power, and the number of trading suspensions, while the consequences of horizontal templating are affected by the independent directors' positions in social networks. Taken together, the results indicate that horizontal templating of the MD&A, as well as vertical templating under high financial risk, leads to negative economic consequences because of insufficient information disclosure.
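As a rough illustration of the two templating measures described in this abstract, the sketch below computes vertical similarity (a firm's period-t versus period-(t-1) MD&A) and average horizontal similarity (the firm versus same-period peers) as TF-IDF cosine similarities. The toy texts, the helper names, and the choice of TF-IDF cosine similarity are assumptions for illustration, not the paper's exact text-similarity specification.

```python
# Hedged sketch: measure "templating" as TF-IDF cosine similarity between MD&A texts.
# The texts below are placeholders; real MD&A text would need Chinese word
# segmentation (e.g., jieba) before vectorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def vertical_similarity(mdna_t: str, mdna_t_minus_1: str) -> float:
    """Similarity between the same firm's period-t and period-(t-1) reports."""
    tfidf = TfidfVectorizer().fit_transform([mdna_t, mdna_t_minus_1])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def horizontal_similarity(mdna_t: str, peer_mdnas_t: list[str]) -> float:
    """Average similarity between the firm's report and peers' same-period reports."""
    tfidf = TfidfVectorizer().fit_transform([mdna_t] + peer_mdnas_t)
    sims = cosine_similarity(tfidf[0], tfidf[1:])[0]
    return float(sims.mean())


if __name__ == "__main__":
    firm_t = "revenue grew steadily while costs were controlled and risks managed"
    firm_t1 = "revenue grew steadily while costs were controlled and risks were stable"
    peers = [
        "revenue grew steadily and operations remained stable",
        "the company expanded into new products and overseas plants",
    ]
    print("vertical templating:", vertical_similarity(firm_t, firm_t1))
    print("horizontal templating:", horizontal_similarity(firm_t, peers))
```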

2.
In developing countries, farmers lack the information needed to make informed production and manufacturing/selling decisions that would improve their earnings. To alleviate poverty, various non‐governmental organizations (NGOs) and for‐profit companies have developed different ways to distribute information about market prices, crop advisories, and farming techniques to farmers. We investigate a fundamental question: will information create economic value for farmers? We construct a stylized model in which farmers face an uncertain market price (demand) and must make production decisions before the market price is realized. Each farmer has an imprecise private signal and an imprecise public signal to estimate the actual market price. By examining the equilibrium outcomes associated with a Cournot competition game, we show that private signals do create value by improving farmers' welfare. However, this value deteriorates as the public signal becomes available (or more precise). In contrast, in the presence of private signals, the public signal does not always create value for the farmers. Nevertheless, both private and public signals will reduce price variation. We also consider two separate extensions that involve non‐identical private signal precisions and farmers' risk‐aversion, and we find that the same results continue to hold. More importantly, we find that the public signal can reduce welfare inequality when farmers have non‐identical private signal precisions. Also, risk‐aversion can dampen the value created by private or public information.
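The welfare results above come from a Cournot equilibrium analysis. The sketch below illustrates only the underlying information structure, namely an imprecise private signal plus an imprecise public signal about an unknown price, combined by precision weighting under a normal-noise assumption. The signal noise levels and the Monte Carlo comparison are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: combining an imprecise private signal and an imprecise public
# signal about an unknown market price by precision weighting (normal-noise
# assumption). It illustrates how a more precise public signal erodes the
# informational advantage of private signals; it is not the paper's Cournot model.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000
true_price = 10.0
sigma_private, sigma_public = 2.0, 1.5   # assumed signal noise levels

private = true_price + rng.normal(0, sigma_private, n_draws)
public = true_price + rng.normal(0, sigma_public, n_draws)

# Precision-weighted combination of the two signals.
w_private = sigma_private**-2 / (sigma_private**-2 + sigma_public**-2)
combined = w_private * private + (1 - w_private) * public

for name, est in [("private only", private), ("public only", public),
                  ("combined", combined)]:
    rmse = np.sqrt(np.mean((est - true_price) ** 2))
    print(f"{name:13s} RMSE = {rmse:.3f}")
```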

3.
Should capacitated firms set prices responsively to uncertain market conditions in a competitive environment? We study a duopoly selling differentiated substitutable products with fixed capacities under demand uncertainty, where firms can either commit to a fixed price ex ante, or elect to price contingently ex post, e.g., to charge high prices in booming markets and low prices in slack markets. Interestingly, we analytically show that even for completely symmetric model primitives, asymmetric equilibria of strategic pricing decisions may arise, in which one firm commits statically and the other firm prices contingently; in this case, there also exists a unique mixed strategy equilibrium. Such equilibrium behavior tends to emerge when capacity is ampler and products are less differentiated or demand uncertainty is lower. With asymmetric fixed capacities, if demand uncertainty is low, a unique asymmetric equilibrium emerges, in which the firm with more capacity chooses committed pricing and the firm with less capacity chooses contingent pricing. We identify two countervailing profit effects of contingent pricing under competition: gains from responsively charging a high price under high demand, and losses from intensified price competition under low demand. It is the latter detrimental effect that may prevent both firms from choosing a contingent pricing strategy in equilibrium. We show that the insights remain valid when capacity decisions are endogenized. We caution that responsive price changes under aggressive competition of less differentiated products can result in profit‐killing discounting.

4.
The EO in OBM     
Abstract

Olson, Laraway, and Austin (2001) propose an increased emphasis on the establishing operation in organizational behavior management. Their proposal raises interesting questions about theory, science, and practice. (1) What should be the role of theory in behavior analysis? (2) Should we try to find problems that match our solutions, or vice versa? (3) What is the relative importance of the establishing operation and the performance-management contingency in managing organizational behavior? (4) Should theory and basic research be more informed by the issues raised in applied settings?

5.
Bruno Decreuse 《LABOUR》2002,16(4):609-633
Should we cut the level of unemployment benefits, or reduce their potential duration? The answer depends on the way unemployed workers' search behaviour and unemployment insurance schemes interact. In this paper, we consider that unemployment insurance funds can be used to improve search. Resulting hazards are increasing over the unemployment spell prior to the exhaustion of benefits, and plummet immediately after it. Turning to policy implications, we assume the public decision-maker aims to minimize the average duration of unemployment under a resource constraint. First, we show that the stationary relationship between average unemployment duration and the unemployment benefit is hump-shaped. Second, raising benefits over a short duration can reduce average duration. Finally, we demonstrate that most of the time, a declining (yet always positive) benefit scheme is optimal.

6.
Louis Anthony Cox  Jr. 《Risk analysis》2009,29(8):1062-1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.

7.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well known “identified at infinity” models, lead to estimators that converge at a slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conduct inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
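The second example in the abstract leads to an average treatment effect estimator weighted by (a variant of) the propensity score. A minimal sketch of the textbook inverse-propensity-weighted estimator on simulated data follows; the data-generating process and the logistic propensity model are assumptions for illustration. The sketch does not reproduce the paper's rate analysis; it only shows where the weights 1/p(x) and 1/(1 - p(x)) enter, which is where large weights (propensities near 0 or 1) create the irregular-identification problem discussed above.

```python
# Hedged sketch: inverse-propensity-weighted (IPW) estimate of the average
# treatment effect. When the estimated propensity score approaches 0 or 1,
# the weights blow up -- the "weights taking arbitrarily large values"
# situation discussed in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)
true_effect = 2.0
propensity = 1 / (1 + np.exp(-1.5 * x))          # assumed selection model
d = rng.binomial(1, propensity)                   # treatment indicator
y = 1.0 + true_effect * d + 0.5 * x + rng.normal(size=n)

# Estimate the propensity score with a logistic regression.
p_hat = LogisticRegression().fit(x.reshape(-1, 1), d).predict_proba(x.reshape(-1, 1))[:, 1]

# IPW estimator of the average treatment effect.
ate_ipw = np.mean(d * y / p_hat - (1 - d) * y / (1 - p_hat))
print(f"IPW ATE estimate: {ate_ipw:.3f} (true effect {true_effect})")
print(f"largest weight:   {max(1 / p_hat.min(), 1 / (1 - p_hat).min()):.1f}")
```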

8.
This paper develops a framework for performing estimation and inference in econometric models with partial identification, focusing particularly on models characterized by moment inequalities and equalities. Applications of this framework include the analysis of game‐theoretic models, revealed preference restrictions, regressions with missing and corrupted data, auction models, structural quantile regressions, and asset pricing models. Specifically, we provide estimators and confidence regions for the set of minimizers Θ_I of an econometric criterion function Q(θ). In applications, the criterion function embodies testable restrictions on economic models. A parameter value θ that describes an economic model satisfies these restrictions if Q(θ) attains its minimum at this value. Interest therefore focuses on the set of minimizers, called the identified set. We use the inversion of the sample analog, Q_n(θ), of the population criterion, Q(θ), to construct estimators and confidence regions for the identified set, and develop consistency, rates of convergence, and inference results for these estimators and regions. To derive these results, we develop methods for analyzing the asymptotic properties of sample criterion functions under set identification.
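A minimal numerical picture of the "inversion of the sample analog" idea: collect the parameter values whose sample criterion lies within a slack c_n of its minimum. The interval-data toy model, the grid, and the choice of c_n below are illustrative assumptions, not the estimators or tuning studied in the paper.

```python
# Hedged sketch: estimate an identified set by inverting a sample criterion
# function. Toy model: the outcome is only known to lie in [y_lo, y_hi], so the
# mean theta is partially identified in [E(y_lo), E(y_hi)]. The criterion and
# the slack c_n are illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
y = rng.normal(1.0, 1.0, n)
y_lo, y_hi = y - 0.5, y + 0.5          # interval-censored observations

def q_n(theta: float) -> float:
    """Sample criterion: penalize violations of mean(y_lo) <= theta <= mean(y_hi)."""
    return max(y_lo.mean() - theta, 0.0) ** 2 + max(theta - y_hi.mean(), 0.0) ** 2

grid = np.linspace(-1.0, 3.0, 801)
q_vals = np.array([q_n(t) for t in grid])
c_n = np.log(n) / n                     # assumed slack, shrinking with n

theta_set = grid[q_vals <= q_vals.min() + c_n]
print(f"estimated identified set:  [{theta_set.min():.3f}, {theta_set.max():.3f}]")
print("population identified set: [0.5, 1.5] (given the simulated design)")
```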

9.
In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of strategies that were not chosen according to the payoff they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions and an experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases, and hybridizes their key elements. When δ = 0 and ρ = 0, cumulative choice reinforcement results. When δ = 1 and ρ = φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around 0.50, φ around 0.8–1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
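The attraction-updating rule summarized above can be written compactly. The sketch below follows the standard EWA recursion (attractions discounted by φ, the experience weight discounted by ρ, unchosen strategies reinforced by δ times their hypothetical payoff, logit choice); the payoff table, the opponent's behavior, and the parameter values are illustrative assumptions.

```python
# Hedged sketch of the EWA attraction update for one player with strategies
# j = 0..J-1. Attractions A[j] are discounted by phi, the experience weight N
# by rho; unchosen strategies receive delta times their hypothetical payoff.
import numpy as np

def ewa_update(A, N, chosen, payoffs, delta, phi, rho):
    """One EWA step. `payoffs[j]` is the payoff strategy j would have earned
    against the opponents' realized play this period."""
    N_new = rho * N + 1.0
    weights = np.where(np.arange(len(A)) == chosen, 1.0, delta)
    A_new = (phi * N * A + weights * payoffs) / N_new
    return A_new, N_new

def logit_choice_probs(A, lam=1.0):
    """Logit response to current attractions."""
    z = np.exp(lam * (A - A.max()))
    return z / z.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A, N = np.zeros(2), 1.0                       # initial attractions / experience
    delta, phi, rho = 0.5, 0.9, 0.9               # roughly the estimated ranges above
    payoff_table = np.array([[3.0, 0.0],          # my payoff if I play 0 vs opponent 0/1
                             [5.0, 1.0]])         # my payoff if I play 1 vs opponent 0/1
    for t in range(50):
        probs = logit_choice_probs(A)
        my_move = rng.choice(2, p=probs)
        opp_move = rng.choice(2)                  # assumed random opponent
        A, N = ewa_update(A, N, my_move, payoff_table[:, opp_move], delta, phi, rho)
    print("final attractions:", A, "choice probs:", logit_choice_probs(A))
```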

10.
We propose a framework for out‐of‐sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest unconditional objectives (which forecast was more accurate on average?), that have been the sole focus of previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine‐tune the forecast selection to current economic conditions. To this end, we propose a two‐step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter‐reduction methods for macroeconomic forecasting using a large number of predictors.
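A minimal sketch of the conditional-evaluation idea: form the loss differential between two forecasts, interact it with conditioning instruments (here a constant and the lagged differential, an illustrative choice), and test whether its mean is zero with a Wald-type statistic. This is a schematic of the test logic under these assumptions, not the paper's exact procedure or regularity conditions.

```python
# Hedged sketch: conditional predictive ability test on the loss differential
# d_t = L(forecast A) - L(forecast B), using instruments h_t = (1, d_t) to
# predict d_{t+1}. Under the null of equal conditional predictive ability,
# E[h_t * d_{t+1}] = 0 and the Wald statistic is roughly chi-squared(q).
import numpy as np
from scipy import stats

def conditional_pa_test(loss_a, loss_b):
    d = loss_a - loss_b                                   # loss differentials
    h = np.column_stack([np.ones(len(d) - 1), d[:-1]])    # instruments at t
    z = h * d[1:, None]                                   # z_t = h_t * d_{t+1}
    zbar = z.mean(axis=0)
    omega = np.cov(z.T, bias=True)                        # simple (non-HAC) covariance
    wald = len(z) * zbar @ np.linalg.solve(omega, zbar)
    pval = 1 - stats.chi2.cdf(wald, df=h.shape[1])
    return wald, pval

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 400
    y = rng.normal(size=n)
    fcast_a = 0.0 * y                                     # "zero" forecast
    fcast_b = 0.3 + 0.0 * y                               # biased forecast
    stat, p = conditional_pa_test((y - fcast_a) ** 2, (y - fcast_b) ** 2)
    print(f"Wald stat = {stat:.2f}, p-value = {p:.4f}")
```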

11.
It is shown that an exponentially small departure from the common knowledge assumption on the number T of repetitions of the prisoners' dilemma already enables cooperation. More generally, with such a departure, any feasible individually rational outcome of any one-shot game can be approximated by a subgame perfect equilibrium of a finitely repeated version of that game. The sense in which the departure from common knowledge is small is as follows: (i) With probability one, the players know T with precision ±K. (ii) With probability 1 − ε, the players know T precisely; moreover, this knowledge is mutual of order εT. (iii) The deviation of T from its finite expectation is exponentially small.

12.
Integration is a buzzword in manufacturing strategies for global competitiveness. However, some fundamental questions have yet to be answered scientifically; namely, what is integration, why integrate, and what should be integrated? How do existing integration models approach the problems of integration and to what extent are they successful? A theory-based model of information requirements for integrated manufacturing is needed to answer these questions properly. Such a model can unify hitherto narrowly defined and domain-oriented approaches. We suggest a paradigm of parallel formulation and use it to formulate the information requirements for integration, including data and knowledge classes, decision spaces, and logical interactions.

13.
In an earlier issue of Decision Sciences, Jesse, Mitra, and Cox [1] examined the impact of inflationary conditions on the economic order quantity (EOQ) formula. Specifically, the authors analyzed the effect of inflation on order quantity decisions by means of a model that takes into account both inflationary trends and time discounting (over an infinite time horizon). In their analysis, the authors utilized two models: a current-dollars model and a constant-dollars model. These models were derived, of course, by setting up a total cost equation in the usual manner and then finding the optimum order quantity that minimizes the total cost. Jesse, Mitra, and Cox [1] found that the EOQ is approximately the same under both conditions, with or without inflation. However, we disagree with the conclusion drawn by [2] and show that the EOQ will be different under inflationary conditions, provided that the inflationary conditions are properly accounted for in the formulation of the total cost model.
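To make the comparison concrete, the sketch below numerically minimizes a discounted total-cost function with and without inflation and compares the minimizers with the classical EOQ. The cost structure (ordering, purchase, and out-of-pocket holding costs, all inflating at rate infl and discounted at nominal rate r over an infinite horizon) and all parameter values are assumptions for illustration, not the models analyzed in [1] or in this note.

```python
# Hedged sketch: numerically compare the order quantity that minimizes the
# infinite-horizon present value of costs with and without inflation against
# the classical EOQ. The cost model and parameter values are illustrative
# assumptions (all costs grow at rate infl, cash flows discounted at r > infl).
import numpy as np
from scipy.optimize import minimize_scalar

D = 1200.0   # annual demand (units/year)
K = 100.0    # fixed order cost, year-0 dollars
c = 10.0     # unit purchase price, year-0 dollars
h = 2.0      # out-of-pocket holding cost per unit per year, year-0 dollars
r = 0.12     # nominal discount rate

def present_value_cost(q: float, infl: float) -> float:
    """PV of ordering + purchase + holding costs over an infinite horizon."""
    cycle = q / D
    # geometric factor for a cash flow repeated every cycle, growing at `infl`
    geom = 1.0 / (1.0 - np.exp((infl - r) * cycle))
    order_and_purchase = (K + c * q) * geom
    holding = h * (q / 2.0) * cycle * np.exp((infl - r) * cycle / 2.0) * geom
    return order_and_purchase + holding

classical_eoq = np.sqrt(2 * D * K / h)
for infl in (0.0, 0.08):
    res = minimize_scalar(present_value_cost, bounds=(10.0, 2000.0),
                          args=(infl,), method="bounded")
    print(f"inflation {infl:.0%}: cost-minimizing Q = {res.x:7.1f} "
          f"(classical EOQ = {classical_eoq:.1f})")
```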

14.
We test the portability of level‐0 assumptions in level‐k theory in an experimental investigation of behavior in Coordination, Discoordination, and Hide and Seek games with common, non‐neutral frames. Assuming that level‐0 behavior depends only on the frame, we derive hypotheses that are independent of prior assumptions about salience. Those hypotheses are not confirmed. Our findings contrast with previous research which has fitted parameterized level‐k models to Hide and Seek data. We show that, as a criterion of successful explanation, the existence of a plausible model that replicates the main patterns in these data has a high probability of false positives.

15.
Recovery of interdependent infrastructure networks in the presence of catastrophic failure is crucial to the economy and welfare of society. Recently, centralized methods have been developed to address optimal resource allocation in postdisaster recovery scenarios of interdependent infrastructure systems that minimize total cost. In real-world systems, however, multiple independent, possibly noncooperative, utility network controllers are responsible for making recovery decisions, resulting in suboptimal decentralized processes. With the goal of minimizing recovery cost, a best-case decentralized model allows controllers to develop a full recovery plan and negotiate until all parties are satisfied (an equilibrium is reached). Such a model is computationally intensive for planning and negotiating, and time is a crucial resource in postdisaster recovery scenarios. Furthermore, in this work, we prove this best-case decentralized negotiation process could continue indefinitely under certain conditions. Accounting for network controllers' urgency in repairing their system, we propose an ad hoc sequential game-theoretic model of interdependent infrastructure network recovery represented as a discrete time noncooperative game between network controllers that is guaranteed to converge to an equilibrium. We further reduce the computation time needed to find a solution by applying a best-response heuristic and prove bounds on ε-Nash equilibrium, where ε depends on problem inputs. We compare best-case and ad hoc models on an empirical interdependent infrastructure network in the presence of simulated earthquakes to demonstrate the extent of the tradeoff between optimality and computational efficiency. Our method provides a foundation for modeling sociotechnical systems in a way that mirrors restoration processes in practice.
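The sequential best-response flavor of the model can be illustrated with a generic two-controller loop that stops when neither controller can lower its cost by more than ε. The cost functions, action sets, and stopping rule below are placeholders; the sketch shows only the iteration pattern, not the interdependent-infrastructure formulation (or the convergence guarantee) developed in the article.

```python
# Hedged sketch: sequential best-response dynamics between two network
# controllers, each choosing a crew allocation from a small discrete set. The
# cross term in the cost makes the networks interdependent. Stops at an
# epsilon-equilibrium; convergence here relies on the toy costs forming a
# potential game (the article proves convergence for its own model).
ACTIONS = range(4)                         # e.g., crews committed to a shared substation

def cost(player: int, own: int, other: int) -> float:
    """Placeholder recovery costs with competition for shared repair resources."""
    target = 3 if player == 0 else 2       # crews each network would ideally use
    return (own - target) ** 2 + 0.5 * own * other

def best_response(player: int, other_action: int) -> int:
    return min(ACTIONS, key=lambda a: cost(player, a, other_action))

def is_epsilon_equilibrium(a0: int, a1: int, eps: float) -> bool:
    return (cost(0, a0, a1) - min(cost(0, a, a1) for a in ACTIONS) <= eps and
            cost(1, a1, a0) - min(cost(1, a, a0) for a in ACTIONS) <= eps)

a0, a1, eps = 0, 0, 0.1
rounds = 0
while not is_epsilon_equilibrium(a0, a1, eps) and rounds < 100:
    a0 = best_response(0, a1)              # controllers respond in sequence
    a1 = best_response(1, a0)
    rounds += 1
print(f"epsilon-equilibrium after {rounds} round(s): a0={a0}, a1={a1}")
```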

16.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed‐critical‐value method and size‐correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post‐conservative model selection estimator.

17.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set‐valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these “models with convex moment predictions.” Examples include static, simultaneous‐move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressor data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ_I, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to efficiently verify whether a candidate θ is in Θ_I. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

18.
Louis Anthony Cox  Jr 《Risk analysis》2008,28(6):1749-1761
Several important risk analysis methods now used in setting priorities for protecting U.S. infrastructures against terrorist attacks are based on the formula: Risk = Threat × Vulnerability × Consequence. This article identifies potential limitations in such methods that can undermine their ability to guide resource allocations to effectively optimize risk reductions. After considering specific examples for the Risk Analysis and Management for Critical Asset Protection (RAMCAP) framework used by the Department of Homeland Security, we address more fundamental limitations of the product formula. These include its failure to adjust for correlations among its components, nonadditivity of risks estimated using the formula, inability to use risk‐scoring results to optimally allocate defensive resources, and intrinsic subjectivity and ambiguity of Threat, Vulnerability, and Consequence numbers. Trying to directly assess probabilities for the actions of intelligent antagonists instead of modeling how they adaptively pursue their goals in light of available information and experience can produce ambiguous or mistaken risk estimates. Recent work demonstrates that two‐level (or few‐level) hierarchical optimization models can provide a useful alternative to Risk = Threat × Vulnerability × Consequence scoring rules, and also to probabilistic risk assessment (PRA) techniques that ignore rational planning and adaptation. In such two‐level optimization models, the defender predicts the attacker's best response to the defender's own actions, and then chooses his or her own actions taking these best responses into account. Such models appear valuable as practical approaches to antiterrorism risk analysis.
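A toy version of the two-level idea: for each candidate allocation of a defense budget, compute the attacker's best response (here, attacking the target with the highest expected damage given the allocation), then pick the allocation that minimizes that best-response damage. The three targets, the exponential effect of defensive spending on vulnerability, and the budget grid are illustrative assumptions, not the article's framework or any official model.

```python
# Hedged sketch: two-level defender-attacker allocation. The defender allocates a
# budget across targets; the attacker observes the allocation and attacks the
# target with the highest expected damage; the defender minimizes that maximum.
import itertools
import numpy as np

consequence = np.array([100.0, 60.0, 30.0])   # loss if an attack on a target succeeds
base_vuln = np.array([0.9, 0.8, 0.7])         # success probability with no defense
effectiveness = np.array([0.8, 0.5, 0.4])     # how fast spending reduces vulnerability
budget, step = 10.0, 1.0

def attacker_best_response(allocation):
    """Expected damage of the attacker's best target given the defense allocation."""
    vuln = base_vuln * np.exp(-effectiveness * allocation)
    damage = vuln * consequence
    return damage.max(), int(damage.argmax())

best = None
levels = np.arange(0.0, budget + step, step)
for x0, x1 in itertools.product(levels, levels):
    x2 = budget - x0 - x1
    if x2 < 0:
        continue
    damage, target = attacker_best_response(np.array([x0, x1, x2]))
    if best is None or damage < best[0]:
        best = (damage, (x0, x1, x2), target)

print(f"defender allocation {best[1]} -> attacker hits target {best[2]}, "
      f"expected damage {best[0]:.1f}")
```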

19.
This paper studies the estimation of dynamic discrete games of incomplete information. Two main econometric issues appear in the estimation of these models: the indeterminacy problem associated with the existence of multiple equilibria and the computational burden in the solution of the game. We propose a class of pseudo maximum likelihood (PML) estimators that deals with these problems, and we study the asymptotic and finite sample properties of several estimators in this class. We first focus on two‐step PML estimators, which, although they are attractive for their computational simplicity, have some important limitations: they are seriously biased in small samples; they require consistent nonparametric estimators of players' choice probabilities in the first step, which are not always available; and they are asymptotically inefficient. Second, we show that a recursive extension of the two‐step PML, which we call nested pseudo likelihood (NPL), addresses those drawbacks at a relatively small additional computational cost. The NPL estimator is particularly useful in applications where consistent nonparametric estimates of choice probabilities either are not available or are very imprecise, e.g., models with permanent unobserved heterogeneity. Finally, we illustrate these methods in Monte Carlo experiments and in an empirical application to a model of firm entry and exit in oligopoly markets using Chilean data from several retail industries.

20.
《Risk analysis》2018,38(4):804-825
Economic consequence analysis is one of many inputs to terrorism contingency planning. Computable general equilibrium (CGE) models are being used more frequently in these analyses, in part because of their capacity to accommodate high levels of event‐specific detail. In modeling the potential economic effects of a hypothetical terrorist event, two broad sets of shocks are required: (1) physical impacts on observable variables (e.g., asset damage); (2) behavioral impacts on unobservable variables (e.g., investor uncertainty). Assembling shocks describing the physical impacts of a terrorist incident is relatively straightforward, since estimates are either readily available or plausibly inferred. However, assembling shocks describing behavioral impacts is more difficult. Values for behavioral variables (e.g., required rates of return) are typically inferred or estimated by indirect means. Generally, this has been achieved via reference to extraneous literature or ex ante surveys. This article explores a new method. We elucidate the magnitude of CGE‐relevant structural shifts implicit in econometric evidence on terrorist incidents, with a view to informing future ex ante event assessments. Ex post econometric studies of terrorism by Blomberg et al. yield macroeconometric equations that describe the response of observable economic variables (e.g., GDP growth) to terrorist incidents. We use these equations to determine estimates for relevant (unobservable) structural and policy variables impacted by terrorist incidents, using a CGE model of the United States. This allows us to: (i) compare values for these shifts with input assumptions in earlier ex ante CGE studies; and (ii) discuss how future ex ante studies can be informed by our analysis.
