Similar Documents
20 similar documents found.
1.
This paper examines cognitive considerations in developing model management systems (MMSs). First, two approaches to MMS design are reviewed briefly: one based on database theory and one based on knowledge-representation techniques. Then three major cognitive issues—human limitations, information storage and retrieval, and problem-solving strategies—and their implications for MMS design are discussed. Evidence indicates that automatic modeling, which generates more complicated models by integrating existing models automatically, is a critical function of model management systems. In order to discuss issues pertinent to automatic modeling, a graph-based framework for integrating models is introduced. The framework captures some aspects of the processes by which human beings develop models as route selections on a network of all possible alternatives. Based on this framework, three issues are investigated: (1) What are proper criteria for evaluating a model formulated by an MMS? (2) If more than one criterion is chosen for evaluation, how can evaluations on each of the criteria be combined to get an overall evaluation of the model? (3) When should a model be evaluated? Finally, examples are presented to illustrate various modeling strategies.
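The framing of model formulation as route selection on a network invites a small illustration. The sketch below is not the paper's algorithm; it simply treats each existing model as an edge from an input type to an output type and finds a composition by breadth-first search. All model names and types are hypothetical.

```python
from collections import deque

# Hypothetical model library: each model maps an input type to an output type.
models = {
    "demand_forecast": ("sales_history", "demand"),
    "production_plan": ("demand", "schedule"),
    "cost_estimator":  ("schedule", "cost"),
}

def compose(source, target):
    """Breadth-first search for a chain of models turning `source` into `target`."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for name, (inp, out) in models.items():
            if inp == state and out not in seen:
                seen.add(out)
                queue.append((out, path + [name]))
    return None  # no integrated model exists

print(compose("sales_history", "cost"))
# ['demand_forecast', 'production_plan', 'cost_estimator']
```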

2.
Choice models and neural networks are two approaches used in modeling selection decisions. Defining model performance as the out‐of‐sample prediction power of a model, we test two hypotheses: (i) choice models and neural network models are equal in performance, and (ii) hybrid models consisting of a combination of choice and neural network models perform better than each stand‐alone model. We perform statistical tests for two classes of linear and nonlinear hybrid models and compute the empirical integrated rank (EIR) indices to compare the overall performances of the models. We test the above hypotheses by using data for various brand and store choices for three consumer products. Extensive jackknifing and out‐of‐sample tests for four different model specifications are applied for increasing the external validity of the results. Our results show that using neural networks has a higher probability of resulting in a better performance. Our findings also indicate that hybrid models outperform stand‐alone models, in that using hybrid models guarantees overall results equal to or better than the two stand‐alone models. The improvement is particularly significant in cases where neither of the two stand‐alone models is very accurate in prediction, indicating that the proposed hybrid models may capture aspects of predictive accuracy that neither stand‐alone model is capable of on its own. Our results are particularly important in brand management and customer relationship management, indicating that multiple technologies and mixtures of technologies may yield more accurate and reliable outcomes than individual ones.
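The abstract does not spell out the hybrid specifications, so the sketch below shows one plausible linear hybrid: a convex combination of a logit-style choice model and a small neural network, with the mixing weight chosen on held-out data. The synthetic data, the scikit-learn estimators, and the mixing scheme are all illustrative assumptions, not the paper's exact models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # synthetic choice attributes
y = (X[:, 0] + np.tanh(X[:, 1]) + rng.normal(size=1000) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

choice = LogisticRegression().fit(X_tr, y_tr)       # stand-in for a choice model
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

p_c = choice.predict_proba(X_val)[:, 1]
p_n = nn.predict_proba(X_val)[:, 1]

# Linear hybrid: pick the mixing weight that minimizes validation log loss.
alphas = np.linspace(0, 1, 21)
best = min(alphas, key=lambda a: log_loss(y_val, a * p_c + (1 - a) * p_n))
print(f"best mixing weight alpha = {best:.2f}")
```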

3.
There has been an increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.
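As a minimal illustration of how Monte Carlo simulation propagates parameter variability through a kinetic model, the sketch below uses a generic one-compartment model, far simpler than the PBPK models compared in the paper; the dose and the parameter distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 24, 49)           # hours
dose = 10.0                          # mg (hypothetical)

# Lognormal variability around hypothetical population means.
n = 5000
k = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)   # elimination rate, 1/h
V = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)  # volume of distribution, L

# One-compartment model: C(t) = (dose / V) * exp(-k * t), one row per draw.
C = (dose / V)[:, None] * np.exp(-np.outer(k, t))

lo, med, hi = np.percentile(C, [2.5, 50, 97.5], axis=0)
print(f"median C at 4 h: {med[t == 4][0]:.3f} mg/L "
      f"(95% interval {lo[t == 4][0]:.3f}-{hi[t == 4][0]:.3f})")
```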

4.
JE Samouilidis, Omega, 1980, 8(6):609-621
The Arab oil embargo in 1973 and the subsequent price rises and production restrictions have given birth to a distinct branch within Management Science: energy modelling. This paper gives a critical and selective review of energy modelling, an industry which, though thriving in an era of general economic anxiety, is showing signs of arrogant immaturity. After giving a historical background, the paper classifies energy models into three groups: open loop demand or supply models; energy closed loop models; and energy-economy closed loop models. For each group the problem area is analysed and some illustrative examples are described. In the last sections, an attempt is made to sum up the experience that has been gained with energy modelling: the basic deficiencies, the impact of this activity on policy formulation, and its position within Management Science. It is concluded that energy models, though very poor forecasting devices, can be very useful to policy makers as tools for analysis; energy model developers must convince potential model users, and for that purpose they can benefit immensely from the 35-year-long experience accumulated by their colleagues in Management Science.

5.
This paper examines the abilities of learning models to describe subject behavior in experiments. A new experiment involving multistage asymmetric‐information games is conducted, and the experimental data are compared with the predictions of Nash equilibrium and two types of learning model: a reinforcement‐based model similar to that used by Roth and Erev (1995), and belief‐based models similar to the ‘cautious fictitious play’ of Fudenberg and Levine (1995, 1998). These models make qualitatively similar predictions: cycling around the Nash equilibrium that is much more apparent than movement toward it. While subject behavior is not adequately described by Nash equilibrium, it is consistent with the qualitative predictions of the learning models. We examine several criteria for quantitatively comparing the predictions of alternative models. According to almost all of these criteria, both types of learning model outperform Nash equilibrium. According to some criteria, the reinforcement‐based model performs better than any version of the belief‐based model; according to others, there exist versions of the belief‐based model that outperform the reinforcement‐based model. The abilities of these models are further tested against the results of other published experiments. The relative performance of the two learning models depends on the experiment, and varies according to which criterion of success is used. Again, both models perform better than equilibrium in most cases.
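The basic Roth–Erev reinforcement rule is compact enough to show directly. The sketch below implements the textbook version (propensities grow by realized payoff; choice probabilities are proportional to propensities) against a uniformly random opponent; the 2x2 payoffs are hypothetical and this is not the exact parameterization tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
payoff = np.array([[3.0, 0.0],      # hypothetical row-player payoffs
                   [1.0, 2.0]])     # for actions 0/1 vs. opponent actions 0/1

q = np.ones(2)                      # initial propensities
history = []
for _ in range(500):
    p = q / q.sum()                 # choice probabilities ~ propensities
    a = rng.choice(2, p=p)
    b = rng.choice(2)               # opponent plays uniformly at random here
    q[a] += payoff[a, b]            # reinforce the chosen action by its payoff
    history.append(p[0])

print(f"long-run probability of action 0: {history[-1]:.3f}")
```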

6.
What on earth are economic theorists like me trying to accomplish? This paper discusses four dilemmas encountered by an economic theorist: The dilemma of absurd conclusions: Should we abandon a model if it produces absurd conclusions or should we regard a model as a very limited set of assumptions that will inevitably fail in some contexts? The dilemma of responding to evidence: Should our models be judged according to experimental results? The dilemma of modelless regularities: Should models provide the hypothesis for testing or are they simply exercises in logic that have no use in identifying regularities? The dilemma of relevance: Do we have the right to offer advice or to make statements that are intended to influence the real world?

7.
Rios J, Rios Insua D. Risk Analysis, 2012, 32(5):894-915
Recent large-scale terrorist attacks have raised interest in models for resource allocation against terrorist threats. The unifying theme in this area is the need to develop methods for the analysis of allocation decisions when risks stem from the intentional actions of intelligent adversaries. Most approaches to these problems have a game-theoretic flavor, although there are also several interesting decision-analytic proposals. One of them is the recently introduced framework for adversarial risk analysis, which deals with decision-making problems that involve intelligent opponents and uncertain outcomes. We explore how adversarial risk analysis addresses some standard counterterrorism models: simultaneous defend-attack models, sequential defend-attack-defend models, and sequential defend-attack models with private information. For each model, we first assess critically what would be a typical game-theoretic approach and then provide the corresponding solution proposed by the adversarial risk analysis framework, emphasizing how to coherently assess a predictive probability model of the adversary's actions, in a context in which we aim at supporting the decisions of a defender versus an attacker. This illustrates the application of adversarial risk analysis to basic counterterrorism models that may be used as building blocks for more complex risk analyses of counterterrorism problems.
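To make the sequential defend-attack logic concrete, here is a minimal expected-utility rollback for a defender who holds a predictive probability model of the attacker's action under each defense. The options, probabilities, and utilities are hypothetical toy values, not taken from the paper.

```python
# Defender options, with a hypothetical predictive model p(attack | defense),
# success probabilities p(success | defense, attack), and defender utilities.
defenses = ["harden", "status_quo"]
p_attack = {"harden": 0.3, "status_quo": 0.7}      # P(attacker attacks | d)
p_success = {"harden": 0.2, "status_quo": 0.6}     # P(attack succeeds | d, attack)
u = {"no_attack": 1.0, "attack_fails": 0.8, "attack_succeeds": 0.0}
cost = {"harden": 0.1, "status_quo": 0.0}          # utility cost of the defense

def expected_utility(d):
    """Roll back the defender's decision tree for defense d."""
    pa, ps = p_attack[d], p_success[d]
    eu = (1 - pa) * u["no_attack"] \
         + pa * (ps * u["attack_succeeds"] + (1 - ps) * u["attack_fails"])
    return eu - cost[d]

best = max(defenses, key=expected_utility)
print({d: round(expected_utility(d), 3) for d in defenses}, "->", best)
```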

8.
Dimensions of Risk Perception for Financial and Health Risks   总被引:1,自引:0,他引:1  
This study of 29 MBA students compares two models of risk perception for both financial and health risk stimuli. The first, inspired by Luce and Weber's Conjoint Expected Risk (CER) model, uses five dimensions: probabilities of gain, loss, and status quo, and expected benefit and harm. The second, inspired by the Slovic et al. psychometric model, employs seven dimensions: voluntariness, dread, control, knowledge, catastrophic potential, novelty, and equity. The CER-type model provided a better fit for most subjects and stimuli. Adding the psychological risk dimensions from the Slovic et al. model explained only modestly more variance. Relationships between the dimensions of the two models are described and the construction of a hybrid model explored.

9.
Experimental animal studies often serve as the basis for predicting the risk of adverse responses in humans exposed to occupational hazards. A statistical model is applied to exposure-response data, and this fitted model may be used to obtain estimates of the exposure associated with a specified level of adverse response. Unfortunately, a number of different statistical models are candidates for fitting the data and may result in wide-ranging estimates of risk. Bayesian model averaging (BMA) offers a strategy for addressing uncertainty in the selection of statistical models when generating risk estimates. This strategy is illustrated with two examples: applying the multistage model to cancer responses, and a second example where different quantal models are fit to kidney lesion data. BMA provides excess risk estimates or benchmark dose estimates that reflect model uncertainty.
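A hedged sketch of the idea follows: fit two candidate quantal models (a one-stage multistage model and a logistic model) by maximum likelihood, weight them using the BIC approximation to posterior model probabilities, and average the risk estimates. The data are hypothetical, and a full BMA analysis would use proper priors and a richer model set.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical quantal data: dose, number tested, number responding.
dose = np.array([0.0, 1.0, 3.0, 10.0])
n = np.array([50, 50, 50, 50])
x = np.array([1, 4, 12, 30])

def nll(p_fun):
    """Binomial negative log-likelihood for a dose-response function."""
    def f(theta):
        p = np.clip(p_fun(theta, dose), 1e-9, 1 - 1e-9)
        return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))
    return f

# Two candidate models: one-stage (multistage with k=1) and logistic.
one_stage = lambda th, d: 1 - np.exp(-(abs(th[0]) + abs(th[1]) * d))
logistic = lambda th, d: expit(th[0] + th[1] * d)

fits, bics = [], []
for p_fun, k, x0 in [(one_stage, 2, [0.05, 0.1]), (logistic, 2, [-3.0, 0.3])]:
    res = minimize(nll(p_fun), x0, method="Nelder-Mead")
    fits.append((p_fun, res.x))
    bics.append(k * np.log(n.sum()) + 2 * res.fun)   # BIC = k ln N - 2 ln L

# BMA weights via the BIC approximation to posterior model probabilities.
b = np.array(bics)
w = np.exp(-(b - b.min()) / 2)
w /= w.sum()

d0 = 0.5                                             # dose of interest
risk = sum(wi * p_fun(th, d0) for wi, (p_fun, th) in zip(w, fits))
print(f"model weights {np.round(w, 3)}, averaged P(response at d={d0}) = {risk:.4f}")
```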

10.
This paper explores the significance of contemporary celebrity businesswomen as role models for women aspiring to leadership in business. We explore the kind of gendered ideals they model and promote to women through their autobiographical narratives, and analyse how these ideals map against a contemporary postfeminist sensibility to further understand the potential of these role models to redress the under‐representation of women in management and leadership. Our findings show that celebrity businesswomen present a role model that we call the ‘female hero’, a figure characterized by three Cs: confidence to jump over gendered barriers; control in managing these barriers; and courage to push through them. We argue that the ‘female hero’ role model is deeply embedded in the contemporary postfeminist sensibility; it offers exclusively individualized solutions to inequality by calling on women to change themselves to succeed, and therefore has limited capacity to challenge the current gendered status quo in management and leadership. The paper contributes to the current literature on role models by generating a more differentiated and socially situated understanding of distant female role models in business and by extending our understanding of their potential to generate sustainable, long‐term gendered change in management and leadership.

11.
Several authors have developed models for the EOQ when only a percentage of stockouts will be backordered. Most of these models are complicated, with equations unlike those for the EOQ with full backordering. In this paper we extend the work of Pentico and Drake [The deterministic EOQ with partial backordering: a new approach. European Journal of Operational Research 2008; in press], which developed equations for the EOQ with partial backordering that are more like those for the EOQ with full backordering, to develop a comparable model for the EPQ with partial backordering.
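For context, the full-backordering results that the Pentico–Drake equations resemble are textbook formulas, shown in the sketch below; the partial-backordering and EPQ models themselves are in the cited papers and are not reproduced here. Parameter values are hypothetical.

```python
from math import sqrt

def eoq_full_backordering(D, K, h, b):
    """Textbook EOQ with full backordering.
    D: demand/yr, K: fixed order cost, h: holding cost/unit/yr,
    b: backorder cost/unit/yr. Returns (order quantity, max backorder level)."""
    Q = sqrt(2 * D * K / h) * sqrt((h + b) / b)
    S = Q * h / (h + b)   # maximum backorder level just before an order arrives
    return Q, S

# Hypothetical parameter values.
Q, S = eoq_full_backordering(D=1200, K=50, h=2.0, b=8.0)
print(f"Q* = {Q:.1f} units, maximum backorder S* = {S:.1f} units")
```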

12.
Estimates of dermal dose from exposures to toxic chemicals are typically derived using models that assume instantaneous establishment of steady-state dermal mass flux. However, dermal absorption theory indicates that this assumption is invalid for short-term exposures to volatile organic chemicals (VOCs). A generalized distributed parameter physiologically-based pharmacokinetic model (DP-PBPK), which describes unsteady-state dermal mass flux via a partial differential equation (Fickian diffusion), has been developed for inhalation and dermal absorption of VOCs. In the present study, the DP-PBPK model has been parameterized for chloroform and compared with two simpler PBPK models of chloroform. The latter are lumped parameter models, employing ordinary differential equations, that represent the skin by permeability coefficients and therefore do not account for the dermal absorption time lag associated with the accumulation of permeant chemical in tissue. All three models were evaluated by comparing simulated post-exposure exhaled breath concentration profiles with measured concentrations following environmental chloroform exposures. The DP-PBPK model predicted a time lag in the exhaled breath concentration profile, consistent with the experimental data. The DP-PBPK model also predicted significant volatilization of chloroform for a simulated dermal exposure scenario. The end-exposure dermal dose predicted by the DP-PBPK model is similar to that predicted by the EPA-recommended method for short-term exposures, and is significantly greater than the end-exposure dose predicted by the lumped parameter models. However, the net dermal dose predicted by the DP-PBPK model is substantially less than that predicted by the EPA method, due to the post-exposure volatilization predicted by the DP-PBPK model. Moreover, the net dermal dose of chloroform predicted by all three models was nearly the same, even though the lumped parameter models did not predict substantial volatilization.
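The distributed-parameter element is Fick's second law, ∂C/∂t = D ∂²C/∂x². A minimal explicit finite-difference sketch of that equation across a skin layer follows; the grid, boundary conditions, and parameter values are hypothetical and are not the paper's DP-PBPK implementation.

```python
import numpy as np

# Hypothetical skin layer: thickness L (cm), diffusivity Dif (cm^2/s).
L, Dif, nx = 0.002, 1e-8, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / Dif               # satisfies explicit stability dt <= dx^2/(2D)

C = np.zeros(nx)                     # concentration profile across the layer
C[0] = 1.0                           # surface held at unit concentration
for _ in range(20000):               # march Fick's second law forward in time
    C[1:-1] += Dif * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[0], C[-1] = 1.0, 0.0           # fixed surface, sink at the inner boundary

flux_in = Dif * (C[-2] - C[-1]) / dx  # flux into the body (per unit surface conc.)
print(f"inner-boundary flux after {20000 * dt:.0f} s: {flux_in:.3e} cm/s")
```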

13.
We study several finite‐horizon, discrete‐time, dynamic, stochastic inventory control models with integer demands: the newsvendor model, its multi‐period extension, and a single‐product, multi‐echelon assembly model. Equivalent linear programs are formulated for the corresponding stochastic dynamic programs, and integrality results are derived based on the total unimodularity of the constraint matrices. Specifically, for all these models, starting with integer inventory levels, we show that there exist optimal policies that are integral. For the most general single‐product, multi‐echelon assembly system model, integrality results are also derived, by a similar argument, for a practical alternative to stochastic dynamic programming, namely, rolling‐horizon optimization. We also present a different approach to prove integrality results for stochastic inventory models. This new approach is based on a generalization we propose for the one‐dimensional notion of piecewise linearity with integer breakpoints to higher dimensions. The usefulness of this new approach is illustrated by establishing the integrality of both the dynamic programming and rolling‐horizon optimization models of a two‐product capacitated stochastic inventory control system.
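The single-period newsvendor case gives the flavor of the integrality result: with integer (for example, Poisson) demand, the optimal order quantity is the smallest integer q with F(q) ≥ cu/(cu + co), so no fractional solution is needed. A sketch under hypothetical cost parameters:

```python
from scipy.stats import poisson

def newsvendor_integer(mean_demand, cu, co):
    """Smallest integer q with F(q) >= cu / (cu + co) (critical fractile)."""
    target = cu / (cu + co)
    q = 0
    while poisson.cdf(q, mean_demand) < target:
        q += 1
    return q

# Hypothetical costs: cu = underage (lost margin), co = overage (holding/scrap).
q_star = newsvendor_integer(mean_demand=20, cu=5.0, co=2.0)
print(f"critical fractile {5.0 / 7.0:.3f} -> optimal integer order q* = {q_star}")
```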

14.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^(-1/4). This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well-known “identified at infinity” models, lead to estimators that converge at slower than the parametric rate, since, essentially, to ensure point identification one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate-adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
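The treatment-effect example in (ii) is the familiar inverse-propensity-weighted estimator. The sketch below computes it on synthetic data and shows the symptom the paper highlights: when estimated propensities approach 0 or 1, the weights blow up and the estimate becomes unstable. The data-generating process is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(2.0 * X[:, 0])))        # propensities that approach 0 and 1
T = rng.binomial(1, p)
Y = 1.0 * T + X[:, 0] + rng.normal(size=n)    # true average treatment effect = 1

e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# IPW estimator: mean(T*Y/e - (1-T)*Y/(1-e)); unstable when e is near 0 or 1.
ate = np.mean(T * Y / e - (1 - T) * Y / (1 - e))
print(f"IPW ATE estimate: {ate:.3f} (true effect 1.0)")
print(f"largest weight: {(T / e + (1 - T) / (1 - e)).max():.1f}")
```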

15.
Workforce equality has been an important organizational and societal goal for many years, and a number of strategies for achieving it have been recommended and used. Yet, differences in job performance and important job outcomes such as promotion, advancement, and compensation still exist among racioethnic groups. This situation is important for OB researchers to address. What do we know about the causes of these differences? The purpose of this paper is to review the literature on racioethnic group differences in performance and related outcomes, and the models used to explain group differences. We find that four models are used as explanatory frameworks for exploring group differences: the Internal Trait model, Bias and Discrimination model, Response to Discrimination model, and the Organizational Context model. We examine these models and summarize the evidence for each. Based on the review, implications of the models for future research and for the reduction of group differences are discussed.

16.
The alleviation of food-borne diseases caused by microbial pathogens remains a great concern in ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious to single out one best model but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated using the selected models and by averaging over all models using Akaike's weights. In addition to including parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and attains coverage probabilities that are closer to the 95% nominal level compared to best-fitting models according to the Akaike information criterion.
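Akaike weights are straightforward to compute once each candidate model's AIC is in hand: w_i = exp(-(AIC_i - AIC_min)/2), normalized to sum to one. The sketch below applies them to hypothetical AICs and low-dose risk estimates; the actual fractional-polynomial fits are in the paper.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: w_i = exp(-(AIC_i - AIC_min)/2), normalized to sum to 1."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-delta / 2)
    return w / w.sum()

# Hypothetical AICs and low-dose risk estimates from four fitted
# fractional-polynomial dose-response models.
aic = [212.4, 213.1, 215.8, 220.2]
risk = [1.2e-3, 1.9e-3, 0.8e-3, 2.5e-3]

w = akaike_weights(aic)
print("weights:", np.round(w, 3))
print(f"model-averaged risk: {np.dot(w, risk):.2e}")
```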

17.
Despite a voluminous literature, business model research continues to be plagued with problems. Those problems hinder theory development and make it difficult for managers to use research findings in their decision-making. In our article, we seek to make three contributions. First, we clarify the theoretical foundations of the business model concept and relate them to the five elements of a business model: customers, value propositions, product/service offerings, value creation mechanisms, and value appropriation mechanisms. A clear definition of a business model enables theory to develop systematically and provides coherent guidance to managers. Second, we suggest that value configuration is a contingency variable that should be included in future theorizing and model building. Each of the elements of a business model is affected by a firm's value configuration depending on whether the firm is a value chain, value shop, or value network. Third, we link business models to organization design. We show how organization design is affected by value configuration and how new collaborative organizational forms enable open and agile business models. We derive the implications of our analysis for future research and management practice.

18.
The decision process involved in cleaning sites contaminated with hazardous, mixed, and radioactive materials is often supported by results obtained from computer models. These results provide limits within which a decision-maker can judge the importance of individual transport and fate processes, and the likely outcome of alternative cleanup strategies. The transport of hazardous materials may occur predominantly through one particular pathway but, more often, actual or potential transport must be evaluated across several pathways and media. Multimedia models are designed to simulate the transport of contaminants from a source to a receptor through more than one environmental pathway. Three such multimedia models are reviewed here: MEPAS, MMSOILS, and PRESTO-EPA-CPG. The reviews are based on documentation provided with the software, on published reviews, on personal interviews with the model developers, and on model summaries extracted from computer databases and expert systems. The three models are reviewed within the context of specific media components: air, surface water, ground water, and food chain. Additional sections evaluate the way that these three models calculate human exposure and dose and how they report uncertainty. Special emphasis is placed on how each model handles radionuclide transport within specific media. For the purpose of simulating the transport, fate, and effects of radioactive contaminants through more than one pathway, both MEPAS and PRESTO-EPA-CPG are adequate for screening studies; MMSOILS only handles nonradioactive substances and must be modified before it can be used in these same applications. Of the three models, MEPAS is the most versatile, especially if the user needs to model the transport, fate, and effects of hazardous and radioactive contaminants.

19.
In an earlier issue of Decision Sciences, Jesse, Mitra, and Cox [1] examined the impact of inflationary conditions on the economic order quantity (EOQ) formula. Specifically, the authors analyzed the effect of inflation on order quantity decisions by means of a model that takes into account both inflationary trends and time discounting (over an infinite time horizon). In their analysis, the authors utilized two models: a current-dollars model and a constant-dollars model. These models were derived, of course, by setting up a total cost equation in the usual manner and then finding the optimum order quantity that minimizes the total cost. Jesse, Mitra, and Cox [1] found that the EOQ is approximately the same under both conditions, with or without inflation. However, we disagree with the conclusion drawn by [1] and show that the EOQ will be different under inflationary conditions, provided that the inflationary conditions are properly accounted for in the formulation of the total cost model.

20.
An interview with nationally known futurist Leland Kaiser, PhD, on the changes physician executives are likely to face as a result of the coming dislocation in the health professions. Or will there be a shrinking career pie at all? The real question is: What new mental models are we going to use and, as a result of the new models, what new jobs are going to be created that will ameliorate some of the surplus we've created in the old model? Dr. Kaiser predicts a model will soon emerge that will open a myriad of new career opportunities for physicians. The new model he foresees is community-based medicine.
