Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
The explanatory potential of four forms of expectancy theory with additive and multiplicative expectancy terms and linear and nonlinear valence functions was contrasted. A behavioral decision-making theory approach was used in which 101 subjects were asked to make 128 hypothetical job-choice decisions. More than 25,800 decisions under a within-subjects framework were analyzed. Results indicate that the majority (83 percent) of subjects employed additive or multiplicative expectancy models with linear valence functions. However, the predictive efficacy of the expectancy theory model was improved for 17 percent of the subjects when nonlinear valence terms were introduced. The findings imply that different functional forms of expectancy theory may be needed to model individuals' decision-making processes.
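The four functional forms contrasted above can be sketched in a few lines. The attribute values, the quadratic valence transform, and the function name are illustrative assumptions, not the instruments used in the study.

```python
def force(expectancies, valences, combine="multiplicative",
          valence_fn=lambda v: v):
    """Motivational force of a job option over its attributes.

    combine    -- "multiplicative" (Vroom-style expectancy x valence)
                  or "additive" (expectancy + valence per attribute).
    valence_fn -- identity for a linear valence function; pass a
                  monotone transform (e.g., lambda v: v ** 2) for a
                  nonlinear valence function.
    """
    pairs = zip(expectancies, valences)
    if combine == "additive":
        return sum(e + valence_fn(v) for e, v in pairs)
    return sum(e * valence_fn(v) for e, v in pairs)

# Multiplicative-linear form for a two-attribute job option:
score = force([0.8, 0.5], [3.0, 4.0])   # 0.8*3 + 0.5*4 = 4.4
```

Crossing the two combination rules with the two valence shapes yields exactly the four model forms the study contrasts.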

2.
A. Schepanski, Decision Sciences, 1983, 14(4): 503-512
Previous experimental judgment research in accounting has been interpreted as supportive of the linear model as an appropriate representation of decision-making behavior in nearly all of the tasks investigated. Moreover, nonlinear models have come to be viewed as adding relatively little predictive power over that provided by the linear model, even in tasks considered to be inherently nonlinear. These conclusions were based largely upon evaluating the predictive ability of the linear model in terms of statistics measuring the proportion of variance accounted for by the model. In the present paper it is argued that, since these statistics are not independent of the experimental design, it is not clear whether the high correlations are indicative of the model's success in representing decision-making processes or instead are more the result of various features of the experimental design. It is suggested that correlational tests be supplemented with qualitative tests of the predictive ability of a model. Implications for accounting are discussed.

3.
Four discriminant models were compared in a simulation study: Fisher's linear discriminant function [14], Smith's quadratic discriminant function [34], the logistic discriminant model, and a model based on linear programming [17]. The study was conducted to estimate expected rates of misclassification for these four procedures when observations were sampled from a variety of normal and nonnormal distributions. In contrast to previous research, data were taken from four types of kurtotic population distributions. The results indicate the four discriminant procedures are robust toward data from many types of distributions. The misclassification rates for both the logistic discriminant model and the formulation based on linear programming consistently decreased as the kurtosis in the data increased. The decreases, however, were of small magnitude. None of these procedures yielded significantly lower rates of misclassification under nonnormality. The quadratic discriminant function produced significantly lower error rates when the variances across groups were heterogeneous.
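For two univariate normal classes, the linear and quadratic rules reduce to comparing Gaussian log-densities, which shows why the quadratic rule wins under heterogeneous variances. This is a minimal sketch with hypothetical parameters and equal priors by default, not the simulation design of the study.

```python
import math

def qda_classify(x, mean0, var0, mean1, var1, prior0=0.5):
    """Univariate quadratic discriminant rule: assign x to the class
    with the larger Gaussian log-density plus log-prior. When
    var0 == var1 this collapses to Fisher's linear rule."""
    def log_density(m, v):
        return -0.5 * math.log(v) - (x - m) ** 2 / (2.0 * v)
    score0 = math.log(prior0) + log_density(mean0, var0)
    score1 = math.log(1.0 - prior0) + log_density(mean1, var1)
    return 0 if score0 >= score1 else 1

# Equal variances: the boundary is the midpoint between the means.
# Heterogeneous variances: the quadratic rule favors the more diffuse
# class far from both means, a boundary the linear rule cannot express.
```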

4.
Multiattribute utility theory (MAUT) was employed to model the professional judgments of external auditors. Fully developed MAUT models elicited from each subject according to Keeney and Raiffa's [6] procedures were used to predict the internal control systems evaluations made by auditor-subjects. Correlation analyses were used to compare the predictive ability of the “correct” MAUT models to the accuracy of models developed under simplifications of the MAUT procedures. One simplified model resulted from relaxing the requirements for attribute independence that determine the functional forms. A second modified MAUT function was formed using unitary weightings on conditional utility functions instead of elicited scaling constants. Tests showed essentially no significant differences in predictive accuracy among the models in the context of this study.
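The additive and multiplicative MAUT forms under Keeney and Raiffa's procedures look as follows. The attribute utilities, scaling constants, and the master constant K below are illustrative assumptions, not the elicited auditor models.

```python
def additive_utility(cond_utils, weights):
    """Additive MAUT form: U = sum_i k_i * u_i, valid when the scaling
    constants k_i sum to one. Unitary weighting replaces the elicited
    k_i with equal constants."""
    return sum(k * u for k, u in zip(weights, cond_utils))

def multiplicative_utility(cond_utils, weights, K):
    """Keeney-Raiffa multiplicative form:
    1 + K*U = prod_i (1 + K * k_i * u_i), where the master constant K
    solves 1 + K = prod_i (1 + K * k_i) whenever sum(k_i) != 1."""
    prod = 1.0
    for k, u in zip(weights, cond_utils):
        prod *= 1.0 + K * k * u
    return (prod - 1.0) / K
```

With scaling constants (0.5, 0.3), the master constant solves 1 + K = (1 + 0.5K)(1 + 0.3K), giving K = 4/3, and the best outcome on every attribute scores a utility of exactly one.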

5.
A continuing gap exists between the capabilities of sophisticated computer-based information systems and the extent to which these systems are used by individuals. Studies which have examined the relationship between system utilization and various user, system, implementation, and organizational variables have provided few consistent findings. A new approach to this topic is suggested by a recent study by Davis, Bagozzi, and Warshaw [11], which indicates that individuals' intentions to use a system determine subsequent use. A large body of psychology-based research also supports this relationship between behavioral intentions and subsequent behavior. This study employs expectancy theory, which has often been used to examine behavioral intentions, to explain managers' intentions to use a decision support system (DSS). The results imply that the variables of the expectancy force model are determinants of a manager's behavioral intentions to use a DSS, and the variables of the expectancy valence model are determinants of the attractiveness of using a DSS to a manager.

6.
A recent article [3] proposed a definable relationship between production competence and business performance and presented empirical evidence to support the relationship. The purpose of this note is four-fold. First, it corrects the authors' numerical measure of production competence. The correction changes the nature of the relationship between competence and performance. Second, this note suggests an improved numerical measure of business performance (the dependent variable in the study). The authors of [3] defined performance in a manner which inadvertently captures elements used to measure production competence (the independent variable). The result is a deceptively close fit of the authors' model with the data. The third purpose of the note is to introduce a more appropriate theoretical framework for the production competence construct. It is shown that production competence is closely related to the formulation and implementation of manufacturing strategy and can best be understood within that context. Last, an alternative conceptual model of the relationship between business strategy, production competence, and business performance is presented. The new model includes a construct which measures the “fit” of a firm's business strategy to its external, competitive environment.

7.
This paper presents a new method for the analysis of moral hazard principal–agent problems. The new approach avoids the stringent assumptions on the distribution of outcomes made by the classical first‐order approach and instead only requires the agent's expected utility to be a rational function of the action. This assumption allows for a reformulation of the agent's utility maximization problem as an equivalent system of equations and inequalities. This reformulation in turn transforms the principal's utility maximization problem into a nonlinear program. Under the additional assumptions that the principal's expected utility is a polynomial and the agent's expected utility is rational in the wage, the final nonlinear program can be solved to global optimality. The paper also shows how to first approximate expected utility functions that are not rational by polynomials, so that the polynomial optimization approach can be applied to compute an approximate solution to nonpolynomial problems. Finally, the paper demonstrates that the polynomial optimization approach extends to principal–agent models with multidimensional action sets.

8.
The purpose of this research is to show the usefulness of three relatively simple nonlinear classification techniques for policy-capturing research where linear models have typically been used. This study uses 480 cases to assess the decision-making process used by 24 experienced national bank examiners in classifying commercial loans as acceptable or questionable. The results from multiple discriminant analysis (a linear technique) are compared to those of chi-squared automatic interaction detector analysis (a search technique), log-linear analysis, and logit analysis. Results show that while the four techniques are equally accurate in predicting loan classification, chi-squared automatic interaction detector analysis (CHAID) and log-linear analysis enable the researcher to analyze the decision-making structure and examine the “human” variable within the decision-making process. Consequently, if the sole purpose of research is to predict the decision maker's decisions, then any one of the four techniques turns out to be equally useful. If, however, the purpose is to analyze the decision-making process as well as to predict decisions, then CHAID or log-linear techniques are more useful than linear model techniques.

9.
Torrance [4] has proposed a “new” approach to finding an initial solution of a linear programming problem for use in conjunction with the dual simplex algorithm. The purpose of this note is to comment on two aspects of that paper. Firstly, Torrance's method is not new at all, but was proposed in 1958 by Wagner. Secondly, the method can be implemented much more efficiently than Torrance suggests.

10.
Online sales platforms have grown substantially in recent years. These platforms assist sellers to conduct sales, and in return, collect service fees from sellers. We study the fee policies by considering a fee‐setting platform, on which a seller may conduct a sale with a reserve price to a group of potential buyers: the seller retains the object for sale if the final trading price is below the reserve price. The platform may charge two types of fees as in current practice: a reserve fee as a function of the seller's reserve price and a final value fee as a function of the sale's final trading price. We derive the optimality condition for fee policies, and show that the platform can use either just a final value fee or just a reserve fee to achieve optimality. In the former case, the optimal final value fee charged by the platform is independent of the number of buyers. In the latter case, the optimal reserve fee is often a decreasing, instead of increasing, function of the seller's reserve price. An increasing reserve fee may make the seller reluctant to use a positive reserve price and hurt the platform's revenue. In general, the optimal fees are nonlinear functions, but in reality, linear fees are commonly used because of their simplicity for implementation. We show that a linear fee policy is indeed optimal in the case that the seller's valuation follows a power distribution. In other cases, our numerical analysis suggests close‐to‐optimal performance of the linear policy.

11.
Elton, Gruber, and Padberg's [2] [3] ranking procedure and Kwan's [6] nonranking procedure for optimal portfolio selection lead to the same solution. This is because of a particular functional property of the cutoff rate for security performance. In this note, the robustness of that functional property is demonstrated when the normality of security returns assumed in the above studies is relaxed to encompass the general family of stable Paretian distributions. The proof here is an important step toward portfolio analysis using some multi-index models when securities cannot be ranked.
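The cutoff rate in question is the cumulative Elton-Gruber-Padberg quantity computed over securities ranked by excess return to beta. A standard single-index sketch, with hypothetical inputs:

```python
def cutoff_rates(excess_returns, betas, resid_vars, market_var):
    """Cumulative EGP cutoff rates C_i for securities already ranked
    by excess return to beta under a single-index model:

        C_i = s_m^2 * sum_{j<=i} (R_j - R_f) * b_j / s_ej^2
              ---------------------------------------------
              1 + s_m^2 * sum_{j<=i} b_j^2 / s_ej^2

    where s_m^2 is the market variance and s_ej^2 the residual
    variance of security j."""
    rates, num, den = [], 0.0, 0.0
    for er, b, rv in zip(excess_returns, betas, resid_vars):
        num += er * b / rv
        den += b * b / rv
        rates.append(market_var * num / (1.0 + market_var * den))
    return rates
```

Under the ranking procedure, securities whose excess return to beta exceeds the final cutoff C* enter the optimal portfolio; the functional property the note exploits concerns how C_i varies with the ranked set.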

12.
Since its publication, Palda's [12] initial work which supported the existence of a lagged relationship between advertising expenditures and sales has been frequently discussed and criticized. That criticism has been directed at both methodological and structural issues. This paper is an attempt to answer some of the questions which have been raised regarding the structural issues concerning the role of autocorrelation in Palda's results and to develop the implications of these structural issues for the marketing manager.

13.
In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models—generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region that are accurate to within one or two. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to that of four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates.
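The simplest of the three model families above, a Poisson GLM with a log link, can be fitted by Newton-Raphson in a few lines. This one-covariate sketch is purely didactic and not the authors' implementation; the covariate stands in for something like instrumental intensity.

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """One-covariate Poisson GLM with log link, E[y] = exp(b0 + b1*x),
    fitted by Newton-Raphson on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu          # score w.r.t. intercept
            g1 += (yi - mu) * xi   # score w.r.t. slope
            h00 += mu              # information-matrix entries
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: H^{-1} g
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

A GAM replaces the linear predictor b0 + b1*x with a smooth function of x, which is why it can absorb the ignition/ground-motion nonlinearity that forces the GLM to lean on instrumental intensity.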

14.
In this paper, I consider a dynamic economy in which a government needs to finance a stochastic process of purchases. The agents in the economy are privately informed about their skills, which evolve stochastically over time; I impose no restriction on the stochastic evolution of skills. I construct a tax system that implements a symmetric constrained Pareto optimal allocation. The tax system is constrained to be linear in an agent's wealth, but can be arbitrarily nonlinear in his current and past labor incomes. I find that wealth taxes in a given period depend on the individual's labor income in that period and previous ones. However, in any period, the expectation of an agent's wealth tax rate in the following period is zero. As well, the government never collects any net revenue from wealth taxes.

15.
A recent Decision Sciences article by Jordan [9] presented a Markov-chain model of a just-in-time (JIT) production line. This model was used to estimate average inventories and production rates to find the optimal number of kanbans. Results for expected production rate were found to be consistently lower than those obtained by Huang, Rees, and Taylor [8] in a previous Decision Sciences article. Jordan attributed this unexpected outcome to some procedural problems in Huang et al.'s simulation methodology. In this paper, Markov-numerical analysis is used to compare the performance of Jordan's and Huang et al.'s methods of production control. Simulation analysis is then used to determine the effects of finite withdrawal cycle times. Results show that, for equal numbers of kanbans, Huang et al.'s two-card method of production control provides substantially greater expected production rates than Jordan's method. These results suggest that the Jordan model should not be applied to the problem of setting kanban numbers on manual JIT lines. Finally, we comment on the efficiency of Jordan's iterative method to obtain performance measures of tandem queues.

16.
The problem of reducing project duration efficiently arises frequently, routinely, and repetitively in government and industry. Siemens [1] has presented an inherently simple time-cost tradeoff algorithm (SAM—for Siemens Approximation Method) for determining which activities in a project network must be shortened to meet an externally imposed (scheduled) completion date (which occurs prior to the current expected completion date). In that paper the network activities of the example problem all have constant cost-slopes. Siemens mentions that the algorithm can be used where the activities have (convex) nonlinear cost-slopes—instead of just one cost-slope and one supply (time available for shortening) for each activity, there can be multiple cost-slope/supply pairs for each activity. This technique is illustrated in this paper. Also illustrated here is an improvement suggested by Goyal [2]. In step 12 of the original algorithm Siemens suggests a review of the solution obtained by the first eleven steps to eliminate any unnecessary shortening. Goyal's modification does this systematically during application of the algorithm by de-shortening (partially or totally) selected activities which were shortened in a prior iteration. He claims that, empirically at least, the technique always yields an optimal solution. Our experience verifies this claim (given the assumption of convex cost functions). The authors have modified the original algorithm so that the requirement for convex cost functions can now be relaxed. Unfortunately, this modification is made only at the expense of simplicity. To further complicate matters we found that Goyal's technique does not always yield an optimal solution when concave functions are involved and thus still another modification was required. These are discussed in detail below. Finally, we discuss the applicability of the algorithm to situations involving discrete time-cost functions.
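The multiple cost-slope/supply pairs mentioned above are easiest to see in the degenerate case of a purely serial network, where every activity is critical and, with convex piecewise-linear costs, crashing reduces to consuming the cheapest slopes first. This toy sketch (hypothetical durations, slopes, and supplies) illustrates that special case only, not the full SAM algorithm:

```python
def crash_serial_project(activities, target):
    """Greedy time-cost tradeoff for a purely serial network in which
    every activity is critical. Each activity is
    (duration, [(cost_slope, supply), ...]) with slopes nondecreasing
    (convex cost). Returns (achieved_duration, extra_cost)."""
    duration = sum(d for d, _ in activities)
    # With all activities critical and convex costs, the cheapest
    # slopes can be consumed globally in ascending order.
    options = sorted((slope, supply)
                     for _, segments in activities
                     for slope, supply in segments)
    cost = 0.0
    for slope, supply in options:
        if duration <= target:
            break
        cut = min(supply, duration - target)
        duration -= cut
        cost += cut * slope
    return duration, cost
```

In a general network the critical path can shift as activities are shortened, which is precisely the bookkeeping SAM handles and why the concave case defeats the simple greedy rule.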

17.
A modification of Huxley's [3] mail response model is proposed. This new approach retains the simplicity and intuitiveness of Huxley's technique and leads to statistically valid conclusions. Using this model, a procedure is developed to find the optimal number of questionnaires that should be mailed in order to meet some prespecified target.
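The abstract does not specify the modified model, but the flavor of the computation can be sketched with a simple binomial response model: mail enough questionnaires that the expected (or, with a one-sided normal-approximation buffer z, the assured) number of returns meets the target. The response rate and z below are hypothetical inputs, not Huxley's formulation.

```python
import math

def questionnaires_to_mail(target, response_rate, z=0.0):
    """Smallest n with n*p - z*sqrt(n*p*(1-p)) >= target under a
    binomial response model; z = 0 gives the plain expected-value
    rule n = ceil(target / p)."""
    p = response_rate
    n = math.ceil(target / p)
    while n * p - z * math.sqrt(n * p * (1.0 - p)) < target:
        n += 1
    return n

# Expecting 100 returns at a 25% response rate:
n_plain = questionnaires_to_mail(100, 0.25)   # 400
```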

18.
Subjective probability distributions constitute an important part of the input to decision analysis and other decision aids. The long list of persistent biases associated with human judgments under uncertainty [16] suggests, however, that these biases can be translated into the elicited probabilities which, in turn, may be reflected in the output of the decision aids, potentially leading to biased decisions. This experiment studies the effectiveness of three debiasing techniques in elicitation of subjective probability distributions. It is hypothesized that the Socratic procedure [18] and the devil's advocate approach [6] [7] [31] [32] [33] [34] will increase subjective uncertainty and thus help assessors overcome a persistent bias called “overconfidence.” Mental encoding of the frequency of the observed instances into prespecified intervals, however, is expected to decrease subjective uncertainty and to help assessors better capture, mentally, the location and skewness of the observed distribution. The assessors' ratings of uncertainty confirm these hypotheses related to subjective uncertainty but three other measures based on the dispersion of the elicited subjective probability distributions do not. Possible explanations are discussed. An intriguing explanation is that debiasing may affect what some have called “second order” uncertainty. While uncertainty ratings may include this second component, the measures based on the elicited distributions relate only to “first order” uncertainty.

19.
Critics of previous laboratory experiments comparing devil's advocacy (DA) to dialectical inquiry (DI) have suggested that these experiments produced misleading results because (1) they used subjects who had low levels of task involvement and (2) the DI treatment used was confusing to subjects and required further explanation to be useful. The present study examines the effects of four inquiry methods—expert (E), DA, DI, and DI with explanatory statement (DI+)—on subjects' performance at a financial prediction task. Results show that DA, DI, and DI + were superior to E when the state of the world differed significantly from assumptions underlying the expert's plan. For subjects with high task involvement, DI and DI + were more effective than E and DA. The results support some of the criticisms of previous laboratory research and suggest that future research on these decision aids should include task involvement as a factor.

20.
Robert L. Winkler's paper [1] provides a comprehensive overview of challenging research areas for decision making under uncertainty. Hence, rather than try to extend the list of research areas identified, this note will attempt to embellish some that I feel are particularly important, areas in which the value of systematic research is especially high. For convenience, the discussion will be organized under the four research categories identified by Winkler, with a couple of suggestions following in an “implementation research” category. The reader will note, however, that many of the suggested topics actually relate to more than one research category.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号