Similar Documents
20 similar documents found (search time: 15 ms)
1.
Techniques used in decision sciences and business research to estimate interactions between latent variables are limited in controlling for measurement error. This article uses a latent structure modeling approach that substantially controls for measurement error in nonlinear relationships. The results of this technique are compared to the results obtained applying hierarchical regression analysis and the impact of measurement error is assessed. The paper provides a unique assessment of the validity of the multi-attribute attitude model. The validity of the multiplicative rule in the model is supported.

2.
A logit model approach designed to estimate the probability that prospective jurors will favor the defendant or plaintiff in a case, as a function of perceived individual juror characteristics, is described in the context of a case situation. The model, which has been programmed on a hand-held computer, is designed for implementation in courtroom settings to help defense attorneys evaluate and select jurors in order to minimize the likelihood of large jury awards. Empirical tests of the model are also described.
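The core of such a model is a simple logistic transform of a weighted sum of juror characteristics. The sketch below is illustrative only: the characteristics and coefficient values are hypothetical placeholders, not the paper's estimates.

```python
import math

# Hypothetical coefficients; the paper's actual estimates are not given here.
COEFS = {"intercept": -0.8, "age": 0.02, "authoritarianism": 0.5, "prior_service": -0.3}

def p_favor_plaintiff(age, authoritarianism, prior_service):
    """Logit model: P(favor plaintiff) = 1 / (1 + exp(-x'b))."""
    z = (COEFS["intercept"]
         + COEFS["age"] * age
         + COEFS["authoritarianism"] * authoritarianism
         + COEFS["prior_service"] * prior_service)
    return 1.0 / (1.0 + math.exp(-z))
```

A defense attorney would screen prospective jurors by ranking them on this probability, striking those most likely to favor the plaintiff.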

3.
Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses robust regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust regression is used because it identifies and is resistant to the effects of outlying observations, key components in the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.
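The idea can be sketched with any outlier-resistant line fit; the Theil-Sen estimator (median of pairwise slopes) is used here as a simple stand-in for the paper's robust regression. Robots whose rated performance lies above the robust cost-performance line are the better-than-typical performers for their cost and become test candidates. The data below are invented for illustration.

```python
from itertools import combinations
from statistics import median

def theil_sen(costs, perfs):
    """Robust line fit: median of all pairwise slopes, then median intercept."""
    slopes = [(p2 - p1) / (c2 - c1)
              for (c1, p1), (c2, p2) in combinations(zip(costs, perfs), 2)
              if c2 != c1]
    m = median(slopes)
    b = median([p - m * c for c, p in zip(costs, perfs)])
    return m, b

def candidate_robots(costs, perfs):
    """Indices of robots performing above the robust cost-performance line."""
    m, b = theil_sen(costs, perfs)
    return [i for i, (c, p) in enumerate(zip(costs, perfs)) if p > m * c + b]
```

Because the slope is a median, a single robot with an outlying specification shifts the line far less than it would under ordinary least squares.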

4.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared to those obtained from the bootstrap through Monte Carlo simulations. The computer intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true value of the standard errors while the asymptotic formula underestimates it. However, for small samples the bootstrap estimates are substantially closer to the true value than their counterpart derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of percent of women legislators and the percent vote for Reagan. In the second example, the probability that an ingot is ready to roll is modelled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The value of the study to better decision making through accurate statistical inference is discussed.
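The bootstrap-versus-asymptotic comparison can be illustrated on a simpler estimator than a logistic coefficient. The sketch below contrasts the textbook asymptotic standard error of a sample mean with its nonparametric bootstrap counterpart; it shows the resampling mechanics, not the paper's logistic-regression computation.

```python
import random
import statistics

def asymptotic_se(sample):
    """Large-sample standard error of the mean: s / sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

def bootstrap_se(sample, reps=2000, seed=1):
    """Nonparametric bootstrap: resample with replacement, then take the
    standard deviation of the resampled means."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(sample, k=len(sample)))
             for _ in range(reps)]
    return statistics.stdev(means)
```

For a logistic regression the same loop applies, with each resample refit by maximum likelihood and the coefficient recorded in place of the mean.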

5.
This article provides decision makers with a method of determining the variability and acceptability of a major capital investment. The model used here differs from previous models in that it does not use simulation, nor does it require a normal distribution for the cash flow component. Further, it has no restrictions on whether cash flows are dependent. An example of the technique is included.

6.
A decision regarding development and introduction of a potential new product depends, in part, on the intensity of competition anticipated in the marketplace. In the case of a technology-based product such as a personal computer (PC), the number of competing products may be very dynamic and consequently uncertain. We address this problem by modeling growth in the number of new PCs as a stochastic counting process, incorporating product entries and exits. We demonstrate how to use the resulting model to forecast competition five years in advance.
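A stochastic entry-exit counting process of this kind can be forecast by Monte Carlo. The sketch below assumes Poisson entries and independent exits with a fixed probability per incumbent per year; these distributional choices and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's multiplication sampler; fine for small rates (lam > 0)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def forecast_competitors(n0, entry_rate, exit_prob, years, reps=4000, seed=0):
    """Mean forecast of the number of competing products after `years`,
    starting from n0, with Poisson(entry_rate) entries per year and each
    incumbent exiting independently with probability exit_prob."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            exits = sum(rng.random() < exit_prob for _ in range(n))
            n = n - exits + _poisson(rng, entry_rate)
        total += n
    return total / reps
```

The simulated mean tracks the recursion E[n_{t+1}] = (1 - exit_prob) E[n_t] + entry_rate, which converges toward entry_rate / exit_prob.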

7.
This paper presents a new linear model methodology for clustering judges with homogeneous decision policies and differentiating dimensions which distinguish judgment policies. This linear policy capturing model based on canonical correlation analysis is compared to the standard model based on regression analysis and hierarchical agglomerative clustering. Potential advantages of the new methodology include simultaneous instead of sequential consideration of information in the dependent and independent variable sets, decreased interpretational difficulty in the presence of multicollinearity and/or suppressor/moderator variables, and a more clearly defined solution structure allowing assessment of a judge's relationship to all of the derived, ideal policy types. An application to capturing policies of information systems recruiters responsible for hiring entry-level personnel is used to compare and contrast the two techniques.

8.
This paper presents point and interval estimators of both long-run and single-period target quantities in a simple cost-volume-profit (C-V-P) model. This model is a stochastic version of the “accountant's break-even chart” where the major component is a semivariable cost function. Although these features suggest obvious possibilities for practical application, a major purpose of this paper is to examine the statistical properties of target quantity estimators in C-V-P analysis. It is shown that point estimators of target quantity are biased and possess no moments of positive order, but are consistent. These properties are also shared by previous break-even models, even when all parameters are assumed known with certainty. After a test for positive variable margins, Fieller's [6] method is used to obtain interval estimators of relevant target quantities. This procedure therefore minimizes possible ambiguities in stochastic break-even analysis (noted by Ekern [3]).
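Fieller's method gives an interval for a ratio of two approximately normal estimates, which is exactly the form of a break-even quantity (fixed cost divided by unit contribution margin). The sketch below implements the standard Fieller interval; the numerical inputs in the usage are invented for illustration.

```python
import math

def fieller_interval(a, b, var_a, var_b, cov_ab, z=1.96):
    """Fieller 95% interval for the ratio a / b, e.g., estimated fixed cost
    over estimated unit margin (the break-even quantity). Requires g < 1,
    i.e., the denominator significantly nonzero, matching the paper's
    preliminary test for a positive variable margin."""
    g = z * z * var_b / (b * b)
    if g >= 1:
        raise ValueError("margin not significantly nonzero; interval is unbounded")
    r = a / b
    centre = r - g * cov_ab / var_b
    half = (z / abs(b)) * math.sqrt(
        var_a - 2 * r * cov_ab + r * r * var_b
        - g * (var_a - cov_ab * cov_ab / var_b))
    return (centre - half) / (1 - g), (centre + half) / (1 - g)
```

Unlike naively dividing two confidence bounds, the Fieller interval has correct coverage even though the ratio estimator itself has no finite moments.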

9.
The perceived usefulness of information is an important construct for the design of management information systems. Yet an examination of existing measures of perceived usefulness shows that the instruments developed have not been validated nor has their reliability been verified. In this paper a new instrument for measuring two dimensions of perceived usefulness is developed. The results of an empirical study designed to test the reliability and construct validity of this instrument in a capital-budgeting setting are presented.

10.
Optimal linear discriminant models maximize percentage accuracy for dichotomous classifications, but are rarely used because a theoretical framework that allows one to make valid statements about the statistical significance of the outcomes of such analyses does not exist. This paper describes an analytic solution for the theoretical distribution of optimal values for univariate optimal linear discriminant analysis, under the assumption that the data are random and continuous. We also present the theoretical distribution for sample sizes up to N = 30. The discovery of a statistical framework for evaluating the performance of optimal discriminant models should greatly increase their use by scientists in all disciplines.
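In the univariate case, the optimal discriminant is just the cut point that maximizes classification accuracy, found by exhaustive scan. The sketch below shows that search (both orientations of the cut are checked); it illustrates the optimization, not the paper's distributional theory.

```python
def optimal_cut(values, labels):
    """Univariate optimal discriminant: scan midpoints between sorted unique
    values (plus the two extremes) and return (best accuracy, cut point)."""
    pts = sorted(set(values))
    cuts = [pts[0] - 1] + [(x + y) / 2 for x, y in zip(pts, pts[1:])] + [pts[-1] + 1]
    n = len(values)
    best_acc, best_cut = 0.0, cuts[0]
    for c in cuts:
        hits = sum((v > c) == (y == 1) for v, y in zip(values, labels))
        acc = max(hits, n - hits) / n  # n - hits covers the flipped orientation
        if acc > best_acc:
            best_acc, best_cut = acc, c
    return best_acc, best_cut
```

Because the cut is chosen to maximize accuracy on the sample, the resulting accuracy is optimistically biased; that is precisely why the paper's null distribution of optimal values is needed for significance testing.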

11.
A preference-order recursion algorithm for obtaining a relevant subset of pure, admissible (non-dominated, efficient) decision functions which converges towards an optimal solution in statistical decision problems is proposed. The procedure permits a decision maker to interactively express strong binary preferences for partial decision functions at each stage of the recursion, from which an imprecise probability and/or utility function is imputed and used as one of several pruning mechanisms to obtain a reduced relevant subset of admissible decision functions or to converge on an optimal one. The computational and measurement burden is thereby mitigated significantly, for example, by not requiring explicit or full probability and utility information from the decision maker. The algorithm is applicable to both linear and nonlinear utility functions. The results of behavioral and computational experimentation show that the approach is viable, efficient, and robust.

12.
Janssen and Daniel analyzed the choice between a one- or a two-point conversion for a particular game situation in college football. Their decision criterion was maximum expected utility based on a von Neumann-Morgenstern utility function defined over the game's outcomes. An alternative approach based on a stochastic dominance criterion is presented that does not rely on knowledge of the relative importance of tying vs. winning; rather, it relies on a notion of consistency in the sequential problem.

13.
Certain business practices include legal but ethically questionable activities. Surveys intended to determine the nature and extent of such activities must employ questioning methods which mitigate the inherent threat of sensitive questions and account for social desirability effects. This study uses a national mail survey of chief executive officers (CEOs) of manufacturing firms to compare the performance of direct questioning, scenario, and randomized response methods for estimating the prevalence of several sensitive business practices. The direct questioning and scenario versions used self-reporting (individual-based) questions, as well as the CEO's perceptions of the extent to which others engage in questionable activities (other-based). In general, the estimates of the prevalence of selected questionable activities were lowest when the individual-based direct questioning was used and highest when other-based (either direct questioning or scenario) methods were used. The individual-based scenario and randomized response estimates represented intermediate estimates. Suggested guidelines for using the three methods for eliciting sensitive information are discussed.
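The randomized response technique protects respondents by having a randomizing device decide which of two statements each respondent answers, so no individual answer reveals anything; prevalence is recovered only in aggregate. A minimal sketch of Warner's classic estimator (one common randomized response design; the study's exact design may differ):

```python
def warner_estimate(yes_count, n, p):
    """Warner's randomized-response estimator of true prevalence pi.
    With probability p the respondent answers the sensitive statement,
    otherwise its complement, so the observed 'yes' rate is
    lam = p * pi + (1 - p) * (1 - pi)."""
    if p == 0.5:
        raise ValueError("p = 0.5 makes the prevalence unidentifiable")
    lam = yes_count / n
    pi = (lam - (1 - p)) / (2 * p - 1)
    return min(1.0, max(0.0, pi))  # clamp to the valid probability range
```

For example, with p = 0.7 and a true prevalence of 20%, the expected observed "yes" rate is 0.7(0.2) + 0.3(0.8) = 0.38, and inverting it recovers 0.2.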

14.
It is difficult to devise a statistical test to detect one student copying from another. Many prior efforts falsely accuse students of cheating. A general methodology based on the quantal choice model of decision theory overcomes these problems. Three steps are involved: (1) for each item estimate the probability and the variance that a given respondent will select each response, (2) for pairs of respondents, these probabilities determine the expected number of matches, and (3) compare the critical value to the number of items matched. Methods differ based on the probability estimation technique. Four methods (simple frequencies, Frary, Tideman, and Watts modification [8], logit, and multinomial probit) are compared on theoretical and empirical grounds. Theory and results show that it is crucial to incorporate the variance of the probability estimates. The probit model has theoretical advantages over the other methods and produces more accurate results.
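Steps (2) and (3) can be sketched directly: given per-item response probabilities for a pair of respondents (step 1, however estimated), the match count is a sum of independent Bernoulli events, so its mean and variance follow immediately. The threshold below uses a normal approximation with an illustrative z = 3; the papers' methods differ in how step (1) is done.

```python
import math

def expected_matches(prob_i, prob_j):
    """prob_i[k][r] is the estimated probability that respondent i picks
    response r on item k. Returns the expected number of item matches for
    the pair and its variance, treating items as independent."""
    per_item = [sum(a * b for a, b in zip(pi, pj))
                for pi, pj in zip(prob_i, prob_j)]
    mu = sum(per_item)
    var = sum(p * (1 - p) for p in per_item)
    return mu, var

def flag_pair(observed, mu, var, z=3.0):
    """Step 3: flag only when observed matches exceed mu by z standard errors."""
    return observed > mu + z * math.sqrt(var)
```

On a 40-item test with four equally likely options, chance matching averages 10 items, so even 12 or 15 matches is unremarkable; the variance term is what keeps innocent pairs from being flagged.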

15.
There are numerous variable selection rules in classical discriminant analysis. These rules enable a researcher to distinguish significant variables from nonsignificant ones and thus provide a parsimonious classification model based solely on significant variables. Prominent among such rules are the forward and backward stepwise variable selection criteria employed in statistical software packages such as Statistical Package for the Social Sciences and BMDP Statistical Software. No such criterion currently exists for linear programming (LP) approaches to discriminant analysis. In this paper, a criterion is developed to distinguish significant from nonsignificant variables for use in LP models. This criterion is based on the “jackknife” methodology. Examples are presented to illustrate implementation of the proposed criterion.
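The jackknife core of such a criterion is generic: recompute a statistic (here it would be an LP discriminant weight) with each observation left out, and use the spread of the leave-one-out values as a standard error; a variable whose weight is small relative to this standard error is judged nonsignificant. A minimal sketch of the jackknife step, shown on the sample mean for concreteness:

```python
import math

def jackknife_se(data, stat):
    """Leave-one-out jackknife standard error of stat(data)."""
    n = len(data)
    loo = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((v - mean_loo) ** 2 for v in loo))
```

For the sample mean the jackknife reproduces s / sqrt(n) exactly, which is a convenient correctness check; for an LP discriminant weight, `stat` would re-solve the LP on each reduced sample.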

16.
A methodology for determining a von Neumann-Morgenstern utility function is outlined based on the axioms crucial to such a function. Reconciliation of inconsistent judgments is facilitated using the theory of reciprocal matrices. Numerical measures of the collective divergence of a set of judgments from perfect consistency or coherency are provided.
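A standard numerical divergence-from-consistency measure for a positive reciprocal judgment matrix is Saaty's consistency index (lambda_max - n) / (n - 1), which is zero exactly when all pairwise judgments cohere. The sketch below computes it by power iteration; this is one common concrete choice, and the paper's own measures may differ.

```python
def consistency_index(A, iters=500):
    """(lambda_max - n) / (n - 1) for a positive reciprocal matrix A,
    with lambda_max found by power iteration. Zero iff A is perfectly
    consistent (A[i][j] = w[i] / w[j] for some weight vector w)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    lam = sum(sum(A[i][j] * v[j] for j in range(n)) / v[i] for i in range(n)) / n
    return (lam - n) / (n - 1)
```

Any inconsistent reciprocal matrix has lambda_max strictly greater than n, so the index is positive precisely when the judgments need reconciliation.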

17.
The profusion of robot designs, the cost of testing, and the fact that robot operational parameter maximums are often mutually exclusive are factors that create a complex selection decision for the potential user. While formal robot testing standards are now in place, formal techniques to select robots for the testing process have not been addressed. A linear goal programming model is an effective tool for the decision maker for optimizing the robot selection process in terms of requirement priorities. It is also shown that this model provides a more stable result than the ordinary least squares estimator in the presence of statistical outliers of robot parameters. The methodology is illustrated through the use of current robot specifications.
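For a discrete candidate list, the priority structure of a preemptive goal program reduces to comparing deviation vectors lexicographically: satisfy the top-priority goal as nearly as possible, break ties on the next priority, and so on. The sketch below illustrates that reduction with invented robot specifications and goals; it is not the paper's model or data.

```python
def goal_deviations(spec, goals):
    """Unwanted deviation from each goal, listed in priority order.
    goals: (attribute, target, sense) with sense '>=' (want at least target)
    or '<=' (want at most target)."""
    devs = []
    for attr, target, sense in goals:
        v = spec[attr]
        devs.append(max(0.0, target - v) if sense == '>=' else max(0.0, v - target))
    return devs

def select_robot(specs, goals):
    """Pick the robot whose deviation vector is lexicographically smallest,
    i.e., the preemptive goal program's choice over a discrete candidate set."""
    return min(specs, key=lambda name: goal_deviations(specs[name], goals))
```

Reordering the goals reorders the priorities, which can change the selection even though the specifications are unchanged.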

18.
We address the situation of a firm that needs to dispose of a large, expensive asset (e.g., car, machine tool, earth mover, turbine, house, airplane), with or without a given deadline (and either known or unknown to the buyer). If a deadline exists, the asset is salvaged at a known value which may be zero, or even negative if there is a disposal cost. The asset has a known holding cost and may also have an initial nominal (undiscounted) price. The question is how, if at all, the price should be discounted as time progresses to maximize the expected proceeds. We use a dynamic recursion where each decision stage can be optimized based on classic economic monopoly pricing theory with a demand intensity function estimated from sales data, and show that the model is well-behaved in the sense that the optimal price and optimal expected revenue monotonically decline as the deadline approaches. We test the model by comparing its optimal price pattern to the official pricing policy practiced at a used-car dealer. We then extend the model to situations where the buyer knows the seller's deadline and thus may alter his behavior as the deadline approaches.
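The backward recursion can be sketched in a few lines: at each stage, choose the price maximizing (sale probability) x price + (no-sale probability) x (continuation value minus holding cost), starting from the salvage value at the deadline. The exponential sale probability q(p) = exp(-p / beta), the parameter values, and the price grid below are all illustrative assumptions, not the paper's estimated demand intensity.

```python
import math

def optimal_prices(periods, salvage, holding_cost, beta=1000.0, price_grid=None):
    """Backward recursion for pricing one asset against a deadline. Returns
    (prices, values) ordered from the start of the horizon to the deadline."""
    if price_grid is None:
        price_grid = [100.0 * i for i in range(1, 101)]  # 100 .. 10000
    v = salvage  # value at the deadline
    prices, values = [], []
    for _ in range(periods):
        best_p, best_v = None, -float("inf")
        for p in price_grid:
            q = math.exp(-p / beta)  # assumed per-period sale probability
            val = q * p + (1 - q) * (v - holding_cost)
            if val > best_v:
                best_p, best_v = p, val
        prices.append(best_p)
        values.append(best_v)
        v = best_v
    prices.reverse()
    values.reverse()
    return prices, values
```

Even in this stylized version, the well-behavedness the paper proves is visible: both the optimal price and the expected revenue decline monotonically as the deadline approaches.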

19.
A neural network model that processes input data consisting of financial ratios is developed to predict the financial health of thrift institutions. The network's ability to discriminate between healthy and failed institutions is compared to a traditional statistical model. The differences and similarities in the two modelling approaches are discussed. The neural network, which uses the same financial data, requires fewer assumptions, achieves a higher degree of prediction accuracy, and is more robust.

20.
This paper proposes a decision rule to rank actions under strict uncertainty, the available information being limited to the states of nature, the set of alternative rows, and the consequence of choosing every row if a given state occurs. This rule is suitable to moderately pessimistic individuals and social groups, these agents being neither maximax nor maximin decision makers but people who assume that the best outcome from the action will not occur. For these decision makers the paper shows the existence of a consistent weight system in which one and only one weight is attached to each state of the world under plausible conditions of domination. Most of the traditional axioms are satisfied by the proposed ranking approach. In the frame of disappointment (measured by ranges of column dispersion), the meaning of some controversial postulates used in the literature is explained. The proposed criterion is a departure from Laplace's (1825) rule and from the remaining standard criteria. Only in the special case of equal column dispersion do both Laplace's rule and the proposed weights lead to the same solution.
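One concrete reading of "the best outcome from the action will not occur" is to score each action by the average of its payoffs with its single best outcome removed, then rank. This is an illustrative sketch of moderate pessimism, sitting between Laplace's equal-weight rule and maximin; it is not the paper's actual weight system.

```python
def rank_actions(payoffs):
    """Rank actions (rows of a payoff matrix) under moderate pessimism:
    drop each row's best outcome, average the rest, sort descending.
    Returns row indices from best to worst."""
    def score(row):
        rest = sorted(row)[:-1]  # discard the single best outcome
        return sum(rest) / len(rest)
    return sorted(range(len(payoffs)), key=lambda i: score(payoffs[i]), reverse=True)
```

On the payoff matrix [[10, 0, 0], [4, 4, 4]], Laplace's rule ranks the gamble first (mean 10/3 > 4? no, here it ranks the sure thing first too), while a maximax agent would take the gamble; the moderately pessimistic rule firmly prefers the sure 4 because the gamble's best outcome is assumed away.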
