Similar Documents
20 similar documents found (search time: 0 ms)
1.
The perceived usefulness of information is an important construct for the design of management information systems. Yet an examination of existing measures of perceived usefulness shows that the instruments developed have been neither validated nor checked for reliability. In this paper, a new instrument for measuring two dimensions of perceived usefulness is developed. The results of an empirical study designed to test the reliability and construct validity of this instrument in a capital-budgeting setting are presented.
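Reliability checks of the kind this abstract calls for are often summarized with Cronbach's alpha. A minimal sketch with hypothetical response data (the paper's own instrument and chosen statistics may differ):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical responses from five subjects on a three-item usefulness scale
responses = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1], [3, 3, 4]]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.8 are conventionally read as adequate internal consistency for a research instrument.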

2.
Despite the increased application of cluster analysis in decision sciences, few attempts have been made to derive hypothesis-testing procedures for the evaluation of clustering solutions. In fact, the present paper shows that at least one such attempt failed to specify a meaningful sampling distribution for the test procedure. An alternative index based on the concept of point-biserial correlation is proposed as a possible recovery measure. The index is subsequently used to form the basis of a valid statistical test for the existence of cluster structure.
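A point-biserial recovery index of the general kind described correlates pairwise distances with a binary indicator of cluster co-membership. A sketch on toy data (the paper's exact index definition and test may differ):

```python
import numpy as np

def point_biserial_index(X, labels):
    """Correlate each pairwise distance with a 0/1 indicator of
    whether the two points fall in the same cluster."""
    n = len(X)
    dist, same = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dist.append(np.linalg.norm(X[i] - X[j]))
            same.append(1.0 if labels[i] == labels[j] else 0.0)
    return np.corrcoef(dist, same)[0, 1]

# Two well-separated clusters: within-cluster pairs have small
# distances, so the correlation should be strongly negative.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
r = point_biserial_index(X, [0, 0, 1, 1])
```

A value near -1 signals strong cluster structure; a value near 0 is what one would expect under no structure, which is what a test built on the index must formalize.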

3.
Certain business practices include legal but ethically questionable activities. Surveys intended to determine the nature and extent of such activities must employ questioning methods which mitigate the inherent threat of sensitive questions and account for social desirability effects. This study uses a national mail survey of chief executive officers (CEOs) of manufacturing firms to compare the performance of direct questioning, scenario, and randomized response methods for estimating the prevalence of several sensitive business practices. The direct questioning and scenario versions used self-reporting (individual-based) questions, as well as the CEO's perceptions of the extent to which others engage in questionable activities (other-based). In general, the estimates of the prevalence of selected questionable activities were lowest when individual-based direct questioning was used and highest when other-based (either direct questioning or scenario) methods were used. The individual-based scenario and randomized response estimates fell in between. Guidelines for using the three methods to elicit sensitive information are suggested.
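One classic randomized response design is Warner's: each respondent answers either the sensitive question (with probability p) or its complement (with probability 1 - p), so no individual answer is incriminating, yet prevalence is recoverable in aggregate. A sketch of the estimator (illustrative; the survey in the paper may use a different randomizing design):

```python
def warner_estimate(yes_fraction, p):
    """Warner randomized-response estimator of prevalence pi from the
    observed fraction of 'yes' answers: P(yes) = p*pi + (1-p)*(1-pi)."""
    if p == 0.5:
        raise ValueError("p = 0.5 carries no information about pi")
    return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

# If 40% answered "yes" under p = 0.7, the implied prevalence is 25%.
pi_hat = warner_estimate(0.40, 0.7)
```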

4.
Discriminant analysis is relevant to business decision making in a variety of contexts, such as when one decides to make or buy a specified component, fund a venture project, or hire a particular person. Potential applications in artificial intelligence, particularly in the area of pattern recognition, have further underscored the importance of the field. A recent innovation in discriminant analysis is provided by special linear programming (LP) models, which offer attractive alternatives to classical statistical approaches. The scope of application in which discriminant analysis can be advantageously employed is broadened by the flexibility to tailor parameters in the LP approaches to reflect diverse goals and by the power to explore the sensitivity of these parameters. In spite of the promise of the LP formulations, however, limitations to their effectiveness have been uncovered in certain settings. A recent advance involving a normalization construct removes some of the limitations but entails solving the LP model twice (to allow for different signs of a normalization constant) and does not yield equivalent solutions for different rotations of the problem data. This paper introduces a new model and a new class of normalizations that remedy both remaining limitations, making it possible to take advantage of the modeling capabilities of the LP formulations without the attendant shortcomings encountered by earlier investigations. Our development shows by empirical testing and illustrative analysis that the quality of solutions from LP discriminant approaches is more favorable (relative to the classical model) than previously supposed.

5.
This paper presents point and interval estimators of both long-run and single-period target quantities in a simple cost-volume-profit (C-V-P) model. This model is a stochastic version of the "accountant's break-even chart" where the major component is a semivariable cost function. Although these features suggest obvious possibilities for practical application, a major purpose of this paper is to examine the statistical properties of target quantity estimators in C-V-P analysis. It is shown that point estimators of target quantity are biased and possess no moments of positive order, but are consistent. These properties are also shared by previous break-even models, even when all parameters are assumed known with certainty. After a test for positive variable margins, Fieller's [6] method is used to obtain interval estimators of relevant target quantities. This procedure therefore minimizes possible ambiguities in stochastic break-even analysis (noted by Ekern [3]).
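The bias arises because the break-even target quantity is a ratio of estimated parameters. A minimal numeric sketch with hypothetical cost figures (not the paper's data or its Fieller intervals):

```python
import numpy as np

def break_even_quantity(fixed_cost, unit_price, unit_variable_cost):
    """Break-even quantity: fixed cost over unit contribution margin.
    Plugging in noisy parameter estimates yields a biased (though
    consistent) point estimator, as the abstract notes."""
    return fixed_cost / (unit_price - unit_variable_cost)

q_star = break_even_quantity(5000.0, 10.0, 6.0)  # 5000 / 4 units

# Simulate estimation noise in the parameters and average the ratio.
rng = np.random.default_rng(0)
f_hat = 5000.0 + rng.normal(0.0, 200.0, 20_000)
v_hat = 6.0 + rng.normal(0.0, 0.2, 20_000)
q_hat_mean = np.mean(f_hat / (10.0 - v_hat))
```

By Jensen's inequality the ratio of unbiased estimates tends to overstate the target quantity slightly, which is the bias the paper analyzes.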

6.
An approach to analyzing experimental data with multiple criteria is explained and demonstrated on data from a test of the effectiveness of two posters. As a supplement to traditional multivariate analysis of variance and covariance, the application of a step-down F test is advocated when an ordering of the criteria is meaningful, and an analysis of contrasts is recommended when such an ordering is not managerially relevant. The step-down procedure has the advantage of simultaneously testing an overall hypothesis and hypotheses on each criterion variable.

7.
The application of optimization techniques in digital simulation experiments is frequently complicated by the presence of large experimental error variances. Two of the more widely accepted design strategies for the resolution of this problem include the assignment of common pseudorandom number streams and the assignment of antithetic pseudorandom number streams to the experimental points. When considered separately, however, each of these variance-reduction procedures has rather restrictive limitations. This paper examines the simultaneous use of these two techniques as a variance-reduction strategy in response surface methodology (RSM) analysis of simulation models. A simulation of an inventory system is used to illustrate the application and benefits of this assignment procedure, as well as the basic components of an RSM analysis.
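The common-random-numbers idea is easy to demonstrate: when two simulated systems are driven by the same stream, their noise is positively correlated and cancels in the difference. A toy sketch (the two "system responses" are invented linear transforms, not the paper's inventory model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical responses of two system variants driven by uniform draws.
u1 = rng.random(n)
u2 = rng.random(n)
diff_independent = 2.5 * u2 - 2.0 * u1   # separate streams
diff_common = 2.5 * u1 - 2.0 * u1        # common stream for both systems

var_independent = diff_independent.var()
var_common = diff_common.var()
```

The variance of the estimated difference shrinks dramatically under common streams, which is exactly what makes the comparison of design points in an RSM study sharper.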

8.
This article presents an efficient way of dealing with adaptive expectations models—a way that makes use of all the information available in the data. The procedure is based on multiple-input transfer functions (MITFs): by calculating lead and lag cross correlations between innovations associated with the variables in the model, it is possible to determine which periods have the greatest effects on the dependent variable. If information about k periods ahead is required, fitted values for the expectation variables are used to generate k-period-ahead forecasts. These in turn can be used in the estimation of the transfer function equation, which not only contains the usual lagged variables but also allows for incorporation of lead-fitted values for the expectation variables. The MITF identification and estimation procedures used are based on the corner method. The method is contrasted with the Almon distributed-lag approach using a model relating stock market prices to interest rates and expected corporate profits.
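The lead/lag screening step can be illustrated with two hypothetical innovation series in which one leads the other by two periods (the series and lag are invented for illustration; the paper identifies lags via the corner method):

```python
import numpy as np

def cross_corr(x, y, lag):
    """Sample correlation of x[t] with y[t + lag]; a large value at a
    positive lag suggests x leads y by that many periods."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag >= 0:
        a, b = x[: len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[: len(y) + lag]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(7)
x = rng.normal(size=500)
y = np.roll(x, 2) + 0.1 * rng.normal(size=500)  # y lags x by two periods
```

Scanning `cross_corr(x, y, lag)` over a range of lags produces the kind of cross-correlation profile used to decide which leads and lags enter the transfer function.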

9.
A logit model approach for estimating the probability that a prospective juror will favor the defendant or the plaintiff, as a function of perceived individual juror characteristics, is described in the context of a case situation. The model, which has been programmed on a hand-held computer, is designed for implementation in courtroom settings to help defense attorneys evaluate and select jurors in order to minimize the likelihood of large jury awards. Empirical tests of the model are also described.
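A logit model of this kind maps coded juror characteristics through a logistic function to a probability. A minimal sketch with invented coefficients (the paper's estimated weights and variable codings are not given here):

```python
import math

def juror_plaintiff_prob(x, beta0, beta):
    """Logit model: P(favor plaintiff) = 1 / (1 + exp(-(b0 + b'x))).
    x holds coded juror characteristics; coefficients are hypothetical."""
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical juror profile and weights
p = juror_plaintiff_prob([1.0, 0.0, 2.0], beta0=-1.0, beta=[0.8, 0.5, 0.3])
```

Because the calculation is a handful of multiplications and one exponential, it is easy to see why the model could run on a hand-held computer in a courtroom.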

10.
The bootstrap method is used to compute the standard error of regression parameters when the data are non-Gaussian distributed. Simulation results with L1 and L2 norms for various degrees of "non-Gaussianness" are provided. The computationally efficient L2 norm, based on the bootstrap method, provides a good approximation to the L1 norm. The methodology is illustrated with daily security return data. The results show that decisions can be reversed when the ordinary least-squares estimate of standard errors is used with non-Gaussian data.
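A pairs bootstrap for regression standard errors resamples (x, y) observations with replacement and refits the model. A sketch under the L2 (least-squares) norm with heavy-tailed simulated errors (the paper's security-return data are not reproduced):

```python
import numpy as np

def bootstrap_se(X, y, n_boot=2000, seed=0):
    """Bootstrap standard errors of OLS coefficients by resampling
    (x, y) pairs; avoids relying on Gaussian error assumptions."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows with replacement
        coefs[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    return coefs.std(axis=0, ddof=1)

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)  # heavy-tailed errors
se = bootstrap_se(X, y)
```

Comparing these bootstrap standard errors with the textbook OLS formula is the kind of check that reveals the decision reversals the abstract reports.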

11.
This paper presents a minimum-cost methodology for determining a statistical sampling plan in substantive audit tests. In this model, the auditor specifies β, the risk of accepting an account balance as correct when it is not, according to audit evidence requirements. Using β as a constraint, the auditor then selects a sampling plan to optimize the trade-off between sampling costs and the costs of follow-up audit procedures. Tables to aid in this process and an illustration are provided.
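The role of β as a constraint can be illustrated with the simplest attribute-sampling case: acceptance number zero, where the balance is accepted only if the sample contains no errors. This is a single-constraint sketch; the paper's cost-minimizing plan is richer than this.

```python
import math

def sample_size_for_beta(p_material, beta):
    """Smallest n, with acceptance number 0, such that the risk of
    accepting a balance with error rate p_material is at most beta:
    (1 - p)^n <= beta."""
    return math.ceil(math.log(beta) / math.log(1.0 - p_material))

# Hypothetical: 5% material error rate, beta risk capped at 10%.
n = sample_size_for_beta(0.05, 0.10)
```

Larger samples push the acceptance risk below β at higher sampling cost, which is precisely the trade-off the minimum-cost plan optimizes.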

12.
We look at a specific but pervasive problem in the use of secondary or published data in which the data are summarized in a histogram format, perhaps with additional mean or median information provided; two published sources yield histogram-type summaries involving the same variable, but the two sources do not group the values of the variable the same way; the researcher wishes to answer a question using information from both data streams; and the original, detailed data underlying the published summary, which could give a better answer to the question, are unavailable. We review relevant aspects of maximum-entropy (ME) estimation, and develop a heuristic for generating ME density estimates from data in histogram form when additional means and medians may be known. Application examples from several business and scientific areas illustrate the heuristic's use. Areas of application include business and social or market research, risk analysis, and individual risk profile analysis. Some instructional or classroom applications are possible as well.
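Before any mean or median constraints are imposed, the maximum-entropy density consistent with bin frequencies alone is piecewise uniform: constant within each bin. A sketch of that starting point with hypothetical bins (the paper's heuristic then adds moment constraints on top):

```python
import numpy as np

def me_piecewise_density(edges, counts):
    """ME density given only bin edges and counts: probability mass
    per bin spread uniformly over the bin's width."""
    counts = np.asarray(counts, dtype=float)
    probs = counts / counts.sum()     # bin probabilities
    widths = np.diff(edges)
    return probs / widths             # constant density height per bin

edges = [0.0, 10.0, 20.0, 40.0]       # note the unequal final bin
heights = me_piecewise_density(edges, [30, 50, 20])
```

Since the two published sources bin the variable differently, each source yields its own piecewise-uniform density on a common support, and the heuristic reconciles them.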

13.
The matched-pairs methodology is becoming increasingly popular as a means of controlling extraneous factors in business research. This paper develops discriminant procedures for matched data and examines the properties of these methods. Data from a recent study by Hunt [14] on the determinants of inventory method choice are used to contrast the performance of the different methods. While all of the methods yield the same set of discriminating variables, those procedures that allow for the dependence among observations within a pair provide greater classificatory power than traditional multivariate techniques.

14.
Application of the geometric mean to holding-period returns is discussed from a statistical theory standpoint. The population geometric mean is considered a parameter of the probability distribution of returns; its relationship to moments of the distribution is discussed. The sample geometric mean and its relation to sample moments is assessed through its sampling distribution; it is viewed as an estimator of the population geometric mean. For application to long-term investment where a geometric mean is maximized, the distributional properties of the geometric mean should be used. The terms statistic, approximation, and parameter are differentiated.
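The sample geometric mean of holding-period returns is the constant per-period return that compounds to the same terminal wealth. A small sketch showing why it differs from the arithmetic mean:

```python
import numpy as np

def geometric_mean_return(returns):
    """Geometric mean of holding-period returns: the n-th root of the
    product of (1 + r) growth factors, minus one."""
    growth = 1.0 + np.asarray(returns, dtype=float)
    return growth.prod() ** (1.0 / len(growth)) - 1.0

# +10% followed by -10% does not average to 0%: terminal wealth is
# 1.1 * 0.9 = 0.99, so the geometric mean return is slightly negative.
g = geometric_mean_return([0.10, -0.10])
```

This gap between the arithmetic and geometric means is why the abstract stresses using the geometric mean's own distributional properties for long-term investment questions.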

15.
In the previous paper, Cooley and Houck [1] examined the simultaneous use of common and antithetic random number streams as a variance-reduction strategy for simulation studies employing response surface methodology (RSM). Our paper supplements their work and further explores pseudorandom number assignments in response surface designs. Specifically, an alternative strategy for assigning pseudorandom numbers is proposed; this strategy is more efficient than that given by Cooley and Houck, especially when more than two factors are involved.

16.
Constrained utility maximization underlies much consumer behavior in economics. Opportunities for solving important problems are ever present. However, most potential applications remain potential because existing software packages are not able to estimate the systems of equations necessary to identify the utility function. At least three features often conspire to make these problems intractable: the size of the system of equations that must be estimated, the lack of any theory for imposing zero restrictions on many of the parameters of the nonadditive utility functions, and the necessity to ensure negative definiteness consistent with the axioms of consumer behavior. This paper develops an approach and illustrates the stepwise least squares estimator of a group of equations by application to the demand for food items.

17.
This paper presents a new linear model methodology for clustering judges with homogeneous decision policies and differentiating dimensions which distinguish judgment policies. This linear policy capturing model based on canonical correlation analysis is compared to the standard model based on regression analysis and hierarchical agglomerative clustering. Potential advantages of the new methodology include simultaneous instead of sequential consideration of information in the dependent and independent variable sets, decreased interpretational difficulty in the presence of multicollinearity and/or suppressor/moderator variables, and a more clearly defined solution structure allowing assessment of a judge's relationship to all of the derived, ideal policy types. An application to capturing policies of information systems recruiters responsible for hiring entry-level personnel is used to compare and contrast the two techniques.

18.
Forecasters typically select a statistical forecasting model from among a set of alternative models. Subsequently, forecasts are generated with the chosen model and reported to management (forecast consumers) as if specification uncertainty did not exist (i.e., as if the chosen model were the "true" model of the forecast variable). In this note, a well-known Bayesian model-comparison procedure is used to illustrate some of the ambiguities and distortions of forecasts that do not reflect specification uncertainty. It is shown that a single selected forecasting model (however chosen) will generally misstate measures of forecast risk and lead to point and interval forecasts that are misplaced from a decision-theoretic point of view.

19.
Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased, with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses robust regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust regression is used because it identifies and is resistant to the effects of outlying observations, key components in the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.
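Robust regression's resistance to outliers can be sketched with Huber-weighted iteratively reweighted least squares on hypothetical cost-performance data (the paper's robust estimator and data set may differ):

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Robust regression via iteratively reweighted least squares with
    Huber weights: observations with large residuals are down-weighted
    instead of dominating the fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS
    for _ in range(n_iter):
        resid = y - X @ beta
        # MAD-based robust scale estimate of the residuals
        scale = np.median(np.abs(resid - np.median(resid))) / 0.6745
        scale = max(scale, 1e-8)
        u = np.abs(resid) / scale
        w = np.where(u <= c, 1.0, c / u)          # Huber weight function
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Hypothetical cost-performance data with one outlying specification.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 2.0, 2.9, 4.1, 5.0, 30.0])    # last point is an outlier
X = np.column_stack([np.ones_like(x), x])
b_robust = huber_irls(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

The OLS slope is dragged up by the single outlier, while the robust fit stays near the trend of the other five points, flagging the outlying robot for closer inspection.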

20.
Small business loan applications have not been evaluated successfully by traditional methods. This paper explores the possibility of using three types of nonfinancial ratio variables (owner, firm, and loan characteristics) to predict whether a small business will pay off or default on its loan. The owner and loan variables were better predictors of loan success than the firm variables.
