7 similar documents found (search time: 0 ms)
1.
In this paper, we present a comparative analysis of the forecasting accuracy of univariate and multivariate linear models that incorporate fundamental accounting variables (i.e., inventory, accounts receivable, and so on) against the forecasting accuracy of neural network models. Unique to this study is its focus on the multivariate models: we examine whether neural network models incorporating the fundamental accounting variables can generate more accurate forecasts of future earnings than models assuming a linear combination of these same variables. We investigate four types of models (univariate‐linear, multivariate‐linear, univariate‐neural network, and multivariate‐neural network) using a sample of 283 firms spanning 41 industries. This study shows that applying the neural network approach with fundamental accounting variables results in forecasts that are more accurate than those of linear forecasting models. The results also reveal limitations of the forecasting capacity of investors in the security market when compared with neural network models.
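The accuracy comparison this abstract describes reduces to computing a forecast-error metric for each model class and comparing them. A minimal sketch using mean absolute percentage error; the earnings values and both forecast series below are invented for illustration, not taken from the study:

```python
def mape(actual, forecast):
    """Mean absolute percentage error: a common yardstick for forecast accuracy."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical per-share earnings and two competing forecast series.
actual         = [1.20, 1.35, 1.10, 1.50]
linear_model   = [1.10, 1.40, 1.25, 1.42]
neural_network = [1.18, 1.33, 1.12, 1.47]

# A lower MAPE means a more accurate forecast.
print(mape(actual, linear_model))
print(mape(actual, neural_network))
```

With these invented numbers the neural-network series has the lower error, mirroring the direction of the study's finding.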
2.
Intrusion detection systems help network administrators prepare for and deal with network security attacks. These systems collect information from a variety of system and network sources and analyze it for signs of intrusion and misuse. A variety of techniques have been employed for this analysis, ranging from traditional statistical methods to newer data mining approaches. In this study, the performance of three data mining methods in detecting network intrusion is examined. An experimental design (3×2×2) is created to evaluate the impact of three data mining methods, two data representation formats, and two data proportion schemes on the classification accuracy of intrusion detection systems. The results indicate that the data mining method and the data proportion have a significant impact on classification accuracy. Among the data mining methods, rough sets provide the best accuracy, followed by neural networks and inductive learning. A balanced data proportion performs better than an unbalanced one. There are no major differences in performance between binary and integer data representation.
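The 3×2×2 design above yields twelve experimental cells, one per combination of method, representation format, and proportion scheme. A small sketch enumerating them; the factor labels come from the abstract, and the enumeration is the standard full-factorial cross:

```python
from itertools import product

methods = ["rough sets", "neural network", "inductive learning"]
representations = ["binary", "integer"]
proportions = ["balanced", "unbalanced"]

# Every cell of the full-factorial 3 x 2 x 2 design.
cells = list(product(methods, representations, proportions))
print(len(cells))  # 12
```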
3.
Choice models and neural networks are two approaches used in modeling selection decisions. Defining model performance as the out‐of‐sample prediction power of a model, we test two hypotheses: (i) choice models and neural network models are equal in performance, and (ii) hybrid models consisting of a combination of choice and neural network models perform better than each stand‐alone model. We perform statistical tests for two classes of linear and nonlinear hybrid models and compute the empirical integrated rank (EIR) indices to compare the overall performances of the models. We test the above hypotheses using data on various brand and store choices for three consumer products. Extensive jackknifing and out‐of‐sample tests for four different model specifications are applied to increase the external validity of the results. Our results show that using neural networks has a higher probability of yielding better performance. Our findings also indicate that hybrid models outperform stand‐alone models: using hybrid models guarantees overall results equal to or better than those of the two stand‐alone models. The improvement is particularly significant in cases where neither of the two stand‐alone models is very accurate in prediction, indicating that the proposed hybrid models may capture aspects of predictive accuracy that neither stand‐alone model can on its own. Our results are particularly important for brand management and customer relationship management, indicating that multiple technologies, and mixtures of technologies, may yield more accurate and reliable outcomes than individual ones.
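A linear hybrid of the kind tested above can be sketched as a convex combination of the two models' predicted choice probabilities. The probabilities and the mixing weight below are hypothetical illustrations, not values from the paper:

```python
def linear_hybrid(p_choice, p_nn, w=0.5):
    """Convex combination of the two models' predicted choice probabilities.
    The mixing weight w is a hypothetical parameter, not from the paper."""
    return [w * a + (1 - w) * b for a, b in zip(p_choice, p_nn)]

# Hypothetical brand-choice probabilities from each stand-alone model.
choice_model = [0.20, 0.30, 0.50]
neural_net   = [0.40, 0.30, 0.30]

probs = linear_hybrid(choice_model, neural_net)
print(probs)  # still a valid probability distribution
```

Because the blend is convex, the hybrid's probabilities remain between the two stand-alone predictions and still sum to one, which is one reason such hybrids cannot do worse than the better-weighted component on a given observation.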
4.
This paper develops a model that can be used as a decision support aid, helping manufacturers make profitable decisions in upgrading the features of a family of high‐technology products over its life cycle. The model integrates various organizations in the enterprise: product design, marketing, manufacturing, production planning, and supply chain management. Customer demand is assumed random and this uncertainty is addressed using scenario analysis. A branch‐and‐price (B&P) solution approach is devised to optimize the stochastic problem effectively. Sets of random instances are generated to evaluate the effectiveness of our solution approach in comparison with that of commercial software on the basis of run time. Computational results indicate that our approach outperforms commercial software on all of our test problems and is capable of solving practical problems in reasonable run time. We present several examples to demonstrate how managers can use our models to answer "what if" questions.
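Scenario analysis for random demand, as used above, amounts to weighting each demand outcome by its probability when evaluating a decision fixed in advance. A minimal sketch with invented numbers; the capacity, price, cost, and scenario set are all assumptions for illustration, not from the paper:

```python
# Hypothetical demand scenarios as (probability, demand) pairs.
scenarios = [(0.5, 100), (0.3, 140), (0.2, 60)]
capacity, price, unit_cost = 120, 10.0, 6.0

def expected_profit(scenarios, capacity, price, unit_cost):
    """Production is fixed before demand is revealed; unmet demand is lost,
    and the full capacity cost is paid in every scenario."""
    return sum(p * (price * min(d, capacity) - unit_cost * capacity)
               for p, d in scenarios)

print(expected_profit(scenarios, capacity, price, unit_cost))
```

A manager's "what if" question then becomes a re-evaluation of this expectation under a changed capacity or scenario set; the paper's B&P approach optimizes far richer versions of this trade-off.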
5.
As key components of Davis's technology acceptance model (TAM), the perceived usefulness and perceived ease-of-use instruments are widely accepted among the MIS research community as tools for evaluating information system applications and predicting usage. Despite this wide acceptance, a series of incremental cross-validation studies have produced conflicting and equivocal results that do not provide guidance for researchers or practitioners who might use the TAM for decision making. Using a sample of 902 "initial exposure" responses, this research conducts: (1) a confirmatory factor analysis to assess the validity and reliability of the original instruments proposed by Davis, and (2) a multigroup invariance analysis to assess the equivalence of these instruments across subgroups based on type of application, experience with computing, and gender. In contrast to the mixed results of prior cross-validation efforts, the results of this confirmatory study provide strong support for the validity and reliability of Davis's six-item perceived usefulness and six-item ease-of-use instruments. The multigroup invariance analysis suggests the usefulness and ease-of-use instruments have invariant true scores across most, but not all, subgroups. With notable exceptions for word processing applications and users with no prior computing experience, this research provides evidence that the item-factor loadings (true scores) are invariant across spreadsheet, database, and graphics applications. The implications of the results for managerial decision making are discussed.
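The reliability side of such an instrument validation can be illustrated with Cronbach's alpha for a six-item scale. The responses below are invented, and the formula is the standard internal-consistency estimate rather than the paper's confirmatory factor model:

```python
import statistics as st

# Hypothetical responses to a six-item usefulness scale (rows = respondents).
items = [
    [5, 4, 5, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [6, 5, 6, 5, 6, 6],
    [4, 4, 4, 3, 4, 4],
]

k = len(items[0])
item_variances = [st.variance(col) for col in zip(*items)]
total_scores = [sum(row) for row in items]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
alpha = (k / (k - 1)) * (1 - sum(item_variances) / st.variance(total_scores))
print(round(alpha, 3))
```

Values near 1 indicate the items move together, i.e., the scale is internally consistent; confirmatory factor analysis goes further and tests the hypothesized one-factor structure itself.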
6.
Scholars from different disciplines acknowledge the importance of studying new service development (NSD), which is considered a central process for sustaining a superior competitive advantage of service firms. Although the extant literature provides several important insights into how NSD processes are structured and organized, there is much less evidence on what makes NSD processes successful, that is, capable of contributing to a firm's sales and profits. In other words, which decisions maximize the likelihood of developing successful new services? Drawing on the emerging "service‐dominant logic" paradigm, we address this question by developing an NSD framework with three main decisional nodes: market orientation, internal process organization, and external network. Using a qualitative comparative analysis technique, we discovered combinations of alternatives that maximize the likelihood of establishing a successful service innovation. Specifically, we tested our NSD framework in the context of hospitality services and found that successful NSD can be achieved through two sets of decisions. The first one includes the presence of a proactive market orientation (PMO) and a formal top‐down innovative process, but the absence of a responsive market orientation. The second one includes the presence of both a responsive market orientation and a PMO, together with an open innovation model. No single element was a sufficient condition for NSD success, though PMO was a necessary condition. Several implications for theory and decision‐making practice are discussed on the basis of our findings.
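The necessity finding reported above (PMO present in every successful configuration) can be sketched over a toy QCA-style truth table. The cases below are hypothetical, though the condition names follow the abstract:

```python
# Hypothetical crisp-set cases (1 = condition/outcome present), illustrative only.
cases = [
    {"PMO": 1, "RMO": 0, "formal_topdown": 1, "open_innovation": 0, "success": 1},
    {"PMO": 1, "RMO": 1, "formal_topdown": 0, "open_innovation": 1, "success": 1},
    {"PMO": 0, "RMO": 1, "formal_topdown": 1, "open_innovation": 0, "success": 0},
]

def necessary(condition, cases):
    """A condition is necessary when it appears in every successful case."""
    return all(c[condition] == 1 for c in cases if c["success"] == 1)

print(necessary("PMO", cases))             # True: present in all successful cases
print(necessary("formal_topdown", cases))  # False: absent in one successful case
```

Real QCA also computes consistency and coverage scores and minimizes the truth table, but the necessity check is exactly this subset relation between the outcome and the condition.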
7.
This article applies the concepts of alpha, beta, and gamma changes to test whether the implementation of a new office information system with networking capabilities changes the way organizational members conceptualize office work. The traditional approach (t-test) was used to measure alpha change and indicated little change in how effectively the respondents felt they performed eight generic office activities before implementation (T1) and nine months after implementation (T2). However, considerable change was detected between effectiveness reported at T1 and a retrospective assessment of T1 effectiveness reported at T2 (called "then" assessments). Strong change was also detected between "then" assessments and T2 effectiveness reported at T2, indicating beta change. Multiple hierarchical tests showed that most of the change was actually gamma change; the T2 and the "then" factor structures and covariances differed significantly. This study supports propositions that using computers to accomplish organizational work may be associated with different conceptualizations of work, which may create ambiguity and uncertainty if training and management policies do not respond appropriately. Finally, this study provides an expanded version of a prior solution to detecting alpha, beta, and gamma changes.
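The alpha-change test above is a paired comparison of the same respondents' ratings at T1 and T2. A minimal sketch of the paired t statistic on invented ratings; the rating scale, sample size, and values are all assumptions for illustration:

```python
import statistics as st
from math import sqrt

# Hypothetical effectiveness ratings (1-7 scale) for one office activity,
# from the same respondents before (T1) and after (T2) implementation.
t1 = [4, 5, 3, 4, 5, 4]
t2 = [4, 5, 4, 4, 5, 4]

# Paired t statistic on the within-respondent differences (alpha change).
diffs = [after - before for before, after in zip(t1, t2)]
t_stat = st.mean(diffs) / (st.stdev(diffs) / sqrt(len(diffs)))
print(t_stat)
```

A small t statistic like this one would be read as "little alpha change"; beta and gamma change require the "then" measurements and factor-structure comparisons the abstract describes, which a mean-difference test alone cannot detect.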