Similar Literature
14 similar records found (search time: 15 ms)
1.
Disconfirmation has been widely used in a number of research traditions; however, there are many different operationalizations of this construct. Little research has investigated the relative effectiveness of these various methods. The research reported here examines five operationalizations of disconfirmation and their effect on satisfaction. These tests are carried out using two different comparison standards in two different settings. The results indicate that some methods are better in certain situations and inappropriate in others. Implications for both practical and theoretical research are discussed.
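As a minimal sketch of what "operationalizing disconfirmation" can mean (not taken from the article; the scale values are invented), the difference-score approach subtracts a comparison standard from perceived performance and labels the sign:

```python
# Illustrative sketch (not from the article): the difference-score
# operationalization of disconfirmation and the usual three-way label.

def subtractive_disconfirmation(expectation, performance):
    """Difference-score operationalization: performance minus expectation."""
    return performance - expectation

def classify_disconfirmation(score):
    """Map a disconfirmation score to the conventional three-way label."""
    if score > 0:
        return "positive"   # performance exceeded the comparison standard
    if score < 0:
        return "negative"   # performance fell short
    return "confirmation"   # the comparison standard was exactly met

# Example: a respondent expected a 5 (on a 7-point scale) but rated
# actual performance a 6.
score = subtractive_disconfirmation(expectation=5, performance=6)
print(score, classify_disconfirmation(score))  # 1 positive
```

Other operationalizations the literature uses (e.g., a single direct "better/worse than expected" rating) would replace the subtraction with a direct measurement.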

2.
Category‐management models serve to assist in the development of plans for pricing and promotions of individual brands. Techniques to solve the models can have problems of accuracy and interpretability because they are susceptible to spurious regression problems due to nonstationary time‐series data. Improperly stated nonstationary systems can reduce the accuracy of the forecasts and undermine the interpretation of the results. This is problematic because recent studies indicate that sales are often nonstationary time series. Newly developed correction techniques can account for nonstationarity by incorporating error‐correction terms into the model when using a Bayesian Vector Error‐Correction Model. The benefit of using such a technique is that shocks to control variates can be separated into permanent and temporary effects, and cointegrated series can be analyzed jointly. Analysis of a brand data set indicates that this is important even at the brand level. Thus, additional information is generated that allows a decision maker to examine controllable variables in terms of whether they influence sales over a short or long duration. Only products that are nonstationary in sales volume can be manipulated for long‐term profit gain, and promotions must be cointegrated with brand sales volume. The brand data set is used to explore the capabilities and interpretation of cointegration.
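The error-correction idea behind the abstract's VECM can be sketched in the simpler Engle–Granger form (this is an illustration on simulated data, not the article's Bayesian model): regress the two nonstationary levels on each other, take the residual as the error-correction term, and check that it pulls short-run changes back toward the long-run relation:

```python
import numpy as np

# Illustrative sketch (simulated data, not the article's Bayesian VECM):
# the Engle-Granger construction of an error-correction term for two
# cointegrated, individually nonstationary series.

rng = np.random.default_rng(0)
n = 200
promo = np.cumsum(rng.normal(size=n))       # nonstationary driver (random walk)
sales = 2.0 * promo + rng.normal(size=n)    # cointegrated with promo

# Step 1: long-run (cointegrating) regression: sales_t = a + b*promo_t + e_t
X = np.column_stack([np.ones(n), promo])
beta = np.linalg.lstsq(X, sales, rcond=None)[0]
ect = sales - X @ beta                      # error-correction term (residual)

# Step 2: the short-run equation uses the lagged ECT; a negative
# coefficient means deviations from equilibrium are corrected over time.
d_sales = np.diff(sales)
Z = np.column_stack([np.ones(n - 1), ect[:-1]])
gamma = np.linalg.lstsq(Z, d_sales, rcond=None)[0]
print(round(beta[1], 2), gamma[1] < 0)
```

In the abstract's terms, a promotion shock enters both the long-run relation (permanent component) and the ECT-driven adjustment (temporary component).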

3.
4.
Econometric methods used in foreign exchange rate forecasting have produced inferior out-of-sample results compared to a random walk model. Applications of neural networks have shown mixed findings. In this paper, we investigate the potential of neural network models by employing two cross-validation schemes. The effects of different in-sample time periods and sample sizes are examined. Out-of-sample performance evaluated with four criteria across three forecasting horizons shows that neural networks are a more robust forecasting method than the random walk model. Moreover, neural network predictions are quite accurate even when the sample size is relatively small.
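The out-of-sample comparison the abstract describes can be sketched with a rolling-origin scheme (synthetic data; a linear AR(1) stands in for the paper's neural network, since the evaluation design is the point):

```python
import numpy as np

# Illustrative sketch (numpy only, synthetic data): rolling-origin
# out-of-sample evaluation against a random-walk benchmark. An AR(1)
# model stands in for the paper's neural network.

rng = np.random.default_rng(1)
n = 300
x = np.zeros(n)
for t in range(1, n):                       # mean-reverting series, so the
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.1)  # random walk is beatable

def rolling_origin_rmse(series, forecast, min_train=100):
    """Refit on an expanding window; forecast one step ahead each time."""
    errors = []
    for t in range(min_train, len(series) - 1):
        errors.append(series[t + 1] - forecast(series[: t + 1]))
    return float(np.sqrt(np.mean(np.square(errors))))

def random_walk(history):
    return history[-1]                      # naive no-change forecast

def ar1(history):
    y, y_lag = history[1:], history[:-1]
    phi = float(y_lag @ y / (y_lag @ y_lag))
    return phi * history[-1]

rw_rmse = rolling_origin_rmse(x, random_walk)
ar_rmse = rolling_origin_rmse(x, ar1)
print(round(rw_rmse, 4), round(ar_rmse, 4))
```

Whether a model "beats the random walk" then reduces to comparing these out-of-sample error statistics across horizons and sample sizes.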

5.
This paper provides a comparative study of machine learning techniques for two-group discrimination. Simulated data is used to examine how the different learning techniques perform with respect to certain data distribution characteristics. Both linear and nonlinear discrimination methods are considered. The data has been previously used in the comparative evaluation of a number of techniques and helps relate our findings across a range of discrimination techniques.

6.
Analyzing scanner data in brand management activities presents unique difficulties due to the vast quantity of the data. Time series methods that are able to handle the volume effectively are often inappropriate due to violations of many statistical assumptions in the data. We examine scanner data sets for three brand categories and examine properties associated with many time series forecasting methods. Many violations are found with respect to linearity, normality, autocorrelation, and heteroscedasticity. With this in mind, we compare the forecasting ability of neural networks, which require no such assumptions, to two of the more robust time series techniques. Neural networks provide similar forecasts to Bayesian vector autoregression (BVAR), and both outperform generalized autoregressive conditional heteroscedasticity (GARCH) models.
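One of the assumption checks the abstract mentions, residual autocorrelation, is commonly tested with the Durbin–Watson statistic; a minimal sketch on simulated errors (not the article's scanner data):

```python
import numpy as np

# Illustrative sketch (simulated errors, not the article's data): the
# Durbin-Watson statistic for first-order residual autocorrelation.
# Values near 2 indicate no autocorrelation; values well below 2
# indicate positive autocorrelation, violating a standard assumption.

def durbin_watson(residuals):
    """DW = sum of squared successive differences / sum of squares."""
    diff = np.diff(residuals)
    return float(np.sum(diff ** 2) / np.sum(residuals ** 2))

rng = np.random.default_rng(2)
n = 500
e = np.zeros(n)
for t in range(1, n):                   # strongly autocorrelated errors
    e[t] = 0.7 * e[t - 1] + rng.normal()

dw_auto = durbin_watson(e)              # far below 2
dw_white = durbin_watson(rng.normal(size=n))  # close to 2
print(round(dw_auto, 2), round(dw_white, 2))
```

Analogous checks (e.g., normality and heteroscedasticity tests on residuals) would flag the other violations listed in the abstract.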

7.
This paper improves on the traditional GA-based artificial neural network by dividing the training set into two parts: after training on the first part, the resulting network is further trained on the second part to obtain a more optimized network structure. To address the difficulty of determining input nodes when modeling complex systems, the approach is combined with self-organization theory: the GMDH method is first used to obtain the initial nodes of the neural network, and the trained neural network model is then used for prediction. Finally, the resulting forecasting model is applied to national grain yield prediction, with satisfactory results.
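The two-stage training idea can be sketched as follows (an illustration with invented names and a linear model trained by gradient descent standing in for the abstract's GA-tuned neural network): fit on the first part of the training set, then continue from the learned weights on the second part:

```python
import numpy as np

# Illustrative sketch: two-stage training. Stage 1 fits on the first
# half of the training data; stage 2 continues from the same weights on
# the second half. A linear model trained by gradient descent stands in
# for the GA-tuned neural network described in the abstract.

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)

def train(X, y, w, lr=0.05, epochs=200):
    """Plain batch gradient descent on squared error, starting from w."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(3)
w = train(X[:100], y[:100], w)      # stage 1: first part of training set
w = train(X[100:], y[100:], w)      # stage 2: refine on the second part
mse = float(np.mean((X @ w - y) ** 2))
print(round(mse, 4))
```

The GMDH step in the abstract would determine which input columns feed the model before this two-stage fitting begins.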

8.
Intrusion detection systems help network administrators prepare for and deal with network security attacks. These systems collect information from a variety of systems and network sources, and analyze them for signs of intrusion and misuse. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to new data mining approaches. In this study the performance of three data mining methods in detecting network intrusion is examined. An experimental design (3×2×2) is created to evaluate the impact of three data mining methods, two data representation formats, and two data proportion schemes on the classification accuracy of intrusion detection systems. The results indicate that data mining methods and data proportion have a significant impact on classification accuracy. Within data mining methods, rough sets provide better accuracy, followed by neural networks and inductive learning. Balanced data proportion performs better than unbalanced data proportion. There are no major differences in performance between binary and integer data representation.
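The 3×2×2 factorial design described above can be enumerated directly (the factor labels are taken from the abstract; running each cell and recording accuracy would be the experiment itself):

```python
from itertools import product

# Illustrative sketch: enumerating the 3x2x2 factorial design from the
# abstract. Each cell pairs one data mining method with one data
# representation and one class-proportion scheme; every cell would then
# be run and its classification accuracy recorded.

methods = ["rough sets", "neural network", "inductive learning"]
representations = ["binary", "integer"]
proportions = ["balanced", "unbalanced"]

cells = list(product(methods, representations, proportions))
print(len(cells))          # 12 experimental cells
for method, rep, prop in cells[:2]:
    print(method, rep, prop)
```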

9.
Choice models and neural networks are two approaches used in modeling selection decisions. Defining model performance as the out‐of‐sample prediction power of a model, we test two hypotheses: (i) choice models and neural network models are equal in performance, and (ii) hybrid models consisting of a combination of choice and neural network models perform better than each stand‐alone model. We perform statistical tests for two classes of linear and nonlinear hybrid models and compute the empirical integrated rank (EIR) indices to compare the overall performances of the models. We test the above hypotheses by using data for various brand and store choices for three consumer products. Extensive jackknifing and out‐of‐sample tests for four different model specifications are applied to increase the external validity of the results. Our results show that using neural networks has a higher probability of resulting in a better performance. Our findings also indicate that hybrid models outperform stand‐alone models, in that using hybrid models guarantees overall results equal to or better than those of the two stand‐alone models. The improvement is particularly significant in cases where neither of the two stand‐alone models is very accurate in prediction, indicating that the proposed hybrid models may capture aspects of predictive accuracy that neither stand‐alone model is capable of on its own. Our results are particularly important in brand management and customer relationship management, indicating that multiple technologies and mixtures of technologies may yield more accurate and reliable outcomes than individual ones.
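A linear hybrid of the kind described can be sketched as a convex blend of the two models' predicted probabilities, with the mixing weight selected on held-out data (simulated probabilities stand in here for a fitted choice model and a fitted neural network):

```python
import numpy as np

# Illustrative sketch (simulated probabilities, not the article's models):
# a linear hybrid blends two models' choice probabilities with a mixing
# weight chosen to minimize log loss. Because the weights 0 and 1 are in
# the search grid, the hybrid can never do worse than either stand-alone
# model on the selection set.

def log_loss(p, y):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=500)            # observed binary choices
p_choice = np.clip(0.2 + 0.6 * y + rng.normal(scale=0.15, size=500), 0.01, 0.99)
p_nn = np.clip(0.3 + 0.4 * y + rng.normal(scale=0.15, size=500), 0.01, 0.99)

weights = np.linspace(0, 1, 21)
losses = [log_loss(w * p_choice + (1 - w) * p_nn, y) for w in weights]
best_w = float(weights[int(np.argmin(losses))])
hybrid_loss = min(losses)
print(round(best_w, 2), round(hybrid_loss, 3))
```

This mirrors the abstract's observation that hybrids guarantee results at least as good as either stand-alone model; the nonlinear hybrids it mentions would replace the convex blend with a learned combining function.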

10.
In this paper we explore strategic decision making in new technology adoption by using economic analysis. We show how asymmetric information affects firms' decisions to adopt the technology. We do so in a two‐stage game‐theoretic model where the first‐stage investment results in the acquisition of a new technology that, in the second stage, may give the firm a competitive advantage in the product market. We compare two information structures under which two competing firms have asymmetric information about the future performance (i.e., postadoption costs) of the new technology. We find that equilibrium strategies under asymmetric information are quite different from those under symmetric information. Information asymmetry leads to different incentives and strategic behaviors in the technology adoption game. In contrast to conventional wisdom, our model shows that market uncertainty may actually induce firms to act more aggressively under certain conditions. We also show that having better information is not always a good thing. These results illustrate a key departure from established decision theory.

11.
This paper develops a model that can be used as a decision support aid, helping manufacturers make profitable decisions in upgrading the features of a family of high‐technology products over its life cycle. The model integrates various organizations in the enterprise: product design, marketing, manufacturing, production planning, and supply chain management. Customer demand is assumed random and this uncertainty is addressed using scenario analysis. A branch‐and‐price (B&P) solution approach is devised to optimize the stochastic problem effectively. Sets of random instances are generated to evaluate the effectiveness of our solution approach in comparison with that of commercial software on the basis of run time. Computational results indicate that our approach outperforms commercial software on all of our test problems and is capable of solving practical problems in reasonable run time. We present several examples to demonstrate how managers can use our models to answer "what if" questions.

12.
As key components of Davis's technology acceptance model (TAM), the perceived usefulness and perceived ease-of-use instruments are widely accepted among the MIS research community as tools for evaluating information system applications and predicting usage. Despite this wide acceptance, a series of incremental cross-validation studies have produced conflicting and equivocal results that do not provide guidance for researchers or practitioners who might use the TAM for decision making. Using a sample of 902 "initial exposure" responses, this research conducts: (1) a confirmatory factor analysis to assess the validity and reliability of the original instruments proposed by Davis, and (2) a multigroup invariance analysis to assess the equivalence of these instruments across subgroups based on type of application, experience with computing, and gender. In contrast to the mixed results of prior cross-validation efforts, the results of this confirmatory study provide strong support for the validity and reliability of Davis's six-item perceived usefulness and six-item ease-of-use instruments. The multigroup invariance analysis suggests the usefulness and ease-of-use instruments have invariant true scores across most, but not all, subgroups. With notable exceptions for word processing applications and users with no prior computing experience, this research provides evidence that the item-factor loadings (true scores) are invariant across spreadsheet, database, and graphics applications. The implications of the results for managerial decision making are discussed.
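A full confirmatory factor analysis needs dedicated SEM software, but the reliability side of such an assessment can be sketched with Cronbach's alpha on a six-item scale (simulated responses, not the article's 902-respondent sample):

```python
import numpy as np

# Illustrative sketch (simulated data, not the article's analysis):
# Cronbach's alpha, a simpler reliability check than the confirmatory
# factor analysis the abstract reports, applied to six-item responses
# driven by a single latent factor, mimicking a scale like perceived
# usefulness.

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1 - item_vars / total_var))

rng = np.random.default_rng(5)
latent = rng.normal(size=(300, 1))                      # one underlying factor
items = latent + rng.normal(scale=0.5, size=(300, 6))   # six noisy indicators
print(round(cronbach_alpha(items), 2))
```

High alpha is consistent with, but weaker than, the CFA evidence in the abstract: it checks internal consistency, not factor structure or cross-group invariance.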

13.
Scholars from different disciplines acknowledge the importance of studying new service development (NSD), which is considered a central process for sustaining a superior competitive advantage of service firms. Although extant literature provides several important insights into how NSD processes are structured and organized, there is much less evidence on what makes NSD processes successful, that is, capable of contributing to a firm's sales and profits. In other words, which decisions maximize the likelihood of developing successful new services? Drawing on the emerging "service‐dominant logic" paradigm, we address this question by developing an NSD framework with three main decisional nodes: market orientation, internal process organization, and external network. Using a qualitative comparative analysis technique, we discovered combinations of alternatives that maximize the likelihood of establishing a successful service innovation. Specifically, we tested our NSD framework in the context of hospitality services and found that successful NSD can be achieved through two sets of decisions. The first includes the presence of a proactive market orientation (PMO) and a formal top‐down innovation process, but the absence of a responsive market orientation. The second includes the presence of both responsive market orientation and PMO and an open innovation model. No single element was a sufficient condition for NSD success, though PMO was a necessary condition. Several implications for theory and decision‐making practice are discussed on the basis of our findings.
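The qualitative comparative analysis logic behind these findings can be sketched in its crisp-set form (the cases below are invented; only the condition names come from the abstract): a configuration's consistency is the share of cases exhibiting it that also exhibit the outcome:

```python
# Illustrative sketch (invented cases, not the article's data): crisp-set
# QCA consistency - the share of cases exhibiting a configuration that
# also exhibit the outcome. A configuration is a dict of condition values.

def consistency(cases, configuration, outcome="success"):
    matching = [c for c in cases
                if all(c[k] == v for k, v in configuration.items())]
    if not matching:
        return 0.0
    return sum(c[outcome] for c in matching) / len(matching)

cases = [
    {"proactive_mo": 1, "responsive_mo": 0, "formal_process": 1, "success": 1},
    {"proactive_mo": 1, "responsive_mo": 0, "formal_process": 1, "success": 1},
    {"proactive_mo": 1, "responsive_mo": 1, "formal_process": 0, "success": 1},
    {"proactive_mo": 0, "responsive_mo": 1, "formal_process": 0, "success": 0},
]

# First recipe from the abstract: PMO present, responsive market
# orientation absent, formal top-down process present.
recipe = {"proactive_mo": 1, "responsive_mo": 0, "formal_process": 1}
print(consistency(cases, recipe))  # 1.0
```

A configuration with consistency at or near 1.0 across the real case set is what the abstract reports as a decision set that "maximizes the likelihood" of NSD success.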

14.
This article explores the theoretical underpinnings of the dissonance framework in the online consumer satisfaction formation process. Specifically, we suggest that any discrepancy between pre‐ and post‐purchase service performance helps determine consumers' evaluations of online vendors. Drawing upon cognitive dissonance theory, a conceptual model is developed and tested in two different studies (a preliminary and a main study). Using data from 191 college students collected longitudinally, the preliminary study demonstrates the validity and reliability of the measures. Using a comparative analysis, the main study then tests our conceptual model as well as various competing models, including the expectation–confirmation model, with a sample of 292 online consumers. The results in both studies support our main prediction that the service encountered in different stages establishes dissonance. Specifically, we find that dissonance explains online consumers' satisfaction process to a substantial extent, as compared with disconfirmation under the same conditions in online retailing. This study provides an alternative yet substantive approach to expectation–confirmation theory, reflecting the overarching nature of online shopping.
