Similar Articles (11 results)
1.
Intrusion detection systems help network administrators prepare for and deal with network security attacks. These systems collect information from a variety of systems and network sources, and analyze them for signs of intrusion and misuse. A variety of techniques have been employed for analysis ranging from traditional statistical methods to new data mining approaches. In this study the performance of three data mining methods in detecting network intrusion is examined. An experimental design (3×2×2) is created to evaluate the impact of three data mining methods, two data representation formats, and two data proportion schemes on the classification accuracy of intrusion detection systems. The results indicate that data mining methods and data proportion have a significant impact on classification accuracy. Within data mining methods, rough sets provide better accuracy, followed by neural networks and inductive learning. Balanced data proportion performs better than unbalanced data proportion. There are no major differences in performance between binary and integer data representation.
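As a rough illustration of how such a factorial evaluation can be organized, the sketch below loops over method, data representation, and class proportion on a synthetic dataset. The classifiers, the generated data, and the binarization rules are stand-ins chosen for the example; in particular, the study's rough-set method is not reproduced here and would need a dedicated implementation.

```python
# Illustrative sketch only: a 3x2x2 factorial evaluation of classifiers on
# synthetic intrusion-like data. The classifiers and data are stand-ins; the
# study's rough-set method would require a dedicated library.
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def make_data(representation, proportion, rng=0):
    """Build a toy intrusion dataset for one cell of the design."""
    weights = [0.5, 0.5] if proportion == "balanced" else [0.9, 0.1]
    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=weights, random_state=rng)
    if representation == "binary":
        X = (X > 0).astype(int)          # binary-coded features
    else:
        X = np.round(X).astype(int)      # integer-coded features
    return train_test_split(X, y, test_size=0.3, random_state=rng)

methods = {
    "neural_network": MLPClassifier(max_iter=500, random_state=0),
    "inductive_learning": DecisionTreeClassifier(random_state=0),
    # "rough_sets": omitted; needs a dedicated rough-set classifier
}

for (name, clf), rep, prop in product(methods.items(),
                                      ["binary", "integer"],
                                      ["balanced", "unbalanced"]):
    X_tr, X_te, y_tr, y_te = make_data(rep, prop)
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name:18s} {rep:7s} {prop:10s} accuracy={acc:.3f}")
```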

2.
The design of responsive distributed database systems is a key concern for information systems managers. In high-bandwidth networks, latency and local processing are the most significant factors in query and update response time. Parallel processing can be used to minimize their effects, particularly if it is considered at design time. It is the judicious replication and placement of data within a network that enable parallelism to be used effectively. However, latency and parallel processing have largely been ignored in previous distributed database design approaches. We present a comprehensive approach to distributed database design that develops efficient combinations of data allocation and query processing strategies that take full advantage of parallelism. We use a genetic algorithm to enable the simultaneous optimization of data allocation and query processing strategies. We demonstrate that ignoring the effects of latency and parallelism at design time can result in the selection of unresponsive distributed database designs.
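The abstract does not give the encoding or cost model used by the genetic algorithm, so the following is only a minimal sketch of the idea: a chromosome assigns each data fragment to a site, and a placeholder remote-access cost stands in for the paper's response-time model, which would also encode query processing strategies, latency, and parallelism.

```python
# Minimal GA sketch for data allocation: each gene assigns one data fragment
# to a site. The cost function is a placeholder, not the paper's model.
import random

N_FRAGMENTS, N_SITES = 12, 4
random.seed(1)

# Hypothetical per-fragment access frequency from each site.
access = [[random.randint(0, 10) for _ in range(N_SITES)]
          for _ in range(N_FRAGMENTS)]

def cost(chrom):
    """Remote-access cost: traffic from every site that does not own the fragment."""
    return sum(freq
               for frag, owner in enumerate(chrom)
               for site, freq in enumerate(access[frag])
               if site != owner)

def mutate(chrom, rate=0.1):
    return [random.randrange(N_SITES) if random.random() < rate else g
            for g in chrom]

def crossover(a, b):
    cut = random.randrange(1, N_FRAGMENTS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_SITES) for _ in range(N_FRAGMENTS)]
       for _ in range(40)]
for _ in range(100):
    pop.sort(key=cost)
    parents = pop[:10]                       # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]
best = min(pop, key=cost)
print("best allocation:", best, "cost:", cost(best))
```

Truncation selection and single-point crossover keep the sketch short; the paper's algorithm would use operators suited to the joint allocation and query-strategy encoding.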

3.
DHL, an international air-express courier, has been operating in Hong Kong for many years. In 1998, the new international airport, located at a site considerably distant from the old one, opened in Hong Kong (HK). Other airport-related infrastructure facilities have also been developed or are being developed, resulting in major changes in transport structure as well as a shift in customer demand. In this paper a multiyear distribution network is designed for DHL(HK) using an integrated network design methodology, which consists of a macro model and a micro model. The macro model, a mixed 0–1 LP, determines in an aggregate manner the least-cost distribution network. The micro model, a simulation, evaluates the operational viability and efficacy of the network according to its service coverage and service reliability. We also illustrate how coverage and reliability can be improved via the integrated use of the two models. Extensive discussion of relevant planning and operational issues of an air-express courier is included. The methodology has been successfully implemented at DHL(HK). It has been used to design the network, to test strategic decisions, and to update the network.
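The macro model's formulation is not given in the abstract; as a hedged illustration of the kind of least-cost, mixed 0-1 network design it describes, the sketch below sets up a generic fixed-charge facility-location model with PuLP. Site names, demand zones, and all cost figures are invented.

```python
# Illustrative macro-model skeleton only: a generic fixed-charge facility-
# location style mixed 0-1 LP solved with PuLP. Sites, zones, and costs are
# invented; the actual DHL(HK) model is multiyear and courier-specific.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

sites = ["airport_hub", "kowloon_depot", "island_depot"]        # candidate facilities
zones = ["central", "tsim_sha_tsui", "kwun_tong", "tuen_mun"]   # demand zones
fixed = {"airport_hub": 900, "kowloon_depot": 400, "island_depot": 450}
ship = {(s, z): 10 + 5 * i + 3 * j
        for i, s in enumerate(sites) for j, z in enumerate(zones)}

prob = LpProblem("distribution_network", LpMinimize)
open_ = LpVariable.dicts("open", sites, cat=LpBinary)
serve = LpVariable.dicts("serve", list(ship), cat=LpBinary)

# Objective: fixed cost of opened facilities plus assignment (shipping) cost.
prob += (lpSum(fixed[s] * open_[s] for s in sites)
         + lpSum(ship[s, z] * serve[s, z] for s in sites for z in zones))

for z in zones:                                  # each zone served exactly once
    prob += lpSum(serve[s, z] for s in sites) == 1
for s in sites:
    for z in zones:                              # only open sites may serve
        prob += serve[s, z] <= open_[s]

prob.solve()
print({s: int(open_[s].value()) for s in sites})
```

In the paper's methodology, a solution such as this would then be passed to the micro (simulation) model to check service coverage and reliability.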

4.
Risk Analysis, 2018, 38(5): 978–990
Although relatively rare, surgical instrument retention inside a patient following central venous catheterization still presents a significant risk. The research presented here compared two approaches to help reduce retention risk: Bow-Tie Analysis and Systems-Theoretic Accident Model and Processes. Each method was undertaken separately and then the results of the two approaches were compared and combined. Both approaches produced beneficial results that added to existing domain knowledge, and a combination of the two methods was found to be beneficial. For example, the Bow-Tie Analysis gave an overview of which activities keep controls working and who is responsible for each control, and the Systems-Theoretic Accident Model and Processes revealed the safety constraints that were not enforced by the supervisor of the controlled process. Such two-way feedback between both methods is potentially helpful for improving patient safety. Further methodology ideas to minimize surgical instrument retention risks are also described.

5.
Customer service is a key component of a firm's value proposition and a fundamental driver of differentiation and competitive advantage in nearly every industry. Moreover, the relentless coevolution of service opportunities with novel and more powerful information technologies has made this area exciting for academic researchers who can contribute to shaping the design and management of future customer service systems. We engage in interdisciplinary research across information systems, marketing, and computer science in order to contribute to the service design and service management literature. Grounded in the design-science perspective, our study leverages marketing theory on the service-dominant logic and recent findings pertaining to the evolution of customer service systems. Our theorizing culminates with the articulation of four design principles. These design principles underlie the emerging class of customer service systems that, we believe, will enable firms to better compete in an environment characterized by an increase in customer centricity and in customers' ability to self-serve and dynamically assemble the components of solutions that fit their needs. In this environment, customers retain control over their transactional data, as well as the timing and mode of their interactions with firms, as they increasingly gravitate toward integrated complete customer solutions rather than single products or services. Guided by these design principles, we iterated through, and evaluated, two instantiations of the class of systems we propose, before outlining implications and directions for further cross-disciplinary scholarly research.

6.
Failure modes and effects analysis (FMEA) is a methodology for prioritizing actions to mitigate the effects of failures in products and processes. Although originally used by product designers, FMEA is currently more widely used in industry in Six Sigma quality improvement efforts. Two prominent criticisms of the traditional application of FMEA are that the risk priority number (RPN) used to rank failure modes is an invalid measure according to measurement theory, and that the RPN does not weight the three decision criteria used in FMEA. Various methods have been proposed to mitigate these concerns, including many using fuzzy logic. We develop a new ranking method in this article using a data-elicitation technique. Furthermore, we develop an efficient means of eliciting data to reduce the effort associated with the new method. Subsequently, we conduct an experimental study to evaluate the proposed method against the traditional method using RPN and against an approach using fuzzy logic.
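For context, the traditional RPN is the product of three 1-10 ratings (severity, occurrence, detection). The short example below illustrates the measurement-theory criticism the abstract mentions; it does not reproduce the paper's data-elicitation ranking method.

```python
# Worked illustration of the standard RPN and one measurement-theory concern:
# the ratings are ordinal, yet the RPN multiplies them as if they were ratio
# scaled, so very different risk profiles can collapse to the same number.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# A rare but catastrophic, moderately detectable failure ...
print(rpn(severity=10, occurrence=2, detection=5))   # 100
# ... scores the same as a frequent, minor one with the same detectability.
print(rpn(severity=2, occurrence=10, detection=5))   # 100

# Equal one-point rating changes also produce unequal RPN changes:
print(rpn(9, 9, 9) - rpn(9, 9, 8))   # 81
print(rpn(2, 2, 2) - rpn(2, 2, 1))   # 4
```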

7.
Model uncertainty is a primary source of uncertainty in the assessment of the performance of repositories for the disposal of nuclear wastes, due to the complexity of the system and the large spatial and temporal scales involved. This work considers multiple assumptions on the system behavior and corresponding alternative plausible modeling hypotheses. To characterize the uncertainty in the correctness of the different hypotheses, the opinions of different experts are treated probabilistically or, alternatively, by the belief and plausibility functions of the Dempster-Shafer theory. A comparison is made with reference to a flow model for the evaluation of the hydraulic head distributions present at a radioactive waste repository site. Three experts are assumed available for the evaluation of the uncertainties associated with the hydrogeological properties of the repository and the groundwater flow mechanisms.
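A minimal worked example of the belief and plausibility functions referred to above, with an invented mass assignment over three hypothetical flow-model hypotheses; in the study, the masses would come from the experts' elicited opinions.

```python
# Minimal worked example of belief and plausibility in Dempster-Shafer theory.
# The basic probability assignment (mass function) below is invented for
# illustration only.
hypotheses = frozenset({"H1", "H2", "H3"})   # e.g., alternative flow-model hypotheses
mass = {
    frozenset({"H1"}): 0.4,
    frozenset({"H2", "H3"}): 0.3,
    frozenset({"H1", "H2", "H3"}): 0.3,       # mass assigned to full ignorance
}

def belief(A):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    """Pl(A): total mass on sets consistent with A (non-empty intersection)."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"H1"})
print(belief(A), plausibility(A))   # 0.4 0.7 -> uncertainty interval [0.4, 0.7] for H1
```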

8.
Although distributed teams have been researched extensively in the information systems and decision science disciplines, a review of the literature suggests that the dominant focus has been on understanding the factors affecting performance at the team level. There has, however, been an increasing recognition that specific individuals within such teams are often critical to the team's performance. Consequently, existing knowledge about such teams may be enhanced by examining the factors that affect the performance of individual team members. This study attempts to address this need by identifying individuals who emerge as "stars" in globally distributed teams involved in knowledge work such as information systems development (ISD). Specifically, the study takes a knowledge-centered view in explaining which factors lead to "stardom" in such teams. Further, it adopts a social network approach consistent with the core principles of structural/relational analysis in developing and empirically validating the research model. Data from U.S.-Scandinavia self-managed "hybrid" teams engaged in systems development were used to deductively test the proposed model. The overall study has several implications for group decision making: (i) the study focuses on stars within distributed teams, who play an important role in shaping group decision making and emerge as a result of negotiated/consensual decision making within egalitarian teams; (ii) an examination of emergent stars from the team members' point of view reflects the collective acceptance and support dimension of decision-making contexts identified in prior literature; (iii) finally, the study suggests that the social network analysis technique using relational data can be a tool for democratic decision making within groups.
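As a hedged sketch of the relational-analysis side, the snippet below computes two common centrality scores from an invented advice-seeking network using networkx. The team, the ties, and the choice of centrality measures are assumptions for illustration; the study's model explains stardom through knowledge-centered factors rather than centrality alone.

```python
# Hedged sketch: scoring members of a distributed team from relational
# (who-seeks-advice-from-whom) data. The graph and measures are illustrative.
import networkx as nx

# Hypothetical advice-seeking ties within a distributed ISD team.
edges = [("ana", "erik"), ("bo", "erik"), ("carla", "erik"),
         ("dave", "ana"), ("erik", "ana"), ("bo", "carla")]
g = nx.DiGraph(edges)

in_deg = nx.in_degree_centrality(g)       # how often others turn to a member
betw = nx.betweenness_centrality(g)       # brokerage across the network

for member in sorted(g, key=in_deg.get, reverse=True):
    print(f"{member:6s} in-degree={in_deg[member]:.2f} "
          f"betweenness={betw[member]:.2f}")
```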

9.
Drawing on the resource-based view, we propose a configurational perspective of how information technology (IT) assets and capabilities affect firm performance. Our premise is that IT assets and IT managerial capabilities are components in organizational design, and as such, their impact can only be understood by taking into consideration the interactions between those IT assets and capabilities and other non-IT components. We develop and test a model that assesses the impact of explicit and tacit IT resources by examining their interactions with two non-IT resources (open communication and business work practices). Our analysis of data collected from a sample of firms in the third-party logistics industry supports the proposed configurational perspective, showing that IT resources can either enhance (complement) or suppress (by substituting for) the effects of non-IT resources on process performance. More specifically, we find evidence of complementarities between shared business-IT knowledge and business work practice, and between the scope of IT applications and an open communication culture, in affecting the performance of the customer-service process; but there is evidence of substitutability between shared knowledge and open communications. For decision making, our results reinforce the need to account for all dimensions of possible interaction between IT and non-IT resources when evaluating IT investments.
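Complementarity and substitution are typically tested through interaction terms. The sketch below shows one way such a moderated regression might look with statsmodels; the variable names and the simulated data are placeholders, not the study's measures.

```python
# Hedged sketch of testing complementarity vs. substitution via an interaction
# term: a positive product-term coefficient suggests the two resources
# complement each other, a negative one suggests substitution.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "shared_knowledge": rng.normal(size=n),       # placeholder IT resource
    "open_communication": rng.normal(size=n),     # placeholder non-IT resource
})
# Simulate a process-performance outcome with a negative (substitutive) interaction.
df["process_performance"] = (0.5 * df.shared_knowledge
                             + 0.4 * df.open_communication
                             - 0.3 * df.shared_knowledge * df.open_communication
                             + rng.normal(scale=0.5, size=n))

model = smf.ols("process_performance ~ shared_knowledge * open_communication",
                data=df).fit()
print(model.params)   # sign of the interaction term indicates (sub)complementarity
```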

10.
Over the past decade, organizations have made significant investments in enterprise resource planning (ERP) systems. The realization of benefits from these investments depends on supporting effective use of information technology (IT) and satisfying IT users. User satisfaction with information systems is one of the most important determinants of the success of those systems. Drawing upon a sample of 407 end users of ERP systems and working within the framework of confirmatory factor analysis (CFA), this study examines the structure and dimensionality, reliability, and validity of the end-user computing satisfaction (EUCS) instrument posited by Doll and Torkzadeh (1988). In response to Klenke's (1992) call to cross-validate management information system (MIS) instruments and to retest the end-user computing satisfaction instrument using new data, this study's results, consistent with previous findings, confirm that the EUCS instrument maintains its psychometric stability when applied to users of enterprise resource planning application software. Implications of these results for practice and research are provided.
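The CFA itself is usually run in dedicated structural equation modeling software, so the sketch below only illustrates a companion reliability check (Cronbach's alpha) of the kind typically reported alongside CFA results; the item responses are simulated, not the study's data.

```python
# Hedged illustration: Cronbach's alpha for one hypothetical EUCS sub-scale.
# The responses below are simulated; the full CFA is not reproduced here.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert-type responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(407, 1))                      # simulated satisfaction factor
responses = np.clip(np.round(3 + latent
                             + rng.normal(scale=0.7, size=(407, 4))),
                    1, 5)                               # four 5-point items
print(round(cronbach_alpha(responses), 3))
```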

11.
This study uses fully factorial computer simulation to identify referral network attributes and referral decision rules that streamline the routing of people to urgent, limited services. As an example scenario, the model represents vaccine delivery in a city of 100,000 people during the first 30 days of a pandemic. By modeling patterns of communication among health care providers and the daily routing of overflow clients to affiliated organizations, the simulations determine the cumulative effects of referral network designs and decision rules on citywide delivery of available vaccines. Referral networks generally improve delivery rates when compared with random local search by clients. Increasing the health care organizations' tendencies to form referral partnerships from zero to about four partners per organization sharply increases vaccine delivery under most conditions, but further increases in partnering yield little or no gain in system performance. When making referrals, probabilistic selection among partner organizations that have any capacity to deliver vaccines is more effective than selection of the highest-capacity partner, except when tendencies to form partnerships are very low. Implications for designing health and human service referral networks and helping practitioners optimize their use of the networks are discussed. Suggestions for using simulations to model comparable systems are provided.
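A highly simplified sketch of the overflow-referral mechanism described above: organizations with limited daily capacity either turn away excess clients or refer them probabilistically to partners with spare capacity. All sizes, capacities, and the partnership-generation rule are illustrative, not the study's calibrated parameters.

```python
# Highly simplified sketch of referral routing: with zero partners, overflow
# clients are simply turned away; with more partners, overflow is referred to
# a randomly chosen partner that still has capacity.
import random

random.seed(7)
N_ORGS, DAYS, DAILY_ARRIVALS, CAPACITY = 50, 30, 2000, 30

def simulate(partners_per_org):
    # Random referral partnerships with a fixed number of partners per organization.
    partners = {o: random.sample([p for p in range(N_ORGS) if p != o],
                                 partners_per_org)
                for o in range(N_ORGS)}
    delivered = 0
    for _ in range(DAYS):
        remaining = [CAPACITY] * N_ORGS
        for _ in range(DAILY_ARRIVALS):
            org = random.randrange(N_ORGS)           # client's first point of contact
            if remaining[org] == 0 and partners[org]:
                # Refer probabilistically among partners with any spare capacity.
                options = [p for p in partners[org] if remaining[p] > 0]
                if options:
                    org = random.choice(options)
            if remaining[org] > 0:
                remaining[org] -= 1
                delivered += 1
    return delivered / (DAYS * DAILY_ARRIVALS)

for degree in (0, 2, 4, 8):
    print(f"{degree} partners/org -> delivery rate {simulate(degree):.2f}")
```

Running the sketch shows the same qualitative pattern the abstract reports: delivery rates rise sharply as the number of partners grows from zero, then flatten out.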
