Similar Articles
1.
How should a global firm manage its network of R&D centres in an effective manner? Some have developed an Integrated Network model in which the R&D centres around the world are tightly integrated into a coherent whole. Others have adopted a Loosely-Coupled Network model in which individual “centres of excellence” are given considerable autonomy. In this paper we argue that the way individual R&D units are structured, and the way the entire network is managed, should be based not on administrative heritage, environmental turbulence or management style, but on the underlying characteristics of the firm’s knowledge assets. In particular we focus on the observability and mobility of the firm’s knowledge assets, the impact these factors have on the way individual R&D centres are structured, and how they relate to one another in the international network. The argument is supported using examples and data from Swedish firms including ABB, Alfa Laval and Ericsson.

2.
《Omega》2007,35(5):623-626
In this paper we study the scheduling problem in which each customer order consists of several jobs of different types, which are to be processed on m facilities. Each facility is dedicated to the processing of only one type of jobs. All jobs of an order have to be delivered to the customer at the same time. The objective is to schedule all the orders to minimize the total weighted order completion time. While the problem has been shown to be unary NP-hard, we develop a heuristic to tackle the problem and analyze its worst-case performance.
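To make the objective concrete, here is a small sketch with assumed example data; the WSPT-style dispatching rule is purely illustrative, not the heuristic analyzed in the paper. It evaluates the total weighted order completion time of a permutation schedule on dedicated facilities:

```python
# Hypothetical example: each order has one job per dedicated facility;
# an order completes when its last job completes. Orders are sequenced by
# a WSPT-style rule (total processing time / weight) -- an illustrative
# heuristic, not the one from the paper.

def total_weighted_completion(sequence, proc, weights):
    """proc[o][i] = processing time of order o's job on facility i."""
    m = len(next(iter(proc.values())))
    facility_time = [0] * m       # cumulative load on each dedicated facility
    total = 0
    for o in sequence:
        done = 0
        for i in range(m):
            facility_time[i] += proc[o][i]
            done = max(done, facility_time[i])  # order o finishes with its last job
        total += weights[o] * done
    return total

proc = {"A": [3, 1], "B": [1, 2], "C": [2, 2]}
weights = {"A": 2, "B": 1, "C": 3}
wspt_order = sorted(proc, key=lambda o: sum(proc[o]) / weights[o])
cost = total_weighted_completion(wspt_order, proc, weights)  # -> 22
```

Orders are sequenced identically on every facility (a permutation schedule); an order's completion time is the latest completion among its jobs.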

3.
This paper borrows the concept of self-threat from social psychology to predict the causes of customer complaints and customers' complaint intentions. First, on the basis of an extensive literature review, we summarize the current state of research on customer complaining and point out its shortcomings. Second, to address these shortcomings, we draw on the relevant self-concept to analyze in depth the theoretical causes of customer complaints and to explain how complaint intentions are formed, and we propose a conceptual model built around these causes and intentions: a complaint-choice model based on customers' perceived self-threat. We then validate and discuss the model, deriving some useful implications for understanding customer service perceptions and predicting complaints. Finally, we point out the limitations of this study and directions for future research.

4.
David K Banner 《Omega》1985,13(1):13-18
Two thorny problems have attracted the interest of work/leisure researchers in recent years: (1) the failure of previous research to clearly isolate the relationship between work and non-work from the effects of other confounding variables, and (2) a widespread failure to distinguish between the meanings that people attribute to work (and non-work) and the forms of work (and non-work) people perform. The argument for phenomenological research is made; in this way, empirically-grounded ‘common sense’ definitions of work and leisure could be created and these definitions could be used as a solid research base to test ‘spillover’ and ‘compensatory’ theories of the work/leisure relationship. The author then demonstrates, through the development of an analytic framework for viewing the work/leisure relationship, the fact that ‘compensatory’ and ‘spillover’ hypotheses are potential alternative modes of explanation. Unless the conditions under which each might apply can be specified, one or the other hypothesis can explain a given empirical relationship between the two variables. This further supports the need for solid phenomenological research.

5.
As an imperative channel for fast information propagation, online social networks (OSNs) also have their defects. One of them is information leakage, i.e., information could be spread via OSNs to users with whom we are not willing to share. Thus the problem of constructing a circle of trust, to share information with as many friends as possible without further spreading it to unwanted targets, has become a challenging research topic that remains open. Our work is the first attempt to study the Maximum Circle of Trust problem, which seeks to share information with the maximum expected number of the poster’s friends while keeping the spread to unwanted targets to a minimum. First, we consider a special and more practical case with two-hop information propagation and a single unwanted target. In this case, we show that this problem is NP-hard, which denies the existence of an exact polynomial-time algorithm (unless P = NP). We thus propose a Fully Polynomial-Time Approximation Scheme (FPTAS), which can achieve any allowable error bound while running in time polynomial in both the input size and the inverse of the allowed error. An FPTAS is the best approximation guarantee one can hope for on an NP-hard problem. We next consider the case in which the number of unwanted targets is bounded and prove that no FPTAS exists in this case. Instead, we design a Polynomial-Time Approximation Scheme (PTAS) in which the allowable error can also be controlled. When the number of unwanted targets is not bounded, we provide a randomized algorithm, along with a theoretical bound and an inapproximability result. Finally, we consider the general case with multi-hop information propagation, show its #P-hardness, and propose an effective Iterative Circle of Trust Detection (ICTD) algorithm based on a novel greedy function. Extensive experiments on various real-world OSNs validate the effectiveness of our proposed approximation and ICTD algorithms. These experiments also highlight several important observations on information leakage that can help sharpen the security of OSNs in the future.

6.
Paul A. Rubin 《Decision Sciences》1991,22(3):519-535
Linear programming discriminant analysis (LPDA) models are designed around a variety of objective functions, each representing a different measure of separation of the training samples by the resulting discriminant function. A separation failure is defined to be the selection of an “optimal” discriminant function which incompletely separates a pair of completely separable training samples. Occurrence of a separation failure suggests that the chosen discriminant function may have an unnecessarily low classification accuracy on the actual populations involved. In this paper, a number of the LPDA models proposed for the two-group case are examined to learn which are subject to separation failure. It appears that separation failure in any model can be avoided by applying the model twice, reversing group designations.

7.
Group testing is a well known search problem that consists in detecting the defective members of a set of objects O by performing tests on properly chosen subsets (pools) of the given set O. In classical group testing the goal is to find all defectives by using as few tests as possible. We consider a variant of classical group testing in which one is concerned not only with minimizing the total number of tests but also aims at reducing the number of tests involving defective elements. The rationale behind this search model is that in many practical applications the devices used for the tests are subject to deterioration due to exposure to or interaction with the defective elements. In this paper we consider adaptive, non-adaptive and two-stage group testing. For all three scenarios, we derive upper and lower bounds on the number of “yes” responses that must be admitted by any strategy performing at most a certain number t of tests. In particular, for the adaptive case we provide an algorithm that uses a number of “yes” responses that exceeds the given lower bound by a small constant. Interestingly, this bound can be asymptotically attained also by our two-stage algorithm, a phenomenon analogous to the one occurring in classical group testing. For the non-adaptive scenario we give almost matching upper and lower bounds on the number of “yes” responses. In particular, we give two constructions both achieving the same asymptotic bound; an interesting feature of one of them is that it is explicit. The bounds for the non-adaptive and two-stage cases follow from bounds on the optimal sizes of new variants of d-cover-free families and (p, d)-cover-free families introduced in this paper, which we believe may also be of interest in other contexts.
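As a minimal illustration of the quantity being bounded, the classical adaptive binary-splitting strategy below (a sketch, not the paper's algorithm) finds the defectives while counting how many tests return a "yes" response:

```python
# Adaptive binary splitting over a set of items; `yes` counts tests whose
# pool contains a defective -- the resource this model seeks to limit.

def find_defectives(items, defectives):
    tests = yes = 0
    found = []
    stack = [list(items)]
    while stack:
        pool = stack.pop()
        tests += 1
        positive = any(x in defectives for x in pool)
        yes += positive
        if not positive:
            continue                      # whole pool is clean: discard it
        if len(pool) == 1:
            found.append(pool[0])         # isolated a defective
        else:
            mid = len(pool) // 2
            stack.append(pool[:mid])      # split and test both halves
            stack.append(pool[mid:])
    return sorted(found), tests, yes

result = find_defectives(range(8), defectives={5})  # -> ([5], 7, 4)
```

Here 7 tests are performed in total, of which only 4 touch the defective item.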

8.
Data envelopment analysis (DEA) evaluates the relative efficiency of a set of comparable decision making units (DMUs) with multiple performance measures (inputs and outputs). Classical DEA models rely on the assumption that each DMU can improve its performance by increasing its current output levels and decreasing its current input levels. However, undesirable outputs (like wastes and pollutants) may often be produced together with desirable outputs, and these have to be minimized. On the other hand, in some real-world situations we may encounter specific performance measures with more than one value, measured under different standards. In this study, we refer to such measures as multi-valued measures, of which only one value should be selected. For instance, the unemployment rate is a multi-valued measure in economic applications since there are several definitions or standards for measuring it. As a result, selecting a suitable value for a multi-valued measure is a challenging issue and is crucial for the successful application of DEA. The aim of this study is to accommodate multi-valued measures in the presence of undesirable outputs. In doing so, we formulate two selecting directional distance models, one individual and one summative, and develop a pair of multiplier- and envelopment-based selecting approaches. Finally, we illustrate the applicability of the proposed method using real data on 183 NUTS 2 regions in 23 selected EU-28 countries.

9.
Let u and v be vertices of a graph G, such that the distance between u and v is two and x is a common neighbor of u and v. We define the edge lift of uv off x as the process of removing edges ux and vx while adding the edge uv to G. In this paper, we investigate the effect that edge lifting has on the total domination number of a graph. Among other results, we show that there are no trees for which every possible edge lift decreases the total domination number and that there are no trees for which every possible edge lift leaves the total domination number unchanged. Trees for which every possible edge lift increases the total domination number are characterized.
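For small graphs, the effect of an edge lift on the total domination number can be checked by brute force. The sketch below (a hypothetical toy instance, not from the paper) lifts an edge off the center of a star and shows the total domination number increasing from 2 to 4:

```python
# Brute force: gamma_t(G) is the size of a smallest set S such that every
# vertex of G has a neighbor in S. Exponential -- toy instances only.
from itertools import combinations

def total_domination_number(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            s = set(S)
            if all(adj[v] & s for v in vertices):
                return k
    return None  # an isolated vertex exists: no total dominating set

def edge_lift(edges, u, v, x):
    # remove edges ux and vx, then add the edge uv
    e = {frozenset(p) for p in edges} - {frozenset((u, x)), frozenset((v, x))}
    e.add(frozenset((u, v)))
    return [tuple(p) for p in e]

vertices = [1, 2, 3, 4, 5]
star = [(1, 3), (2, 3), (3, 4), (3, 5)]   # K_{1,4} with center 3
lifted = edge_lift(star, 1, 2, 3)          # lift edge 12 off the center
# total_domination_number: 2 for the star, 4 after the lift
```

After the lift, vertices 1 and 2 form their own component and must both be in any total dominating set, which forces the increase.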

10.
In spite of the importance of organizational culture, scholarly advances in our understanding of the construct appear to have stagnated. We review the state of culture research and argue that the ongoing academic debates about what culture is and how to study it have resulted in a lack of unity and precision in defining and measuring culture. This ambiguity has constrained progress both in developing a coherent theory of organizational culture and in accreting replicable and valid findings. To make progress, we argue that future research should focus on conceptualizing and assessing organizational culture as the norms that characterize a group or organization and that, if widely shared and strongly held, act as a social control system to shape members’ attitudes and behaviors. We further argue that to accomplish this, researchers need to recognize that norms can be parsed into three distinct dimensions: (1) the content, or what is deemed important (e.g., teamwork, accountability, innovation); (2) the consensus, or how widely the norms are shared across people; and (3) the intensity of feelings about the importance of the norm (e.g., whether people are willing to sanction others). From this perspective we suggest how future research might clarify some of the conflicts and confusion that characterize the current state of the field.

11.
We study the Mean-SemiVariance Project (MSVP) portfolio selection problem, where the objective is to obtain the optimal risk-reward portfolio of non-divisible projects when the risk is measured by the semivariance of the portfolio’s Net-Present Value (NPV) and the reward is measured by the portfolio’s expected NPV. Similar to the well-known Mean-Variance portfolio selection problem, when integer variables are present (e.g., due to transaction costs, cardinality constraints, or asset illiquidity), the MSVP problem can be solved using Mixed-Integer Quadratic Programming (MIQP) techniques. However, conventional MIQP solvers may be unable to solve large-scale MSVP problem instances in a reasonable amount of time. In this paper, we propose two linear solution schemes to solve the MSVP problem; that is, the proposed schemes avoid the use of MIQP solvers and only require the use of Mixed-Integer Linear Programming (MILP) techniques. In particular, we show that the solution of a class of real-world MSVP problems, in which project returns are positively correlated, can be accurately approximated by solving a single MILP problem. In general, we show that the MSVP problem can be effectively solved by a sequence of MILP problems, which allow us to solve large-scale MSVP problem instances faster than using MIQP solvers. We illustrate our solution schemes by solving a real MSVP problem arising in a Latin American oil and gas company. Also, we solve instances of the MSVP problem that are constructed using data from the PSPLIB library of project scheduling problems.
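For readers unfamiliar with the risk measure, the snippet below (assumed scenario data, equally likely scenarios) computes the semivariance of a project portfolio's NPV; this is the quadratic term that motivates the MIQP formulation:

```python
# Semivariance penalizes only below-mean outcomes of the portfolio NPV.

def portfolio_semivariance(selection, npv_scenarios):
    """selection[j] in {0, 1}; npv_scenarios[s][j] = NPV of project j in scenario s."""
    totals = [sum(x * v for x, v in zip(selection, row)) for row in npv_scenarios]
    mean = sum(totals) / len(totals)
    return sum(min(t - mean, 0.0) ** 2 for t in totals) / len(totals)

npv_scenarios = [[10, 0], [2, 4], [6, 2]]            # 3 scenarios, 2 candidate projects
sv = portfolio_semivariance([1, 1], npv_scenarios)   # totals 10, 6, 8 -> 4/3
```

Unlike the variance, only the scenario with NPV below the mean (here 6 < 8) contributes to the risk term.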

12.
Traditionally, leadership research has focused on unidirectional questions in which leader attributes are considered to determine follower outcomes. However, many phenomena between leaders (x) and followers (y) involve a simultaneous influence process in which x affects y, and y also affects x (i.e., simultaneity). Unfortunately, this simultaneity bias creates endogeneity and is often not properly addressed in the extant leadership literature. In three studies, we demonstrate the challenges of simultaneity bias and present two methodological solutions that can help to correct problems of simultaneity bias. We focus on simultaneity that occurs between follower resistance and leader control. We mathematically demonstrate the simultaneity bias using a simulated dataset and show how this bias can be statistically solved using an instrumental variable estimation approach. Furthermore, we present how the simultaneity bias can be resolved using an experimental design. We discuss how our approach advances theory and methods for leadership research.
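A toy simulation (an assumed data-generating process, not the paper's studies) shows how simultaneity biases OLS, and how an instrumental variable, a variable that shifts x but affects y only through x, recovers the causal effect:

```python
# Simultaneous system: x = z + b_yx*y + e and y = b_xy*x + u.
# OLS of y on x is biased because x is correlated with the shock u;
# the instrument z is independent of u, so cov(z, y)/cov(z, x) = b_xy.
import random

random.seed(0)
n = 20000
b_xy, b_yx = 0.5, 0.4                       # true effects x -> y and y -> x
z = [random.gauss(0, 1) for _ in range(n)]  # instrument
u = [random.gauss(0, 1) for _ in range(n)]  # shock to y
e = [random.gauss(0, 1) for _ in range(n)]  # shock to x
det = 1 - b_xy * b_yx                       # solve the two equations for x
x = [(zi + ei + b_yx * ui) / det for zi, ei, ui in zip(z, e, u)]
y = [b_xy * xi + ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols = cov(x, y) / cov(x, x)   # biased upward by the y -> x feedback
iv = cov(z, y) / cov(z, x)    # consistent estimate, close to b_xy = 0.5
```

The OLS slope overstates the x-to-y effect because the feedback loop makes x correlated with y's shock; the IV ratio strips that correlation out.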

13.
《Omega》2002,30(3):185-195
In this paper, we propose a Bayesian hierarchical model based on the partial adjustment model described by Wu and Ho (Rev. Quant. Finance Acc. 9 (1997) 71). The proposed model allows us to estimate the average adjustment coefficients associated with the error correction component and with the sensitivity of the firm to exogenous factors that have an industry-wide effect. Using the proposed model, we analyse the financial ratios calculated by the Bank of Spain’s Central Balance Sheet Office (CBSO) for the Spanish manufacturing sector during the period 1986–1997. In almost all the ratios analysed, we find that the error correction component exerts a greater influence, with the Interest Expense to Liabilities ratio demonstrating a greater sensitivity to this effect; by contrast, factors endogenous to the firm have more influence over the Indebtedness ratio. When considered by sectors, we find that it is the Transport sector which enjoys the greatest capacity for manoeuvre in the Profitability and Indebtedness ratios.

14.
The popular matching problem introduced by Abraham, Irving, Kavitha, and Mehlhorn is a matching problem in which there exist applicants and posts, and applicants have preference lists over posts. A matching M is said to be popular, if there exists no other matching N such that the number of applicants that prefer N to M is larger than the number of applicants that prefer M to N. The goal of this problem is to decide whether there exists a popular matching, and find a popular matching if one exists. In this paper, we first consider a matroid generalization of the popular matching problem with strict preference lists, and give a polynomial-time algorithm for this problem. In the second half of this paper, we consider the problem of transforming a given instance of a matroid generalization of the popular matching problem with strict preference lists by deleting a minimum number of applicants so that it has a popular matching. This problem is a matroid generalization of the popular condensation problem with strict preference lists introduced by Wu, Lin, Wang, and Chao. By using the results in the first half, we give a polynomial-time algorithm for this problem.
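Popularity is easy to state and, on tiny instances, to check by brute force. The sketch below (toy instances; the paper's algorithms are polynomial-time, unlike this enumeration) verifies that the classic instance of three applicants with identical two-post lists has no popular matching:

```python
# Brute-force popularity check for the setting with strict preference
# lists (exponential -- tiny instances only).

def prefers(a, p_new, p_old, prefs):
    """True if applicant a strictly prefers post p_new to p_old (None = unmatched)."""
    lst = prefs[a]
    rank = lambda p: lst.index(p) if p in lst else len(lst)
    return rank(p_new) < rank(p_old)

def all_matchings(applicants, prefs):
    def rec(i, used):
        if i == len(applicants):
            yield {}
            return
        a = applicants[i]
        for rest in rec(i + 1, used):               # leave applicant a unmatched
            yield {a: None, **rest}
        for p in prefs[a]:
            if p not in used:                       # each post used at most once
                for rest in rec(i + 1, used | {p}):
                    yield {a: p, **rest}
    yield from rec(0, frozenset())

def is_popular(M, applicants, prefs):
    for N in all_matchings(applicants, prefs):
        plus = sum(prefers(a, N[a], M[a], prefs) for a in applicants)
        minus = sum(prefers(a, M[a], N[a], prefs) for a in applicants)
        if plus > minus:
            return False                            # N beats M in a head-to-head vote
    return True

apps = ["a1", "a2", "a3"]
prefs = {a: ["p1", "p2"] for a in apps}             # everyone ranks p1 over p2
no_popular = not any(is_popular(M, apps, prefs)
                     for M in all_matchings(apps, prefs))  # -> True
```

With three identical lists over two posts, any matching can be "rotated" so that two applicants improve while only one loses, so no matching is popular.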

15.
Scour (localized erosion by water) is an important risk to bridges, and hence many infrastructure networks, around the world. In Britain, scour has caused the failure of railway bridges crossing rivers in more than 50 flood events. These events have been investigated in detail, providing a data set with which we develop and test a model to quantify scour risk. The risk analysis is formulated in terms of a generic, transferable infrastructure network risk model. For some bridge failures, the severity of the causative flood was recorded or can be reconstructed. These data are combined with the background failure rate, and records of bridges that have not failed, to construct fragility curves that quantify the failure probability conditional on the severity of a flood event. The fragility curves generated are to some extent sensitive to the way in which these data are incorporated into the statistical analysis. The new fragility analysis is tested using flood events simulated from a spatial joint probability model for extreme river flows for all river gauging sites in Britain. The combined models appear robust in comparison with historical observations of the expected number of bridge failures in a flood event. The analysis is used to estimate the probability of single or multiple bridge failures in Britain's rail network. Combined with a model for passenger journey disruption in the event of bridge failure, we calculate a system-wide estimate for the risk of scour failures in terms of passenger journey disruptions and associated economic costs.
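A fragility curve maps flood severity to a conditional failure probability. The sketch below uses a lognormal form with made-up parameters (a common parametric choice; not the paper's fitted curve):

```python
# Lognormal fragility curve: P(failure | severity s). `median` is the
# severity at which failure probability reaches 0.5; `beta` is the
# log-spread. Parameter values are illustrative, not fitted to data.
import math

def fragility(severity, median=2.0, beta=0.5):
    z = math.log(severity / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# fragility(2.0) == 0.5 by construction; probability rises with severity
```

Given such a curve and a simulated flood severity at each bridge, per-event failure probabilities (and hence multi-failure probabilities across the network) follow directly.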

16.
In this paper we consider the two-stage stochastic mixed-integer linear programming problem with recourse, which we call the RP problem. A common way to approximate the RP problem, which is usually formulated in terms of scenarios, is to formulate the so-called Expected Value (EV) problem, which only considers the expectation of the random parameters of the RP problem. In this paper we introduce the Conditional Scenario (CS) problem, which represents a midpoint between the RP and the EV problems regarding computational tractability and the ability to deal with uncertainty. In the theoretical section we analyze some useful bounds relating the RP, EV and CS problems. In the numerical example presented here, the CS problem outperforms the EV problem in terms of solution quality, and outperforms the RP problem with the same number of scenarios as in the CS problem in terms of solution time.
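The EV/RP gap can be seen in a toy newsvendor-style recourse problem (illustrative numbers, not from the paper): EV plans for the mean demand, while RP optimizes expected profit over all scenarios; the CS problem of the paper sits between the two in tractability and solution quality:

```python
# First stage: choose order quantity x. Second stage (recourse): sell
# min(x, demand). EV uses the mean demand; RP maximizes expected profit.

scenarios = [(0.2, 10), (0.3, 50), (0.5, 90)]   # (probability, demand)
c, p = 1.0, 3.0                                  # unit cost, unit revenue

def expected_profit(x):
    return sum(prob * (p * min(x, d) - c * x) for prob, d in scenarios)

mean_demand = sum(prob * d for prob, d in scenarios)   # 62.0
ev_x = round(mean_demand)                              # EV decision: 62
rp_x = max(range(101), key=expected_profit)            # RP decision: 90
# expected_profit(rp_x) = 96.0 beats expected_profit(ev_x) = 82.0
```

Plugging in the expectation discards the upside of the high-demand scenario; optimizing over the full scenario set shifts the decision and improves expected profit.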

17.
In this article, the understanding of the scientific functions of business taxation advanced by Jochen Hundsdoerfer, Dirk Kiesewetter and Caren Sureth is critically analyzed. It is argued that hypotheses about the influence of taxes on decisions, insofar as they are based on neoclassical models, cannot explain the actions of taxpayers. The same arguments speak against the feasibility of a neutral tax system, and likewise against a tax system deliberately designed to steer decisions, or to avoid affecting them, as long as the criticized hypotheses are taken as a basis. Tax law should instead satisfy the principle of equitable taxation. Yet such tax rules can be expected to affect taxpayers' decisions in ways that run counter to the aim of equitable taxation, so empirically grounded hypotheses about the actual effect of taxes on decisions must be taken into account. The object of this analysis is therefore to investigate the real influence of taxes on decisions, detached from neoclassical models.

18.
The max-coloring problem is to compute a legal coloring of the vertices of a graph G=(V,E) with vertex weights w such that $\sum_{i=1}^{k}\max_{v\in C_{i}}w(v)$ is minimized, where $C_{1},\dots,C_{k}$ are the color classes. For general graphs, max-coloring is as hard as the classical vertex coloring problem, a special case of the former where vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial-time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring skinny trees, a broad class of trees that includes paths and spiders. For these graphs, we show that max-coloring can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights are real numbers, we show a matching lower bound of $\Omega(|V|\log |V|)$ in the algebraic computation tree model.
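The objective can be checked by exhaustive search on toy paths (exponential; the paper's algorithm needs only O(|V|) plus sorting). The second example below, a path with weights 4,1,1,4,1,1,4, shows that the optimum may need three color classes even though a path is 2-colorable:

```python
# Exhaustive max-coloring of a path: minimize the sum over color classes
# of the class's maximum weight. `max_colors` caps the search (toy sizes).
from itertools import product

def max_coloring_cost(weights, max_colors=4):
    n = len(weights)
    best = float("inf")
    for k in range(1, min(n, max_colors) + 1):
        for colors in product(range(k), repeat=n):
            if any(colors[i] == colors[i + 1] for i in range(n - 1)):
                continue  # adjacent path vertices share a color: not proper
            cost = sum(max(w for w, c in zip(weights, colors) if c == cls)
                       for cls in set(colors))
            best = min(best, cost)
    return best

# two classes suffice here:           {3,3} + {1,1} -> 3 + 1 = 4
# but here three classes are optimal: {4,4,4} + {1,1} + {1,1} -> 6
costs = (max_coloring_cost([3, 1, 3, 1]), max_coloring_cost([4, 1, 1, 4, 1, 1, 4]))
```

In the second path, any 2-coloring alternates and puts a weight-4 vertex in both classes (cost 8), whereas grouping all the 4's together costs only 6.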

19.
Cognitive Radio Networks (CRNs) have paved a road for Secondary Users (SUs) to opportunistically exploit unused spectrum without harming the communications among Primary Users (PUs). In this paper, practical unicast and convergecast schemes, which are overlooked by most of the existing works for CRNs, are proposed. We first construct a cell-based virtual backbone for CRNs. We then prove that SUs have positive probabilities to access the spectrum and that the expected one-hop delay is bounded by a constant if the density of PUs is finite. Based on this fact, we propose a three-step unicast scheme and a two-phase convergecast scheme. We demonstrate that the induced delay from our proposed Unicast Scheduling (US) algorithm scales linearly with the transmission distance between the source and the destination. Furthermore, the expected delay of the proposed Convergecast Scheduling (CS) algorithm is proven to be upper bounded by $O(\log n + \sqrt{n/\log n})$. To the best of our knowledge, this is the first study of convergecast in CRNs. Finally, the performance of the proposed algorithms is validated through simulations.

20.
Organizational leaders and scholars have long regarded social sexual behavior in the workplace as deviant, harassing in nature, and something that organizations must eliminate to ensure maximal performance. Regardless of this perspective, however, social sexual behavior is an inescapable feature of human interaction that cannot be completely controlled in organizations. Moreover, there are many aspects of social sexual behavior that have not been considered or granted enough research attention to entirely warrant the broad assumption that social sexual behavior is always problematic to organizations and individuals. In the current paper, we highlight these under-researched or ignored facets of social sexual behavior. First, we consider the potential buffering effects that consensual social sexual behavior at work can offer to those involved, in terms of protecting them from the negative impact of workplace stressors. Next, we discuss the ways in which social sexual behavior is used as a tool of social influence at work. Finally, we consider the role of social sexual behavior at work as a precursor to the development of romantic relationships among employees. Throughout this discussion, we highlight both the potential benefits and drawbacks of engaging in social sexual behavior at work rather than adopting the perspective that all social sexual behavior at work is harmful. We encourage future research to consider all angles when investigating social sexual behavior at work, so as not to be completely detached from the reality that social sexual behavior can be consensual and sometimes enjoyed.
