Similar Literature
20 similar documents found
1.
Considering situations in which, owing to information asymmetry and other factors, the orders placed between capacity-sharing partners differ from what is actually delivered, this paper constructs a capacity-sharing supply chain composed of a capacity buyer, a capacity seller, and a third-party order-tracking service provider, and studies how "over/short-delivery trading" and the "order-tracking service" help raise the capacity delivery rate. With the capacity seller deciding whether to adopt the over/short-delivery trading mode and the capacity buyer deciding whether to accept the order-tracking service, the paper uses game theory to analyze the dominant strategies of the two parties and their equilibrium strategy combinations. The study finds that both over/short-delivery trading and the order-tracking service lower the entry threshold of capacity sharing, allowing more firms to participate. When the defect rate is relatively high, or the order-tracking service is relatively effective, the buyer and seller tend to accept the order-tracking service and adopt the ordinary trading mode; conversely, when the defect rate is relatively low, or the order-tracking service is relatively ineffective, they tend to adopt the over/short-delivery trading mode and decline the order-tracking service. The paper further gives the equilibrium strategies for over/short-delivery trading and the order-tracking service, together with the conditions under which they arise, providing a basis for capacity buyers and sellers in choosing cooperation partners and making related decisions.
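The abstract describes a two-player strategy game: the seller chooses between over/short-delivery and ordinary trading, and the buyer chooses whether to accept the order-tracking service. As a hedged illustration of how dominant strategies and pure-strategy equilibria are identified in such a 2x2 game, the sketch below uses invented payoffs; the numbers are placeholders, not values from the paper.

```python
# Sketch of the 2x2 game analysis described above. Payoffs are invented placeholders,
# not values from the paper: payoffs[(seller_action, buyer_action)] = (seller, buyer).
from itertools import product

seller_actions = ["over_short_trading", "ordinary_trading"]
buyer_actions = ["accept_tracking", "decline_tracking"]
payoffs = {
    ("over_short_trading", "accept_tracking"):  (3, 2),
    ("over_short_trading", "decline_tracking"): (2, 1),
    ("ordinary_trading", "accept_tracking"):    (5, 4),
    ("ordinary_trading", "decline_tracking"):   (1, 3),
}

def seller_dominant():
    """Return a seller action that is a best response to every buyer action, if one exists."""
    for s in seller_actions:
        if all(payoffs[(s, b)][0] >= payoffs[(s2, b)][0]
               for b in buyer_actions for s2 in seller_actions):
            return s
    return None

def pure_equilibria():
    """Strategy pairs in which each side is best-responding to the other."""
    eq = []
    for s, b in product(seller_actions, buyer_actions):
        seller_best = all(payoffs[(s, b)][0] >= payoffs[(s2, b)][0] for s2 in seller_actions)
        buyer_best = all(payoffs[(s, b)][1] >= payoffs[(s, b2)][1] for b2 in buyer_actions)
        if seller_best and buyer_best:
            eq.append((s, b))
    return eq

print("seller dominant strategy:", seller_dominant())
print("pure-strategy equilibria:", pure_equilibria())
```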

2.
Janet Z. Yang, Risk Analysis, 2019, 39(8): 1708-1722
To test a possible boundary condition for the risk information seeking and processing (RISP) model, this study experimentally manipulates risk perception related to the 2014 Ebola outbreak in a nationally representative sample. Multiple‐group structural equation modeling results indicate that psychological distance was negatively related to systematic processing in the high‐risk condition. In the low‐risk condition, psychological distance was positively related to heuristic processing; negative attitude toward media coverage dampened people's need for information, which subsequently influenced information processing. Risk perception elicited more fear, which led to greater information insufficiency and more heuristic processing in the low‐risk condition. In contrast, sadness was consistently related to information processing in both conditions. Model fit statistics also show that the RISP model provides a better fit to the data when risk perception is elevated. Further, this study contributes to our understanding of the role of discrete emotions in motivating information processing.

3.
This article presents a mathematical model for the Enterobacteriaceae count on the surface of broiler chickens during slaughter and how it may be affected by different processing technologies. The model is based on a model originally developed for Campylobacter and has been adapted for Enterobacteriaceae using a Bayesian updating approach and hitherto unpublished data gathered from German abattoirs. The slaughter process in the model starts from an input stage and comprises five processing stages: scalding, defeathering, evisceration, washing, and chilling. The impact of various processing technologies along the broiler processing line on the Enterobacteriaceae count on the carcasses' surface has been determined from literature data. The model is implemented in the software R and equipped with a graphical user interface that allows the user to interactively choose among different processing technologies for each stage along the processing line. Based on the chosen processing technologies, the model estimates the Enterobacteriaceae count on the surface of each broiler chicken at each stage of processing. This result is then compared to a so-called baseline model, which simulates a processing line with a fixed set of processing technologies. The model calculations show how even very effective removal of bacteria from the exterior of the carcass in an earlier step can be undone by cross-contamination with leaked feces if the feces contain high concentrations of bacteria.
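The abstract does not reproduce the model equations; as a hedged illustration of the stage-wise mechanism it describes (a log10 surface count shifted at each stage by a technology-dependent effect, so that a strong reduction early on can be undone by later cross-contamination), the sketch below uses invented stage effects rather than the published German abattoir estimates.

```python
# Illustrative sketch of a stage-wise surface-count model (log10 CFU per carcass).
# Stage effects below are hypothetical placeholders, not the German abattoir estimates.
import random

STAGE_EFFECTS = {
    "scalding":     {"soft": -0.5, "hard": -1.0},            # mean log10 change per technology
    "defeathering": {"conventional": +0.3, "improved": +0.1},
    "evisceration": {"conventional": +0.4, "leak_free": +0.1},
    "washing":      {"inside_outside": -0.5},
    "chilling":     {"air": -0.3, "immersion": -0.6},
}

def simulate_carcass(input_log10, chosen_tech, sd=0.2, rng=random):
    """Propagate the surface count through the line for one carcass."""
    counts = {"input": input_log10}
    level = input_log10
    for stage, options in STAGE_EFFECTS.items():
        effect = options[chosen_tech[stage]]
        level += rng.gauss(effect, sd)   # stage effect plus between-carcass variability
        counts[stage] = level
    return counts

line = {"scalding": "hard", "defeathering": "conventional",
        "evisceration": "conventional", "washing": "inside_outside",
        "chilling": "immersion"}
print(simulate_carcass(input_log10=4.0, chosen_tech=line))
```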

4.
Trumbo, Craig W., Risk Analysis, 1999, 19(3): 391-400
The heuristic-systematic information processing model (HSM) holds that individuals will use one or both of these modes of information processing when attempting to evaluate information in order to arrive at a judgment. Systematic processing is defined by effortful scrutiny and comparison of information, whereas heuristic processing is defined by the use of cues to arrive more easily at a judgment. Antecedents to the two processing modes include information sufficiency, motivation, and self-efficacy. Structural equation modeling is used to examine competing configurations of this model and to evaluate the model as appropriate for predicting risk judgment. The model is also evaluated across three groups that vary with respect to their level of concern. These analyses are executed within a case study involving an epidemiological investigation of a suspected cancer cluster. The analysis confirms the HSM's theoretically proposed structure and shows it to be a useful vehicle for evaluating risk judgment. In the overall analysis, antecedent variables generally function as specified by theory. Systematic processing is predicted by greater motivation. Heuristic processing is predicted by information sufficiency. Self-efficacy is a significant predictor of both processing modes, and heuristic processing is shown to be associated with judgments of lower risk. However, when the analysis is contrasted across three groups (those concerned about cancer, those not concerned, and those uncertain), it is shown that the model is significantly more robust for the uncertain group. This finding may have implications for the use of the HSM in risk research specifically, and in field research generally.

5.
Export-oriented processing enterprises are businesses whose production and operations are based mainly on export processing; their products or services are sold chiefly to overseas customers, with a smaller share of domestic business on the side. In international procurement, such enterprises show clear preferences. On the one hand, they actively develop qualified domestic suppliers. On the other hand, when domestic suppliers are not yet mature, for the same overseas customer orders they prefer to trade through deep-processing carry-over with enterprises that hold the same export-processing qualifications as they do.

6.
The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research is the lack of a validated questionnaire measuring information processing. This article presents two studies in which we describe the development and validation of the Information‐Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15‐item questionnaire, and completed the questionnaire again two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for purposes of cross‐validation of the results. A two‐factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic processing condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information‐Processing Questionnaire. The availability of this information‐processing scale will be a valuable asset for future research and may provide researchers with new research opportunities.

7.
From the perspective of information-processing theory, this study examines how manufacturers of complex products can use information technology to improve the agility of complex-product manufacturing. The literature offers two sharply different conclusions on whether information technology promotes manufacturing agility. This study argues that by identifying the information-processing requirements of complex-product manufacturing, building information-processing network structures that differ in network hierarchy and network centrality, and introducing corresponding governance mechanisms, firms can achieve the goal of using information technology to enable agile manufacturing of complex products. The mutual fit between the information-processing network and the governance mechanism develops the firm's information-processing capability, which comprises three types of capability: information decomposition, information acceleration, and information sharing. Information-processing capability is the core factor determining whether information technology can improve a firm's manufacturing agility. By opening the process "black box" between information technology and complex-product manufacturing agility from a dynamic perspective, the study offers far-reaching guidance for developing information-processing theory and for promoting agile manufacturing of complex products in Chinese firms.

8.
Employees process information to produce information products that support corporate decision making; however, because risk propagates through the business itself, errors made in earlier steps are passed downstream along the business process, so that value creation for the firm is accompanied by risk. Based on principal-agent theory, a multi-agent compensation incentive model with output-dependent pay between the firm and its employees is established, yielding compensation incentive schemes and the employees' optimal decisions for nine scenarios. The study shows that an effective compensation incentive scheme can induce employees to choose the effort level that maximizes the firm's net income, thereby reducing the firm's risk; numerical examples illustrate how selected factors affect the decisions of employees and the firm.
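The abstract does not state the model's functional forms. Only as a heavily hedged illustration of the kind of output-dependent, multi-agent incentive scheme this literature builds on, a standard linear-contract principal-agent setup can be written as follows (generic textbook notation, not the paper's model):

```latex
% Generic linear-contract principal-agent sketch (textbook notation, not the paper's model)
\begin{aligned}
& q_i = e_i + \varepsilon_i, \quad \varepsilon_i \sim N(0,\sigma_i^2)
      && \text{output of agent } i \text{ given effort } e_i \\
& w_i = \alpha_i + \beta_i q_i
      && \text{output-dependent wage (fixed part plus incentive share)} \\
& \max_{\alpha_i,\beta_i}\; \mathbb{E}\Big[\textstyle\sum_i (q_i - w_i)\Big]
      && \text{firm's expected net income} \\
& \text{s.t. } \mathbb{E}[w_i] - \tfrac{1}{2} c_i e_i^2 - \tfrac{1}{2}\rho \beta_i^2 \sigma_i^2 \ge \underline{u}_i
      && \text{participation constraint} \\
& \phantom{\text{s.t. }} e_i \in \arg\max_{e}\; \beta_i e - \tfrac{1}{2} c_i e^2
      && \text{incentive-compatibility constraint}
\end{aligned}
```

Here the incentive-compatibility constraint pins down each agent's effort as an increasing function of the incentive share; the paper's nine scenarios presumably vary assumptions not captured in this generic sketch.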

9.
We study the problem of scheduling jobs on a single batch processing machine to minimize the total weighted completion time. A batch processing machine is one that can process a number of jobs simultaneously as a batch. The processing time of a batch is given by the processing time of the longest job in the batch. We present a branch and bound algorithm to obtain optimal solutions and develop lower bounds and dominance conditions. We also develop a number of heuristics and evaluate their performance through extensive computational experiments. Results show that two of the heuristics consistently generate high-quality solutions in modest CPU times.
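To make the objective concrete, the sketch below evaluates the total weighted completion time of a batch sequence (a batch's processing time is its longest job, and every job in a batch completes when the batch does) and forms batches with a simple longest-job-first grouping under an assumed machine capacity; it is a toy illustration, not one of the paper's heuristics or its branch and bound algorithm.

```python
# Toy illustration of the single batch-processing-machine objective:
# batch processing time = max job time in the batch; minimize sum of w_j * C_j.
# The grouping rule below is illustrative only, not one of the paper's heuristics.

def total_weighted_completion_time(batches):
    """batches: list of lists of (processing_time, weight) tuples, in processing order."""
    time, cost = 0, 0
    for batch in batches:
        time += max(p for p, _ in batch)          # batch takes as long as its longest job
        cost += sum(w * time for _, w in batch)   # every job in the batch finishes at 'time'
    return cost

def greedy_batches(jobs, capacity):
    """Group jobs by non-increasing processing time into batches of at most 'capacity' jobs."""
    ordered = sorted(jobs, key=lambda jw: jw[0], reverse=True)
    return [ordered[i:i + capacity] for i in range(0, len(ordered), capacity)]

jobs = [(5, 2), (3, 1), (4, 3), (2, 5), (6, 1)]   # (processing_time, weight)
batches = greedy_batches(jobs, capacity=2)
print(batches, total_weighted_completion_time(batches))
```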

10.
This research explores the antecedents and consequences of market information processing during the development process of new high-tech products. To this end, we develop and test a conceptual model for market information processing in three generic stages of the new product development (NPD) process (predevelopment, development, and commercialization). In addition, we explore the relationships between market information processing, its antecedents, and product advantage and success. We test our model with responses from 166 NPD managers in Dutch high-tech firms. The findings show that the market information processing variables are differentially related to new product outcomes, even when controlling for product advantage and product newness to the market. In addition, we find that companies can enhance market information processing for new high-tech products by influencing project priority and flexibility toward new products, and by reducing interdepartmental conflict.

11.
Using nanotechnology as a case study, this article explores (1) how people's perceptions of benefits and risks are related to their approval of nanotechnology, (2) which information‐processing factors contribute to public risk/benefit perceptions, and (3) whether individuals' predispositions (i.e., deference to scientific authority and ideology) may moderate the relationship between cognitive processing and risk perceptions of the technology. Results indicate that benefit perceptions positively affect public support for nanotechnology; perceptions of risk tend to be more influenced by systematic processing than by heuristic cues, whereas both heuristic and systematic processing influence benefit perceptions. People who are more liberal‐minded tend to be more affected by systematic processing when thinking about the benefits of nanotechnology than those who are more conservative. Compared to less deferent individuals, those who are more deferent to scientific authority tend to be less influenced by systematic processing when making judgments about the benefits and risks of nanotechnology. Implications are discussed.

12.
The design of responsive distributed database systems is a key concern for information systems managers. In high-bandwidth networks, latency and local processing are the most significant factors in query and update response time. Parallel processing can be used to minimize their effects, particularly if it is considered at design time. It is the judicious replication and placement of data within a network that enable parallelism to be used effectively. However, latency and parallel processing have largely been ignored in previous distributed database design approaches. We present a comprehensive approach to distributed database design that develops efficient combinations of data allocation and query processing strategies that take full advantage of parallelism. We use a genetic algorithm to enable the simultaneous optimization of data allocation and query processing strategies. We demonstrate that ignoring the effects of latency and parallelism at design time can result in the selection of unresponsive distributed database designs.
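As an illustration of the kind of search the article describes, the sketch below runs a minimal genetic algorithm over replica-placement bit matrices with a placeholder cost model (remote-read latency plus per-replica update cost); the encoding, cost function, and parameters are assumptions for illustration only, not the article's design model, and query-processing strategies are not co-optimized here.

```python
# Minimal GA sketch for data allocation: chrom[f][s] = 1 if fragment f is replicated at site s.
# The cost model and parameters are placeholders, not the article's design model.
import random

FRAGMENTS, SITES = 4, 3
READ_FREQ  = [[5, 1, 3], [2, 4, 1], [1, 1, 6], [3, 2, 2]]   # reads of fragment f issued at site s
WRITE_FREQ = [1, 2, 1, 3]                                   # updates per fragment (hit every replica)
REMOTE_LATENCY, UPDATE_COST = 10, 4

def cost(chrom):
    total = 0.0
    for f in range(FRAGMENTS):
        replicas = sum(chrom[f])
        if replicas == 0:
            return float("inf")                  # every fragment needs at least one copy
        for s in range(SITES):
            if not chrom[f][s]:                  # remote read if no local replica
                total += READ_FREQ[f][s] * REMOTE_LATENCY
        total += WRITE_FREQ[f] * replicas * UPDATE_COST   # each update touches every replica
    return total

def random_chrom():
    return [[random.randint(0, 1) for _ in range(SITES)] for _ in range(FRAGMENTS)]

def crossover(a, b):
    return [row[:] if random.random() < 0.5 else other[:] for row, other in zip(a, b)]

def mutate(chrom, rate=0.05):
    for row in chrom:
        for s in range(SITES):
            if random.random() < rate:
                row[s] ^= 1                      # flip one placement bit

population = [random_chrom() for _ in range(30)]
for _ in range(100):
    population.sort(key=cost)                    # keep the cheapest allocations as parents
    parents = population[:10]
    children = []
    while len(children) < 20:
        child = crossover(random.choice(parents), random.choice(parents))
        mutate(child)
        children.append(child)
    population = parents + children
print(min(cost(c) for c in population))
```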

13.
Scholars have begun to explore the role of modes of information processing and related audience characteristics in reactions to risky situations and risk information (11, 12, 14, 17, 18, 20). "Information processing" concerns how people attend to and consider available information: systematic processors analyze messages and situations carefully, while heuristic processors skim and use cues (e.g., opinions of trusted reference groups) for quick judgments. This article uses scenarios about a semi-hypothetical industrial facility, in particular risk comparisons being considered by its manager for inclusion in a talk to the community, to explore the impact of information processing. Information insufficiency, self-assessed capacity to understand information, and information-seeking propensities are tested for potential effects on information processing about industrial risks by people living near industry. As well as testing established models, this article explores the additional explanatory value of involvement, relevance, and ability (Earle et al., 1990) and of objective knowledge. Both the existing model variables and the new ones have significant effects on information seeking and information processing in this case, and partly confirm earlier results. Trumbo (17, 18) found that heuristic processors saw lower risk and systematic processors higher risk from suspected cancer clusters. In this study, reporting knowledge about local industrial risks as insufficient for one's purposes and self-reported avoidance of such information both raised ratings of the facility's risk and lowered ratings of its acceptability. Neither type of information processing significantly affected risk or acceptability judgments, but both increased risk ratings, and heuristic processing had more effect than systematic processing. Positive ratings of risk comparisons' clarity and meaningfulness decreased risk ratings and increased acceptability ratings, dominated the other information variables in predictive power, and exceeded risk, benefit, and trust in their contribution to acceptability judgments. Despite differences across studies in designs and variables, and the embryonic development of appropriate (self-reported) measures for use in field surveys, these results confirm the potential value of further research into how information seeking and processing affect risk beliefs and reactions to risk communications.

14.
Doryn D. Chervin, Risk Analysis, 2011, 31(11): 1789-1799
We investigated the risk‐information‐processing behaviors of people living at or near the poverty line. Because significant gaps in health and communication exist between high‐ and low‐income groups, increasing the information seeking and knowledge of poor individuals may help them better understand risks to their health and increase their engagement in health‐protective behaviors. Most earlier studies assessed only a single health risk selected by the researcher, whereas we listed 10 health risks and allowed the respondents to identify the one that they worried about most but took little action to prevent. Using this risk, we tested one pathway inspired by the risk information seeking and processing model to examine predictors of information insufficiency and of systematic processing, and extended this pathway to include health‐protective action. A phone survey was conducted of African Americans and whites living in the southern United States with an annual income of ≤$35,000 (N = 431). The results supported the model pathway: worry partially mediated the relationship between perceived risk and information insufficiency, which, in turn, increased systematic processing. In addition, systematic processing increased health‐protective action. Compared with whites and better educated respondents, African Americans and respondents with little education had significantly higher levels of information insufficiency but also higher levels of systematic processing and health‐protective action. That systematic processing and knowledge influenced health behavior suggests a potential strategy for reducing health disparities.

15.

We consider the problem of scheduling a set of jobs with different processing times and sizes on a single bounded parallel-batch machine with periodic maintenance. Because the machine operates in batch-processing mode and its capacity is fixed, several jobs can be processed simultaneously in a batch provided that the total size of the jobs in the batch does not exceed the machine capacity. The processing time of a batch is the largest processing time of the jobs contained in the batch. Moreover, the processing of each batch is non-resumable: if a batch cannot complete processing before some maintenance period, it must be processed anew once the machine becomes available again. Our goal is to minimize the makespan. We first consider two special cases in which the jobs have the same sizes or the same processing times, both of which are strongly NP-hard. We present two different approximation algorithms for them and show that these two algorithms have the same tight worst-case ratio of 2. We then consider the general case in which the jobs have arbitrary processing times and arbitrary sizes, for which we propose a 17/5-approximation algorithm.
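The paper's approximation algorithms are not reproduced in the abstract. As a hedged illustration of the setting only, the sketch below computes the makespan of a given sequence of batch processing times under periodic maintenance, with non-resumable batches restarted after the maintenance period; the batch times and maintenance parameters are invented.

```python
# Illustrative makespan computation for a bounded parallel-batch machine with periodic
# maintenance and non-resumable batches. The batch sequence and maintenance parameters
# are invented for illustration; the paper's approximation algorithms are not reproduced.

def makespan_with_maintenance(batches, work_window, maint_time):
    """batches: batch processing times in processing order (each batch's time is the longest
    job it contains). The machine runs for 'work_window', is then down for 'maint_time',
    repeatedly; a batch that cannot finish before maintenance restarts after it."""
    period = work_window + maint_time
    t = 0.0
    for duration in batches:
        assert duration <= work_window, "a batch longer than the window can never finish"
        while True:
            window_end = (t // period) * period + work_window
            if t + duration <= window_end:       # batch fits before the next maintenance
                t += duration
                break
            t = window_end + maint_time          # non-resumable: start over after maintenance
    return t

# Batch times, e.g. obtained by packing jobs of total size <= machine capacity per batch.
print(makespan_with_maintenance([5.0, 4.0, 2.0], work_window=6.0, maint_time=2.0))
```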


16.
We consider the stochastic, single‐machine earliness/tardiness problem (SET), with the sequence of processing of the jobs and their due dates as decisions and the objective of minimizing the sum of the expected earliness and tardiness costs over all the jobs. In a recent paper, Baker (2014) shows the optimality of the Shortest‐Variance‐First (SVF) rule under the following two assumptions: (a) The processing duration of each job follows a normal distribution. (b) The earliness and tardiness cost parameters are the same for all the jobs. In this study, we consider problem SET under assumption (b). We generalize Baker's result by establishing the optimality of the SVF rule for more general distributions of the processing durations and a more general objective function. Specifically, we show that the SVF rule is optimal under the assumption of dilation ordering of the processing durations. Since convex ordering implies dilation ordering (under finite means), the SVF sequence is also optimal under convex ordering of the processing durations. We also study the effect of variability of the processing durations of the jobs on the optimal cost. An application of problem SET in surgical scheduling is discussed.
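As a small numerical illustration of the Shortest-Variance-First rule, the sketch below sequences jobs with normally distributed processing times in non-decreasing variance and estimates the expected earliness/tardiness cost of a sequence by Monte Carlo simulation. The job data, the unit earliness/tardiness costs, and the simple due-date rule (due date set at the expected completion time) are illustrative assumptions, not Baker's or the authors' exact setup.

```python
# Illustration of the Shortest-Variance-First (SVF) sequencing rule with a Monte Carlo
# estimate of expected earliness/tardiness cost. Job data, unit costs, and the due-date
# rule (due date = expected completion time) are illustrative assumptions only.
import random
import statistics

jobs = [("A", 10.0, 3.0), ("B", 10.0, 1.0), ("C", 10.0, 2.0)]   # (name, mean, std dev)

def svf_sequence(jobs):
    """Shortest-Variance-First: sequence jobs in non-decreasing variance."""
    return sorted(jobs, key=lambda j: j[2] ** 2)

def expected_et_cost(sequence, n_runs=20000, rng=random):
    """Monte Carlo estimate of E[sum_j (earliness_j + tardiness_j)]."""
    due_dates, mean_completion = [], 0.0
    for _, mu, _ in sequence:
        mean_completion += mu
        due_dates.append(mean_completion)        # due date set at the expected completion time
    costs = []
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        for (name, mu, sigma), d in zip(sequence, due_dates):
            t += max(0.0, rng.gauss(mu, sigma))  # truncate negative draws at zero
            cost += abs(t - d)                   # unit earliness and tardiness costs
        costs.append(cost)
    return statistics.mean(costs)

seq = svf_sequence(jobs)
print([name for name, _, _ in seq], expected_et_cost(seq))
```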

17.
This paper describes a methodology for production-distribution planning in a large-scale commodity processing network. Based on earlier research efforts dealing with single-commodity and multi-commodity distribution system modeling and on production planning for a single-plant commodity processing facility, a mathematical programming methodology is developed for a multi-plant soybean processing network. Application of the model leads to the specification of a production plan for a multi-period time horizon, while at the same time indicating the quantities of soybean meal and soybean oil to be supplied to various customers in various locales. Both sets of decisions are made under the general criterion of maximizing the net income produced by the soybean processing complex, subject to various production, inventory, capacity, supply, and demand constraints. Test results from application of the model are presented and discussed.
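The paper's multi-period, multi-plant formulation is not shown in the abstract. As a toy, single-period sketch of the structure described (maximize net income subject to capacity and demand), the linear program below allocates soybean-meal shipments from plants to customers; all data are invented for illustration.

```python
# Toy single-period production-distribution LP in the spirit of the model described above.
# All numbers are invented for illustration; the paper's model is multi-period and richer.
import itertools
from scipy.optimize import linprog

plants = {"P1": 120.0, "P2": 90.0}                 # crush capacity (tons of meal)
customers = {"C1": 80.0, "C2": 60.0, "C3": 50.0}   # meal demand (tons)
# net income per ton shipped from plant p to customer c (processing margin minus freight)
net = {("P1", "C1"): 30, ("P1", "C2"): 26, ("P1", "C3"): 22,
       ("P2", "C1"): 24, ("P2", "C2"): 28, ("P2", "C3"): 27}

pairs = list(itertools.product(plants, customers))
c = [-net[pair] for pair in pairs]                 # linprog minimizes, so negate income

A_ub, b_ub = [], []
for p, cap in plants.items():                      # plant capacity constraints
    A_ub.append([1.0 if pair[0] == p else 0.0 for pair in pairs])
    b_ub.append(cap)
for cust, dem in customers.items():                # do not ship more than demanded
    A_ub.append([1.0 if pair[1] == cust else 0.0 for pair in pairs])
    b_ub.append(dem)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(pairs), method="highs")
print(round(-res.fun, 1), dict(zip(pairs, res.x.round(1))))
```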

18.
This paper studies a multi-objective batch-processing machine scheduling problem that simultaneously optimizes electricity cost and makespan. Two multi-objective ant colony algorithms are designed to solve and analyze the problem: a job-based Pareto ant colony optimization algorithm (J-PACO, Job-based Pareto Ant Colony Optimization) and a batch-based Pareto ant colony optimization algorithm (B-PACO, Batch-based Pareto Ant Colony Optimization). Because under time-of-use pricing the electricity price is a function of time, beyond the batch sequencing of conventional batch scheduling the start time of each batch must also be determined in order to evaluate the electricity cost. The two ant colony algorithms schedule by mapping jobs and batches, respectively, onto the timeline. Simulation experiments comparing the two algorithms show that B-PACO, which first forms jobs into batches with the FFLPT (First Fit Longest Processing Time) heuristic and then generates the final schedule, achieves higher search efficiency and outperforms J-PACO on both the Q metric, which measures the number of non-dominated solutions found, and the HV metric, which measures how close the non-dominated set is to the Pareto frontier.
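The FFLPT batching step mentioned above can be shown directly: jobs are considered in non-increasing processing time and each is placed in the first existing batch whose remaining size capacity can hold it. The sketch below uses invented job data and capacity; the batch timing decision under time-of-use electricity prices and the ant colony search itself are not modeled.

```python
# FFLPT (First Fit Longest Processing Time) batching sketch, as used by B-PACO to form
# batches before sequencing them on the timeline. Job data and capacity are illustrative;
# the ant colony search and time-of-use electricity pricing are not modeled here.

def fflpt(jobs, capacity):
    """jobs: list of (processing_time, size). Returns batches formed by first fit on total
    size, with jobs considered in non-increasing processing time."""
    batches = []
    for p, s in sorted(jobs, key=lambda j: j[0], reverse=True):
        for batch in batches:
            if sum(size for _, size in batch) + s <= capacity:
                batch.append((p, s))
                break
        else:
            batches.append([(p, s)])
    return batches

jobs = [(8, 4), (6, 3), (5, 5), (4, 2), (3, 3), (2, 1)]
for i, batch in enumerate(fflpt(jobs, capacity=8), start=1):
    print(f"batch {i}: jobs={batch}, batch time={max(p for p, _ in batch)}")
```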

19.
Theoretical analysis of a new class of batch-processing machine scheduling problems
In cold-rolling production, steel coils must be annealed in bell-type furnaces to improve their properties. The annealing process consists of three stages: heating, soaking, and cooling, and for process reasons the three stage times cannot be collapsed into a single processing time, which clearly distinguishes the problem from conventional batch-processing machine scheduling. A nonlinear integer programming model is established for minimizing the total weighted completion time on this new type of batch processing machine, and a heuristic algorithm based on dynamic programming is developed. Theoretical analysis shows that the algorithm's worst-case performance ratio is 3. When the coils have identical processing times in one of the three stages, the heuristic's worst-case ratio is proven to be 2, and this bound is shown to be tight. When the coils have identical processing times in two of the three stages, the heuristic is proven to be optimal. The performance of the heuristic is also analyzed when it is extended to the general case with an arbitrary number of stages.
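The abstract does not spell out how the three stage times enter the batch processing time. One plausible reading, offered only as a hedged illustration consistent with heating, soaking, and cooling being applied to the whole batch in sequence (and not necessarily the paper's exact model), is:

```latex
% Hedged illustration only, not necessarily the paper's exact model.
P(B) = \max_{j \in B} h_j + \max_{j \in B} s_j + \max_{j \in B} c_j,
\qquad \min \sum_{j} w_j C_j ,
```

where $h_j$, $s_j$, and $c_j$ are the heating, soaking, and cooling times of coil $j$, $P(B)$ is the processing time of batch $B$, and $C_j$ is the completion time of the batch containing coil $j$.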

20.
Scheduling with a general truncated job-dependent learning effect and resource-dependent processing times is studied on a single machine. It is assumed that the job processing time is a function of the amount of resource allocated to the job, the general job-dependent learning effect, and the job-dependent control parameter. For each version of the problem (the versions differ in their objective functions and processing-time functions), the optimal resource allocation is provided. Polynomial-time algorithms are also developed to find the optimal schedule for several versions of the problem.
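The abstract does not give the processing-time function. A form commonly used in this literature, shown here only as a hedged illustration of what a truncated job-dependent learning effect with resource-dependent processing times typically looks like (not necessarily this paper's function), is:

```latex
% Hedged illustration: a typical functional form from this literature,
% not necessarily the one used in the paper.
p_{j}^{A}(r, u_j) = f_j(u_j)\,\max\!\bigl\{ g_j(r),\, \beta_j \bigr\},
```

where $p_{j}^{A}$ is the actual processing time of job $j$ scheduled in position $r$, $f_j(u_j)$ is a non-increasing function of the amount of resource $u_j$ allocated to the job, $g_j(r)$ (for example $r^{a_j}$ with $a_j \le 0$) is the job-dependent learning effect, and $\beta_j \in (0,1]$ is the job-dependent truncation (control) parameter.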
