Similar Literature
20 similar documents found
1.
Dose–response modeling of biological agents has traditionally focused on describing laboratory‐derived experimental data. Limited consideration has been given to understanding those factors that are controlled in a laboratory, but are likely to occur in real‐world scenarios. In this study, a probabilistic framework is developed that extends Brookmeyer's competing‐risks dose–response model to allow for variation in factors such as dose‐dispersion, dose‐deposition, and other within‐host parameters. With data sets drawn from dose–response experiments of inhalational anthrax, plague, and tularemia, we illustrate how for certain cases, there is the potential for overestimation of infection numbers arising from models that consider only the experimental data in isolation.  相似文献   
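As a minimal illustration of why such extensions matter (a sketch with made-up numbers, not the framework or parameter values of the paper), the following compares an exponential dose-response model evaluated at a fixed experimental dose with the same model evaluated under between-host dose dispersion; because the dose-response curve is concave in dose, ignoring dispersion can overstate the expected number of infections.

    import numpy as np

    rng = np.random.default_rng(0)

    # All numbers below are illustrative assumptions, not values from the study.
    r = 1e-5              # assumed per-organism probability of initiating infection
    mean_dose = 8_000.0   # assumed mean inhaled dose (organisms)
    n_exposed = 10_000    # hypothetical exposed population

    # Exponential dose-response evaluated at the mean dose only (laboratory-style).
    p_fixed = 1.0 - np.exp(-r * mean_dose)

    # Same model, but each individual's dose is lognormally dispersed around the same
    # mean, mimicking real-world variation in dose-dispersion and dose-deposition.
    sigma = 1.0
    doses = rng.lognormal(np.log(mean_dose) - sigma**2 / 2, sigma, size=n_exposed)
    p_dispersed = 1.0 - np.exp(-r * doses)

    print("expected infections, fixed dose:      ", round(n_exposed * p_fixed))
    print("expected infections, dispersed doses: ", round(p_dispersed.sum()))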

2.
Each agent in a finite set requests an integer quantity of an idiosyncratic good; the resulting total cost must be shared among the participating agents. The Aumann–Shapley prices are given by the Shapley value of the game where each unit of each good is regarded as a distinct player. The Aumann–Shapley cost‐sharing method charges to an agent the sum of the prices attached to the units she consumes. We show that this method is characterized by the two standard axioms of Additivity and Dummy, and the property of No Merging or Splitting: agents never find it profitable to split or to merge their consumptions. We offer a variant of this result using the No Reshuffling condition: the total cost share paid by a group of agents who consume perfectly substitutable goods depends only on their aggregate consumption. We extend this characterization to the case where agents are allowed to consume bundles of goods.  相似文献   
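A hedged illustration of the unit game described above (the example demands and cost function are invented here): each demanded unit is treated as a player, the Shapley value of the resulting cost game gives each unit's Aumann–Shapley price, and an agent's charge is the sum of the prices attached to her own units.

    import itertools
    import math

    # Illustrative demands: agent A requests 2 units of her good 'a', agent B requests 1 unit of 'b'.
    units = [('A', 'a'), ('A', 'a'), ('B', 'b')]

    def cost(goods_served):
        # Arbitrary illustrative cost of serving a set of units, depending only on quantities.
        qa, qb = goods_served.count('a'), goods_served.count('b')
        return (qa + 2.0 * qb) ** 1.5

    def unit_shapley_prices(units, cost):
        n = len(units)
        prices = [0.0] * n
        for order in itertools.permutations(range(n)):
            served = []
            for idx in order:
                before = cost([units[j][1] for j in served])
                served.append(idx)
                after = cost([units[j][1] for j in served])
                prices[idx] += (after - before) / math.factorial(n)
        return prices

    prices = unit_shapley_prices(units, cost)
    charges = {}
    for (agent, _), price in zip(units, prices):
        charges[agent] = charges.get(agent, 0.0) + price
    print(charges)   # each agent pays the sum of the Shapley prices of her units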

3.
This paper presents a critical review of research in end-user information system satisfaction (EUISS). An extensive literature search is conducted from which over 50 EUISS related papers are identified. It is found that the past research is dominated by the expectation disconfirmation approach. To provide more insights into the psychological processing of the information system performance construct and its impact upon EUISS, we propose an integrated conceptual model based on the equity and needs theories. The implications of the proposed model for EUISS are discussed, and suggestions are made for testing the model.  相似文献   

4.
Sources for human hepatitis E virus (HEV) infections of genotype 3 are largely unknown. Pigs are potential animal reservoirs for HEV. Intervention at pig farms may be desired when pigs are confirmed as a source for human infections, requiring knowledge about transmission routes. These routes are currently understudied. The current study aims to quantify the likelihood that pig feces cause new HEV infections in pigs through oral ingestion. We estimated the daily infection risk for pigs by modeling the fate of HEV in the fecal–oral (F–O) pathway. Using parameter values deemed most plausible by the authors based on current knowledge, the daily risk of infection was 0.85 (95% interval: 0.03–1). The associated expected number of new infections per day was ~4 (2.5% limit 0.1, the 97% limit tending to infinity), compared to 0.7 observed in a transmission experiment with pigs, and the likelihood of feces causing the transmission approached 1. In alternative scenarios, F–O transmission of HEV was also very likely to cause new infections. By reducing the total value of all explanatory variables by two orders of magnitude, the expected number of newly infected pigs approached the observed number. The likelihood of F–O transmission decreased with decreasing parameter values, allowing for at most 94% of infections being caused by additional transmission routes. Nevertheless, in all scenarios F–O transmission was estimated to contribute to HEV transmission. Thus, despite the difficulty in infecting pigs with HEV via oral inoculation, the F–O route is likely to cause HEV transmission among pigs.
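A rough sketch of the kind of calculation underlying such a daily infection risk (all parameter values below are invented for illustration and are not the study's estimates): the virus dose ingested per day via feces is pushed through an exponential dose-response relation, and the expected number of newly infected pen mates follows.

    import numpy as np

    # Illustrative assumptions, not the study's parameter values.
    conc_feces    = 1e5    # HEV genome copies per gram of feces
    grams_per_day = 10.0   # grams of feces ingested per pig per day
    p_per_copy    = 1e-4   # assumed infection probability per ingested copy
    susceptible   = 5      # susceptible pigs sharing the pen

    dose = conc_feces * grams_per_day
    p_daily = 1.0 - np.exp(-p_per_copy * dose)       # exponential dose-response
    expected_new = susceptible * p_daily

    print(f"daily infection risk per pig: {p_daily:.3f}")
    print(f"expected new infections per day: {expected_new:.2f}")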

5.
In this paper, we propose a simple bias-reduced log-periodogram regression estimator, $\hat{d}_r$, of the long-memory parameter, d, that eliminates the first- and higher-order biases of the Geweke and Porter-Hudak (1983) (GPH) estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes the frequencies raised to the powers 2k for k=1,…,r, for some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency. Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of $\hat{d}_r$, determine the asymptotically MSE-optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of $\hat{d}_r$. These results show that the bias of $\hat{d}_r$ goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, while its variance is increased only by a multiplicative constant. We show that the bias-reduced estimator $\hat{d}_r$ attains the optimal rate of convergence for a class of spectral densities that includes those that are smooth of order s ≥ 1 at zero when r ≥ (s − 2)/2 and m is chosen appropriately. For s > 2, the GPH estimator does not attain this rate. The proof uses results of Giraitis, Robinson, and Samarov (1997). We specify a data-dependent plug-in method for selecting the number of frequencies m to minimize the asymptotic MSE for a given value of r. Monte Carlo simulation results for stationary Gaussian ARFIMA(1, d, 1) and ARFIMA(2, d, 0) models show that the bias-reduced estimators perform well relative to the standard log-periodogram regression estimator.
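A rough sketch of the estimator family just described (illustrative code written for this summary, not the authors'): regress the log periodogram at the first m Fourier frequencies on a constant and −2 log λ_j; the bias-reduced variant adds λ_j^(2k), k = 1, …, r, as regressors, and the coefficient on −2 log λ_j estimates d.

    import numpy as np

    def log_periodogram_d(x, m, r=0):
        """GPH-style estimate of the long-memory parameter d.
        r = 0 gives the ordinary GPH estimator; r >= 1 adds the powers
        lam**(2k) as bias-reducing regressors."""
        n = len(x)
        lam = 2.0 * np.pi * np.arange(1, m + 1) / n            # Fourier frequencies
        dft = np.fft.fft(x - np.mean(x))
        I = np.abs(dft[1:m + 1]) ** 2 / (2.0 * np.pi * n)      # periodogram ordinates
        y = np.log(I)
        cols = [np.ones(m), -2.0 * np.log(lam)]
        cols += [lam ** (2 * k) for k in range(1, r + 1)]
        beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        return beta[1]                                         # coefficient on -2*log(lam)

    # Toy example on an arbitrary series (not a proper ARFIMA simulation):
    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)
    print("GPH estimate:         ", log_periodogram_d(x, m=200, r=0))
    print("bias-reduced estimate:", log_periodogram_d(x, m=200, r=1))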

6.
We provide a framework for integration of high–frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency return volatilities and return distributions. Building on the theory of continuous–time arbitrage–free price processes and the theory of quadratic variation, we develop formal links between realized volatility and the conditional covariance matrix. Next, using continuously recorded observations for the Deutschemark/Dollar and Yen/Dollar spot exchange rates, we find that forecasts from a simple long–memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal–normal mixture distribution produces well–calibrated density forecasts of future returns, and correspondingly accurate quantile predictions. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation, and financial risk management applications.  相似文献   
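A compact sketch of the two building blocks described above, with simulated stand-in data rather than the FX series used in the paper, and a univariate AR(1) standing in for the long-memory vector autoregression: daily realized variance is the sum of squared intraday returns, and the forecasting model is fit to the logarithmic realized volatilities.

    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in for five-minute returns: 288 intervals per day over 500 days (illustrative).
    days, per_day = 500, 288
    intraday = 0.0005 * rng.standard_normal((days, per_day))

    rv = np.sum(intraday ** 2, axis=1)        # daily realized variance
    log_rvol = np.log(np.sqrt(rv))            # daily log realized volatility

    # Simple AR(1) on log realized volatility as a forecasting model.
    y, ylag = log_rvol[1:], log_rvol[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("one-day-ahead log realized volatility forecast:", b[0] + b[1] * log_rvol[-1])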

7.
A decision maker is asked to express her beliefs by assigning probabilities to certain possible states. We focus on the relationship between her database and her beliefs. We show that if beliefs given a union of two databases are a convex combination of beliefs given each of the databases, the belief formation process follows a simple formula: beliefs are a similarity‐weighted average of the beliefs induced by each past case.  相似文献   
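In symbols (notation introduced here to paraphrase the representation, not copied from the paper): writing p(· | D) for the beliefs formed given database D, the convex-combination property over unions of databases forces

    p(\cdot \mid D) = \frac{\sum_{c \in D} s(c)\, p_c(\cdot)}{\sum_{c \in D} s(c)},

where s(c) > 0 is a similarity weight attached to past case c and p_c is the belief induced by case c alone; that is, beliefs are the similarity-weighted average of the case-induced beliefs.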

8.
Price–volume agreements are commonly negotiated between drug manufacturers and third‐party payers for drugs. In one form a drug manufacturer pays a rebate to the payer on a portion of sales in excess of a specified threshold. We examine the optimal design of such an agreement under complete and asymmetric information about demand. We consider two types of uncertainty: information asymmetry, defined as the payer's uncertainty about mean demand; and market uncertainty, defined as both parties' uncertainty about true demand. We investigate the optimal contract design in the presence of asymmetric information. We find that an incentive compatible contract always exists; that the optimal price is decreasing in expected market size, while the rebate may be increasing or decreasing in expected market size; that the optimal contract for a manufacturer with the highest possible demand would include no rebate; and, in a special case, if the average reservation profit is non‐decreasing in expected market size, then the optimal contract includes no rebates for all manufacturers. Our analysis suggests that price–volume agreements with a rebate rate of 100% are not likely to be optimal if payers have the ability to negotiate prices as part of the agreement.  相似文献   
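For concreteness (symbols chosen here for illustration, not taken from the paper), the rebate-on-excess form of the agreement can be written as follows: with unit price p, rebate rate ρ ∈ [0, 1], threshold T, and realized sales volume q, the manufacturer's net revenue is

    R(q) = p\,q - \rho\, p\, \max(q - T,\, 0),

so every unit sold beyond the threshold effectively earns the discounted price (1 − ρ)p; the design question is how p, ρ, and T should be set when the payer is uncertain about mean demand and both parties face market uncertainty.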

9.
We introduce a two‐period Stackelberg game of a supplier and buyer. We recognize that learning from manufacturing experience has many advantages. Consistent with the literature, we assume both the buyer and supplier realize reductions in their respective production costs in period 2 due to volume‐based learning from period 1 production. In addition, we introduce another learning concept, the future value, to capture the buyer's benefits of transferring current manufacturing experience for the design and development of future products and technologies. In contrast to the literature, we allow the supplier two mechanisms to impact the buyer's outsourcing decision: price and the investment in integration process improvement (IPI) that reduces the buyer's unit cost of integration. IPI may include the investment in new materials, specialized technology, or the re‐design of the integration process. Conditions are given whereby the buyer partially outsources component demand as opposed to fully outsourcing or fully producing in‐house. Furthermore, conditions are given characterizing when the supplier's price and investment in IPI are substitute strategies versus complements. Both analytic and numerical results are presented.  相似文献   

10.
丁涛, 梁樑 《中国管理科学》2016, 24(8): 132-138
In multi-attribute decision-making problems, different attribute weights produce different evaluation results. Because real problems are complex and uncertain, decision makers also face uncertainty in determining the attribute weights. This uncertainty stems both from the complexity and variability of the underlying problem and from the vagueness and randomness of the decision makers' choices. Existing research mostly converts the uncertain weight information into relatively certain information (e.g., interval numbers), forcibly eliminating the uncertainty and thereby introducing considerable risk into the decision results. Taking the perspective of alternative ranking, this paper studies the dominance relations among alternatives and the robustness of their rankings over the weight space. First, a dominance matrix is defined to characterize the pairwise dominance relations between alternatives under uncertain weight information. Second, the ranking interval of each alternative is analyzed, i.e., its best and worst possible ranks over all feasible weight combinations. Then, the ranking probability of each alternative over the full ranking is defined and a method for computing it is given. The decision steps and implementation procedure of the method are then presented. Finally, the method is applied to the port evaluation of an ocean-shipping group.
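A minimal sketch of the ranking-interval and ranking-probability idea (for illustration only; the paper treats these quantities analytically rather than by the crude weight sampling used here): weight vectors are drawn uniformly from the simplex, alternatives are scored by a weighted sum, and each alternative's best rank, worst rank, and rank frequencies are recorded.

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative decision matrix: 4 alternatives x 3 attributes (larger values are better).
    scores = np.array([[0.7, 0.5, 0.9],
                       [0.6, 0.8, 0.4],
                       [0.9, 0.3, 0.6],
                       [0.5, 0.7, 0.7]])
    n_alt, n_attr = scores.shape
    n_samples = 20_000

    rank_counts = np.zeros((n_alt, n_alt))     # rank_counts[i, k]: times alternative i takes rank k
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(n_attr))     # uniform draw from the weight simplex
        order = np.argsort(-(scores @ w))      # position 0 = best-ranked alternative
        for rank, alt in enumerate(order):
            rank_counts[alt, rank] += 1

    rank_prob = rank_counts / n_samples
    for i in range(n_alt):
        attained = np.nonzero(rank_counts[i])[0]
        print(f"alternative {i}: rank interval [{attained.min() + 1}, {attained.max() + 1}], "
              f"rank probabilities {np.round(rank_prob[i], 3)}")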

11.
Speed is an increasingly important determinant of which suppliers will be given customers' business and is defined as the time between when an order is placed by the customer and when the product is delivered, or as the amount of time customers must wait before they receive their desired service. In either case, the speed a customer experiences can be enhanced by giving priority to that particular customer. Such a prioritization scheme will necessarily reduce the speed experienced by lower‐priority customers, but this can lead to a better outcome when different customers place different values on speed. We model a single resource (e.g., a manufacturer) that processes jobs from customers who have heterogeneous waiting costs. We analyze the price that maximizes priority revenue for the resource owner (i.e., supplier, manufacturer) under different assumptions regarding customer behavior. We discover that a revenue‐maximizing supplier facing self‐interested customers (i.e., those that independently minimize their own expected costs) charges a price that also minimizes the expected total delay costs across all customers and that this outcome does not result when customers coordinate to submit priority orders at a level that seeks to minimize their aggregate costs of priority fees and delays. Thus, the customers are better off collectively (as is the supplier) when the supplier and customers act independently in their own best interests. Finally, as the number of priority classes increases, both the priority revenues and the overall customer delay costs improve, but at a decreasing rate.  相似文献   
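The delay mechanics behind these results can be illustrated with a standard two-class non-preemptive priority M/M/1 queue (the specific queueing assumptions here are illustrative and need not match the paper's model): prioritizing one class lowers its expected wait at the expense of the other, while the arrival-rate-weighted average wait is conserved.

    # Two-class non-preemptive priority M/M/1 queue (illustrative parameters).
    lam_hi, lam_lo = 0.3, 0.4       # arrival rates of high- and low-priority customers
    mu = 1.0                        # service rate
    rho_hi, rho_lo = lam_hi / mu, lam_lo / mu

    residual = (lam_hi + lam_lo) / mu**2                      # mean residual work seen on arrival
    w_hi = residual / (1 - rho_hi)                            # expected queueing delay, high priority
    w_lo = residual / ((1 - rho_hi) * (1 - rho_hi - rho_lo))  # expected queueing delay, low priority
    w_fifo = residual / (1 - rho_hi - rho_lo)                 # expected queueing delay with no priorities

    print(f"FIFO wait: {w_fifo:.2f}   priority waits: high {w_hi:.2f}, low {w_lo:.2f}")
    # When high-priority customers have higher waiting costs, total expected delay cost
    # can fall even though low-priority customers wait longer.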

12.
We develop a search‐theoretic model of financial intermediation in an over‐the‐counter market and study how trading frictions affect the distribution of asset holdings and standard measures of liquidity. A distinctive feature of our theory is that it allows for unrestricted asset holdings, so market participants can accommodate trading frictions by adjusting their asset positions. We show that these individual responses of asset demands constitute a fundamental feature of illiquid markets: they are a key determinant of trade volume, bid–ask spreads, and trading delays—the dimensions of market liquidity that search‐based theories seek to explain.  相似文献   

13.
Paul F. Deisler, Jr. 《Risk analysis》1997, 17(6): 797-806
A Score Comparison Method (SCM) for use in comparative risk projects is described. It provides analytical guidance to those who must integrate environmental issues that have been placed into separate qualitative rankings, each according to a different type of risk, into a single qualitative, integrated risk ranking. Its use in an actual case is shown.

14.
Lance Collinet, Colin Firer 《Omega》2003, 31(6): 523-538
This study analyses the relative performance of general equity unit trusts from 1980 to 1999 using a database that has been verified for accuracy and is free of survivorship bias. It characterises the behaviour of performance persistence in order to explain the conflicting results of previous persistence studies. A positive but weak relationship was found between past and future performance rankings. As the holding period lengthened, the persistence results became more sensitive to the beginning and ending dates of the period under examination. Regardless of the ending date chosen, persistence of winning funds and losing funds was evident when holding periods of 6 months were used. Persistence was particularly evident during the 1995–1999 period. However, even in this period, there were situations where rankings from one holding period to the next appeared random and situations where rankings reversed. Although individual unit trusts did not perform consistently over multiple holding periods, a trading strategy of buying the top-performing fund over the last 6 months and holding it for 6 months would, in most cases, have earned an investor a return over 5 years that beat the average return of all general equity unit trusts after taking switching costs into account.

15.
The regime of excellence – manifested in journal rankings and research assessments – is coming to increasing prominence in the contemporary university. Critical scholars have responded to the encroaching ideology of excellence in various ways: while some seek to defend such measures of academic performance on the grounds that they provide accountability and transparency in place of elitism and privilege, others have criticized their impact on scholarship. The present paper contributes to the debate by exploring the relationship between the regime of excellence and critical management studies (CMS). Drawing on extensive interviews with CMS professors, we show how the regime of excellence is eroding the ethos of critical scholars. As a result, decisions about what to research and where to publish are increasingly being made according to the diktats of research assessments, journal rankings and managing editors of premier outlets. This suggests that CMS researchers may find themselves inadvertently aiding and abetting the rise of managerialism in the university sector, which raises troubling questions about the future of critical scholarship in the business school.  相似文献   

16.
17.
Conflict coaching with recently appointed managers. The author discusses conflict coaching with recently appointed managers. A basic premise is that newcomers first have to prove themselves as managers in the eyes of their colleagues; otherwise a great variety of complications may emerge. These complications are shaped by the way the manager was recruited, by the situation of his predecessor, and by the specific organizational task. Accordingly, in coaching processes these different conflict eventualities have to be dealt with in different ways.

18.
Kun Xie, Kaan Ozbay, Hong Yang, Di Yang 《Risk analysis》2019, 39(6): 1342-1357
The widely used empirical Bayes (EB) and full Bayes (FB) methods for before–after safety assessment are sometimes limited because of the extensive data needs from additional reference sites. To address this issue, this study proposes a novel before–after safety evaluation methodology based on survival analysis and longitudinal data as an alternative to the EB/FB method. A Bayesian survival analysis (SARE) model with a random effect term to address the unobserved heterogeneity across sites is developed. The proposed survival analysis method is validated through a simulation study before its application. Subsequently, the SARE model is developed in a case study to evaluate the safety effectiveness of a recent red‐light‐running photo enforcement program in New Jersey. As demonstrated in the simulation and the case study, the survival analysis can provide valid estimates using only data from treated sites, and thus its results will not be affected by the selection of defective or insufficient reference sites. In addition, the proposed approach can take into account the censored data generated due to the transition from the before period to the after period, which has not been previously explored in the literature. Using individual crashes as units of analysis, survival analysis can incorporate longitudinal covariates such as the traffic volume and weather variation, and thus can explicitly account for the potential temporal heterogeneity.  相似文献   
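As a hedged sketch of the survival-analysis ingredient only (not the SARE model itself, which adds site-level random effects and longitudinal covariates): with times-to-crash that may be right-censored at the before/after transition, the log-likelihood of a simple exponential survival model with a treatment indicator can be maximized directly.

    import numpy as np
    from scipy.optimize import minimize

    # Invented illustrative data: days to next crash, event indicator (0 = right-censored),
    # and an indicator for the after-treatment period.
    t     = np.array([120., 340.,  90., 400., 250.,  60., 380., 200.])
    event = np.array([  1.,   0.,   1.,   0.,   1.,   1.,   0.,   1.])
    after = np.array([  0.,   0.,   0.,   1.,   1.,   1.,   1.,   1.])

    def negloglik(beta):
        b0, b1 = beta
        rate = np.exp(b0 + b1 * after)   # hazard of the exponential survival model
        # Events contribute log f(t) = log(rate) - rate*t; censored times contribute log S(t) = -rate*t.
        return -(event * np.log(rate) - rate * t).sum()

    fit = minimize(negloglik, x0=np.array([-5.0, 0.0]))
    print("estimated change in log hazard after treatment:", fit.x[1])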

19.
Decision making in food safety is a complex process that involves several criteria of different nature like the expected reduction in the number of illnesses, the potential economic or health-related cost, or even the environmental impact of a given policy or intervention. Several multicriteria decision analysis (MCDA) algorithms are currently used, mostly individually, in food safety to rank different options in a multifactorial environment. However, the selection of the MCDA algorithm is a decision problem on its own because different methods calculate different rankings. The aim of this study was to compare the impact of different uncertainty sources on the rankings of MCDA problems in the context of food safety. For that purpose, a previously published data set on emerging zoonoses in the Netherlands was used to compare different MCDA algorithms: MMOORA, TOPSIS, VIKOR, WASPAS, and ELECTRE III. The rankings were calculated with and without considering uncertainty (using fuzzy sets), to assess the importance of this factor. The rankings obtained differed between algorithms, emphasizing that the selection of the MCDA method had a relevant impact in the rankings. Furthermore, considering uncertainty in the ranking had a high influence on the results. Both factors were more relevant than the weights associated with each criterion in this case study. A hierarchical clustering method was suggested to aggregate results obtained by the different algorithms. This complementary step seems to be a promising way to decrease extreme difference among algorithms and could provide a strong added value in the decision-making process.  相似文献   
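To make one of the listed algorithms concrete, here is a minimal crisp TOPSIS sketch (invented data and weights; the study additionally propagates uncertainty with fuzzy sets, which is omitted here):

    import numpy as np

    # Decision matrix: rows = alternatives (e.g., zoonoses), columns = benefit-type criteria.
    X = np.array([[7., 3., 5.],
                  [4., 6., 6.],
                  [8., 2., 4.]])
    w = np.array([0.5, 0.3, 0.2])                 # criteria weights (illustrative)

    V = w * X / np.linalg.norm(X, axis=0)         # vector-normalized, weighted matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal and anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to the ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal point
    closeness = d_neg / (d_pos + d_neg)           # closeness coefficient used for ranking

    print("closeness scores:", np.round(closeness, 3))
    print("ranking (best first):", np.argsort(-closeness))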

20.
This paper considers a panel data model for predicting a binary outcome. The conditional probability of a positive response is obtained by evaluating a given distribution function (F) at a linear combination of the predictor variables. One of the predictor variables is unobserved. It is a random effect that varies across individuals but is constant over time. The semiparametric aspect is that the conditional distribution of the random effect, given the predictor variables, is unrestricted. This paper has two results. If the support of the observed predictor variables is bounded, then identification is possible only in the logistic case. Even if the support is unbounded, so that (from Manski (1987)) identification holds quite generally, the information bound is zero unless F is logistic. Hence consistent estimation at the standard √n rate is possible only in the logistic case.
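The special role of the logistic link can be seen in a standard two-period version of the model (sketched here with notation introduced for illustration): conditioning on exactly one positive response across the two periods eliminates the individual effect only when F is logistic,

    P(y_{i1} = 0,\, y_{i2} = 1 \mid x_i, \alpha_i,\, y_{i1} + y_{i2} = 1)
        = \frac{\exp\{(x_{i2} - x_{i1})'\beta\}}{1 + \exp\{(x_{i2} - x_{i1})'\beta\}},

which no longer involves α_i and so supports √n-consistent estimation of β; no analogous reduction is available for other choices of F, consistent with the zero information bound stated above.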
