Similar Documents
20 similar documents found.
1.
For dose–response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose–response model with parameters α and β, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P(d) as the probability of infection at a given mean dose d, the widely used dose–response model P(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), together with constraint conditions on α̂ and β̂ as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the 85 models examined, 68 were identified as valid applications of the approximate model, all of which matched the corresponding exact beta-Poisson dose–response curve almost perfectly.
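As an illustration (a minimal sketch, not the authors' code), the snippet below evaluates the exact beta-Poisson model through the Kummer confluent hypergeometric function, the approximate formula, and the proposed validity measure Pr(0 < r < 1 | α, β), using the fact that the approximate model corresponds to r ~ Gamma(shape α, rate β). The parameter values are assumptions chosen to satisfy α ≪ β and β ≫ 1.

```python
import numpy as np
from scipy.special import hyp1f1
from scipy.stats import gamma

def p_exact(d, alpha, beta):
    """Exact beta-Poisson: P(d) = 1 - 1F1(alpha, alpha + beta, -d)."""
    return 1.0 - hyp1f1(alpha, alpha + beta, -d)

def p_approx(d, alpha, beta):
    """Approximate beta-Poisson: P(d) = 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha, beta):
    """Pr(0 < r < 1) for r ~ Gamma(shape=alpha, rate=beta)."""
    return gamma.cdf(1.0, a=alpha, scale=1.0 / beta)

alpha, beta = 0.25, 40.0                  # illustrative values, alpha << beta, beta >> 1
print("Pr(0 < r < 1) =", round(validity_measure(alpha, beta), 4))
for d in np.logspace(-1, 3, 5):           # doses from 0.1 to 1000
    print(f"d={d:8.1f}  exact={p_exact(d, alpha, beta):.4f}  "
          f"approx={p_approx(d, alpha, beta):.4f}")
```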

2.
The error estimate of Borgonovo's moment-independent index δ is considered, and it is shown that the possible computational complexity of δ is mainly due to the probability density function (PDF) estimate, because the PDF estimate is an ill-posed problem whose convergence rate is quite slow. This suggests computing Borgonovo's index by other means. To avoid the PDF estimate, δ, which is defined through the PDF, is first approximately represented by the cumulative distribution function (CDF). The CDF estimate is well posed and its convergence rate is always faster than that of the PDF estimate. From this representation, a stable approach is proposed to compute δ with an adaptive procedure. Since a small-probability multidimensional integral needs to be computed in this procedure, a computational strategy named asymptotic space integration is introduced to reduce the high-dimensional integral to a one-dimensional integral. The small-probability integral can then be computed by adaptive numerical integration in one dimension with an improved convergence rate. Numerical error analysis of several examples shows that the proposed method is an effective approach to computing this uncertainty importance measure.
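For concreteness, here is a naive Monte Carlo sketch of the PDF-based definition δ_i = ½ E_{X_i}[∫ |f_Y(y) − f_{Y|X_i}(y)| dy] using kernel density estimates; this is exactly the slowly converging route that motivates the paper's CDF-based approach, and the toy model g is an assumption, not one of the paper's examples.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)

def g(x1, x2):
    return x1 + 0.5 * x2 ** 2                  # toy model with two standard normal inputs

n_outer, n_inner = 50, 2000
y_all = g(rng.normal(size=n_inner), rng.normal(size=n_inner))
f_y = gaussian_kde(y_all)                      # unconditional output density estimate

grid = np.linspace(y_all.min() - 1.0, y_all.max() + 1.0, 400)
shifts = []
for x1 in rng.normal(size=n_outer):            # outer loop over X_1
    y_cond = g(x1, rng.normal(size=n_inner))   # sample of Y given X_1 = x1
    f_cond = gaussian_kde(y_cond)
    shifts.append(trapezoid(np.abs(f_y(grid) - f_cond(grid)), grid))

delta_1 = 0.5 * np.mean(shifts)                # Borgonovo's delta for X_1
print(f"estimated delta for X1: {delta_1:.3f}")
```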

3.
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting P(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model P(d | α, β, ρ) is proposed in which the minimum number of organisms required to cause infection, Kmin, is not fixed but is a random variable following a geometric distribution with parameter ρ. The single-hit beta-Poisson model P(d | α, β) is a special case of the generalized model with Kmin = 1 (which implies ρ = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median estimates produced fall short of meeting the required condition of ρ = 1 for the single-hit assumption. However, for three of the four data sets the generalized model could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model makes it possible to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model.
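The following is a minimal likelihood-free ABC rejection sketch of the kind the abstract mentions, shown on the two-parameter approximate beta-Poisson model for brevity (the three-parameter model would additionally draw the geometric parameter, written ρ above as a placeholder symbol, and simulate Kmin). The data set, priors, and tolerance are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed dose-response data: dose, number of subjects, number infected
doses    = np.array([10.0, 100.0, 1000.0, 10000.0])
subjects = np.array([20, 20, 20, 20])
infected = np.array([2, 7, 14, 19])

def p_infect(d, alpha, beta):
    return 1.0 - (1.0 + d / beta) ** (-alpha)

accepted = []
for _ in range(100_000):
    alpha = rng.uniform(0.01, 2.0)                 # flat prior (assumed)
    beta = 10 ** rng.uniform(-1.0, 4.0)            # log-uniform prior (assumed)
    sim = rng.binomial(subjects, p_infect(doses, alpha, beta))
    if np.abs(sim - infected).sum() <= 4:          # L1 distance tolerance
        accepted.append((alpha, beta))

post = np.array(accepted)
print(f"{len(post)} accepted draws; posterior medians: "
      f"alpha={np.median(post[:, 0]):.3f}, beta={np.median(post[:, 1]):.1f}")
```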

4.
Matthew Revie, Risk Analysis, 2011, 31(7): 1120–1132
Traditional statistical procedures for estimating the probability of an event produce an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but these are often ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than those in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The minimax procedure provides a risk-averse inferential procedure where no events are realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be closely approximated by a simple expression in n, the number of trials.
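Since the paper's simple closed-form approximation is elided in the abstract above, the sketch below only reproduces the problem it addresses: with zero observed events the MLE is identically zero, while two standard conservative benchmarks (the rule-of-three 95% upper bound 3/n and the Jeffreys posterior median) remain positive.

```python
from scipy.stats import beta

def zero_event_summaries(n):
    mle = 0.0 / n                                   # MLE after zero events in n trials
    rule_of_three = 3.0 / n                         # approximate 95% upper confidence bound
    jeffreys_median = beta.ppf(0.5, 0.5, n + 0.5)   # median of Beta(1/2, n + 1/2) posterior
    return mle, rule_of_three, jeffreys_median

for n in (10, 100, 1000):
    mle, r3, jm = zero_event_summaries(n)
    print(f"n={n:5d}  MLE={mle:.4f}  3/n={r3:.4f}  Jeffreys median={jm:.5f}")
```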

5.
We superimpose a radiation fallout model onto a traffic flow model to assess evacuation versus shelter-in-place decisions after the daytime ground-level detonation of a 10-kt improvised nuclear device in Washington, DC. In our model, ≈80k people are killed by the prompt effects of blast, burn, and radiation. Of the ≈360k survivors without access to a vehicle, 42.6k would die if they immediately self-evacuated on foot. Sheltering above ground would save several thousand of these lives, and sheltering in a basement (or near the middle of a large building) would save even more of them. Among survivors of the prompt effects with access to a vehicle, the number of deaths depends on the fraction of people who shelter in a basement rather than self-evacuate in their vehicle: 23.1k people die if 90% shelter in a basement and 54.6k die if only 10% shelter. Sheltering above ground saves approximately half as many lives as sheltering in a basement. The details related to delayed (i.e., organized) evacuation, search and rescue, decontamination, and situational awareness (via, e.g., telecommunications) have very little impact on the number of casualties. Although antibiotics and transfusion support have the potential to save ≈10k lives (and the number of lives saved by medical care increases with the fraction of people who shelter in basements), the logistical challenge appears to be well beyond current response capabilities. Taken together, our results suggest that the government should initiate an aggressive outreach program to educate citizens and the private sector about the importance of sheltering in place in a basement for at least 12 hours after a terrorist nuclear detonation.

6.
Microbiological food safety is an important economic and health issue in the context of globalization and presents food business operators with new challenges in providing safe foods. The hazard analysis and critical control point approach involves identifying the main steps in food processing and the physical and chemical parameters that have an impact on the safety of foods. In the risk-based approach, as defined in the Codex Alimentarius, controlling these parameters so that the final products meet a food safety objective (FSO), fixed by the competent authorities, is a big challenge and of great interest to food business operators. Process risk models, issued from the quantitative microbiological risk assessment framework, provide useful tools in this respect. We propose a methodology, called multivariate factor mapping (MFM), for establishing a link between process parameters and compliance with an FSO. For a stochastic and dynamic process risk model of a microbial hazard in soft cheese made from pasteurized milk with many uncertain inputs, multivariate sensitivity analysis and MFM are combined to (i) identify the critical control points (CCPs) throughout the food chain and (ii) compute the critical limits of the most influential process parameters, located at the CCPs, with regard to the specific process implemented in the model. Because of certain forms of interaction among parameters, the results show some new possibilities for the management of microbiological hazards when an FSO is specified.
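A minimal Monte Carlo factor-mapping sketch in the spirit of MFM is shown below, on an invented log-linear growth model rather than the paper's soft-cheese model: sample uncertain process parameters, flag runs that violate an assumed FSO, and compare input distributions between compliant and violating runs to locate influential parameters and candidate critical limits.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
fso = 2.0                                  # assumed FSO: 2 log10 CFU/g at consumption

c0 = rng.normal(-1.0, 0.5, n)              # initial contamination (log10 CFU/g)
rate = rng.uniform(0.02, 0.12, n)          # growth rate (log10 per day)
days = rng.triangular(5, 15, 40, n)        # storage time (days)

final = c0 + rate * days                   # log-linear growth model (assumed)
ok = final <= fso

print(f"P(final > FSO) = {(~ok).mean():.3f}")
for name, x in [("c0", c0), ("rate", rate), ("days", days)]:
    # a large gap between the conditional means flags the parameter as a candidate CCP
    print(f"{name:>5}: mean|compliant={x[ok].mean():7.3f}  mean|violating={x[~ok].mean():7.3f}")
# a high quantile among compliant runs suggests a critical limit for the parameter
print("candidate critical limit on rate:", round(float(np.quantile(rate[ok], 0.95)), 3))
```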

7.
Few studies have focused on the different roles risk factors play in the multistate temporal natural course of breast cancer. We propose a three-state Markov regression model to predict the risk of transition from free of breast cancer (FBC) to the preclinical screen-detectable phase (PCDP) and from the PCDP to the clinical phase (CP). We searched for the initiators and promoters affecting the onset and subsequent progression of breast tumors to build a three-state temporal natural history model with state-dependent genetic and environmental covariates. This risk assessment model was applied to a cohort of one million Taiwanese women and was verified by external validation with another independent data set. We identified three kinds of initiators: the BRCA gene, seven single-nucleotide polymorphisms, and breast density. ER, Ki-67, and HER-2 were found to be promoters, and body mass index and age at first pregnancy both played a role. Among women carrying the BRCA gene, the 10-year predicted risk for the transition from FBC to CP was 25.83%, 20.31%, and 13.84% for the high-, intermediate-, and low-risk groups, respectively; the corresponding figures were 1.55%, 1.22%, and 0.76% among noncarriers. The mean sojourn time in the PCDP ranged from 0.82 years for the highest-risk group to 6.21 years for the lowest. The lack of statistical significance in external validation supported the adequacy of the proposed model. This three-state model with state-dependent covariates of initiators and promoters enables individually tailored screening and personalized clinical surveillance of early breast cancer.
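For intuition, here is a minimal sketch of a progressive three-state Markov model (FBC → PCDP → CP) with assumed, illustrative transition intensities; in the paper, initiators and promoters would enter as covariates scaling these state-dependent rates.

```python
import numpy as np
from scipy.linalg import expm

lam1 = 0.004   # FBC -> PCDP intensity per year (assumed)
lam2 = 0.45    # PCDP -> CP intensity per year (assumed)

Q = np.array([[-lam1,  lam1,  0.0],
              [ 0.0,  -lam2,  lam2],
              [ 0.0,   0.0,   0.0]])        # CP is absorbing

P10 = expm(Q * 10.0)                        # 10-year transition probability matrix
print(f"10-year P(FBC -> CP) = {P10[0, 2]:.4f}")
print(f"mean sojourn time in PCDP = {1.0 / lam2:.2f} years")
```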

8.
This article investigates how different decisions can be reached when decision makers consult a binary rating system and a scale rating system. Since typically decision makers use rating information to make binary decisions, it is particularly important to compare the scale system to the binary system. We show that the only N-point scale system that reports a rater's opinion consistently with the binary system is one where N is odd and is not divisible by 4. At the aggregate level, however, we illustrate that inconsistencies persist regardless of the choice of N. In addition, we provide simple tools that can determine whether the systems lead decision makers to the same decision outcomes.

9.
Firms often determine whether or not to make components common across products by focusing on the manufacturing and sales of new products only. However, component commonality decisions that ignore remanufacturing can adversely affect the profitability of the firm. In this article we analyze how remanufacturing could reverse the OEM's commonality decision that is based on the manufacturing and sales of new products only. Specifically, we determine the conditions under which the OEM's optimal decision on commonality may be reversed and illustrate how her profit can be significantly higher if remanufacturing is taken into account ex ante. We illustrate the implementation of our model for two products in the Apple iPad family.

10.
In this paper we consider the multidimensional binary vector assignment problem. An input of this problem is defined by m disjoint multisets \(V^1, V^2, \ldots , V^m\), each composed of n binary vectors of size p. An output is a set of n disjoint m-tuples of vectors, where each m-tuple is obtained by picking one vector from each multiset \(V^i\). To each m-tuple we associate a p-dimensional vector by applying the bit-wise AND operation on the m vectors of the tuple. The objective is to minimize the total number of zeros in these n vectors; we denote this problem by min Σ0, and its restriction to instances where every vector has at most c zeros by min Σ0(≤c). The restricted problem was previously only known to be hard, even for small fixed parameters. We show that, assuming the unique games conjecture, approximating the restricted problem below a certain constant factor is NP-hard; this result is tight, as any solution attains that factor. We also prove, without assuming the UGC, that the restricted problem remains hard even for very small c. Finally, we show that the problem is polynomial-time solvable when one of the parameters is fixed, although this cannot be extended to the other.
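To make the problem definition concrete, here is a brute-force sketch: fixing the order of V^1, it tries every alignment (one permutation per remaining multiset), ANDs each m-tuple bit-wise, and counts zeros. Its cost is (n!)^(m−1) alignments, so it is for tiny instances only; the instance data are invented.

```python
from itertools import permutations, product
import numpy as np

def min_sum_zeros(multisets):
    """multisets: list of m arrays of shape (n, p) with 0/1 entries."""
    first, rest = multisets[0], multisets[1:]
    n = first.shape[0]
    best = None
    for perms in product(*(permutations(range(n)) for _ in rest)):
        vectors = first.copy()
        for V, perm in zip(rest, perms):
            vectors = vectors & V[list(perm)]     # bit-wise AND across each m-tuple
        zeros = int((vectors == 0).sum())
        best = zeros if best is None else min(best, zeros)
    return best

V1 = np.array([[1, 1, 0], [1, 0, 1]])
V2 = np.array([[1, 0, 1], [0, 1, 1]])
V3 = np.array([[1, 1, 1], [1, 0, 1]])
print("minimum total number of zeros:", min_sum_zeros([V1, V2, V3]))   # -> 3
```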

11.
We study a two-product inventory model that allows substitution. Both products can be used to supply demand over a selling season of N periods, with a one-time replenishment opportunity at the beginning of the season. A substitution may be offered even when the demanded product is available. The substitution rule is flexible in the sense that the seller can choose whether or not to offer substitution and at what price or discount level, and the customer may or may not accept the offer, with the acceptance probability being a decreasing function of the substitution price. The decisions are the replenishment quantities at the beginning of the season and the dynamic substitution-pricing policy in each period of the season. Using a stochastic dynamic programming approach, we present a complete solution to the problem. Furthermore, we show that the objective function is concave and submodular in the inventory levels, structural properties that facilitate the solution procedure and help identify threshold policies for the optimal substitution/pricing decisions. Finally, with a state transformation, we also show that the objective function is L♮-concave, which allows us to derive similar structural properties of the optimal policy for multiple-season problems.
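The sketch below implements a stripped-down version of the dynamic program described here: one customer per period, a menu of substitution prices offered only upon a stockout, and an acceptance probability decreasing in the offered price. Prices, demand probabilities, and the acceptance curve are assumptions, and the paper's model is richer (for instance, substitution may be offered even when the demanded product is in stock).

```python
from functools import lru_cache

N = 10                          # periods in the selling season
PRICE = 10.0                    # regular price of either product
SUB_PRICES = (6.0, 8.0, 10.0)   # candidate substitution prices
P_DEMAND = (0.5, 0.5)           # P(customer wants product 0 / product 1)

def accept(q):                  # acceptance probability, decreasing in price q
    return max(0.0, 1.0 - q / 12.0)

@lru_cache(maxsize=None)
def V(t, x0, x1):
    """Expected revenue-to-go with t periods left and inventories (x0, x1)."""
    if t == 0 or (x0 == 0 and x1 == 0):
        return 0.0
    total = 0.0
    for want, p in enumerate(P_DEMAND):
        inv = (x0, x1)
        if inv[want] > 0:                        # demanded product in stock: sell it
            nxt = (x0 - 1, x1) if want == 0 else (x0, x1 - 1)
            total += p * (PRICE + V(t - 1, *nxt))
        elif inv[1 - want] > 0:                  # stockout: choose the substitution price
            nxt = (x0, x1 - 1) if want == 0 else (x0 - 1, x1)
            best = max(accept(q) * (q + V(t - 1, *nxt))
                       + (1.0 - accept(q)) * V(t - 1, x0, x1)
                       for q in SUB_PRICES)
            total += p * best
        else:                                    # both products out of stock
            total += p * V(t - 1, x0, x1)
    return total

print(f"expected season revenue starting from (4, 4): {V(N, 4, 4):.2f}")
```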

12.
Detecting abnormal events is one of the fundamental issues in wireless sensor networks (WSNs). In this paper, we investigate \((\alpha ,\tau )\)-monitoring in WSNs. For a given monitored threshold \(\alpha \), we prove that (i) the tight upper bound of \(\Pr [{S(t)} \ge \alpha ]\) is \(O\left( {\exp \left\{ { - n\ell \left( {\frac{\alpha }{{n\,sup}},\frac{{\mu (t)}}{{n\,sup}}} \right) } \right\} } \right) \) if \(\mu (t) < \alpha \), and (ii) the tight upper bound of \(\Pr [{S(t)} \le \alpha ]\) is the same expression if \(\mu (t) > \alpha \), where \(\Pr [X]\) is the probability of the random event \(X\), \(S(t)\) is the sum of the sensed values in the monitored area at time \(t\), \(n\) is the number of sensor nodes, \(sup\) is the upper bound on the sensed data, \(\mu (t)\) is the expectation of \(S(t)\), and \(\ell ({x_1},{x_2}) = {x_1}\ln \left( {\frac{{{x_1}}}{{{x_2}}}} \right) + (1 - {x_1})\ln \left( {\frac{{1 - {x_1}}}{{1 - {x_2}}}} \right) \). An instant \((\alpha ,\tau )\)-monitoring scheme is then developed based on this upper bound. Moreover, approximate continuous \((\alpha , \tau )\)-monitoring is investigated: we prove that the probability of a false negative alarm is \(\delta \) if, for a given precision requirement, the sample size exceeds a threshold expressed in terms of the appropriate fractile of a standard normal distribution. Finally, the performance of the proposed algorithms is validated through experiments.
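The tail bound quoted above is directly computable, since ℓ(x₁, x₂) is the Kullback-Leibler divergence between Bernoulli distributions with means x₁ and x₂; the sketch below evaluates it for assumed values of n, sup, μ(t), and α.

```python
import math

def bernoulli_kl(x1, x2):
    """l(x1, x2) = x1*ln(x1/x2) + (1 - x1)*ln((1 - x1)/(1 - x2))."""
    return x1 * math.log(x1 / x2) + (1.0 - x1) * math.log((1.0 - x1) / (1.0 - x2))

def tail_bound(alpha, mu, n, sup):
    """exp(-n * l(alpha/(n*sup), mu/(n*sup))), the exponent of the stated bound."""
    return math.exp(-n * bernoulli_kl(alpha / (n * sup), mu / (n * sup)))

n, sup = 100, 1.0              # 100 sensor nodes, readings bounded by 1 (assumed)
mu = 40.0                      # E[S(t)] (assumed)
for alpha in (50.0, 60.0, 70.0):
    print(f"alpha={alpha:5.1f}  Pr[S(t) >= alpha] <= ~{tail_bound(alpha, mu, n, sup):.3e}")
```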

13.
We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available, which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have deadlines, and have machine-type dependent sizes. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. As we observe the slack of jobs to have a fundamental influence on the competitiveness, we parameterize instances by their (minimum) slack: an instance is said to have slack \(\beta \) if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least \(\beta \). While for \(\beta < s\) no finite competitiveness is possible, where s denotes the largest setup time, our main result is an online algorithm for \(\beta = (1+\varepsilon )s\). Its competitiveness depends only on \(\varepsilon \) and the cost ratio of the machine types, and it is proven to be optimal up to a constant factor.

14.
Patterned self-assembly tile set synthesis (PATS) aims at minimizing the number of distinct DNA tile types used to self-assemble a given rectangular color pattern. For an integer k, k-PATS is the subproblem of PATS that restricts input patterns to those with at most k colors. We give an efficient verifier and, based on it, establish a manually-checkable proof of the NP-hardness of 11-PATS; the best previous manually-checkable proof was for 29-PATS.

15.
Many firms make significant investments into developing and managing knowledge within their supply chains. Such investments are often prudent because studies indicate that supply chain knowledge (SCK) has a positive influence on performance. Key questions still surround the SCK–performance relationship, however. First, what is the overall relationship between SCK and performance? Second, under what conditions is the relationship stronger or weaker? To address these questions, we applied meta-analysis to 35 studies of the SCK–performance relationship that collectively include more than 8,400 firms. Our conservative estimate of the effect size of the overall relationship is .39. We also find that the SCK–performance relationship is stronger when (i) examining operational performance, (ii) gathering data from more than one supply chain node, (iii) gathering data from multiple countries, (iv) examining service industries, and (v) among more recently published studies. We also found that studies that embraced a single theory base (as opposed to using multiple ones) reported a stronger SCK–performance relationship. Looking to the future, our meta-analysis highlights the need for studies to (i) include lags between the measurement of SCK and performance, (ii) gather upstream data when examining innovation, (iii) examine SCK within emerging countries, and (iv) provide much more information on the nuances of the SCK examined.
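As a reminder of the kind of aggregation such a meta-analysis performs, here is a minimal inverse-variance pooling sketch for correlations via the Fisher z-transform; the study correlations and sample sizes are invented, not the 35 studies analyzed here.

```python
import numpy as np

r = np.array([0.30, 0.45, 0.38, 0.52, 0.25])   # assumed per-study correlations
n = np.array([120, 240, 180, 90, 310])         # assumed per-study sample sizes

z = np.arctanh(r)                  # Fisher z-transform of each correlation
w = n - 3                          # inverse of Var(z) = 1/(n - 3)
z_bar = np.sum(w * z) / np.sum(w)  # fixed-effect pooled z
print(f"pooled correlation: {np.tanh(z_bar):.3f}")
```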

16.
In an inflationary economy of declining R&D expenditures, effective transfer of innovative ideas from the literature to the laboratory can significantly complement an R&D budget. This paper discusses an effective library management organization and practical mechanisms for improving technology transfer from the corporate library. Experience at Sanders Associates Inc. with the techniques presented has been most favourable, and substantial savings in R&D dollars have been realized.

17.
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. Under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve; column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this study, starting with a concave program formulation called SDCP that is based on segment-level consideration sets, we add a class of constraints called product constraints (σPC) that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP, and we compare the performance of both methods on the benchmark data sets in the literature. In our computational testing, 2PC achieves the CDLP value at a fraction of the CPU time taken by column generation. For a large network, our 2PC procedure runs in under 70 seconds to come within 0.02% of the CDLP value, while column generation takes around 1 hour; for an even larger network with 68 legs, column generation does not converge even in 10 hours for most scenarios, while 2PC runs in under 9 minutes. We therefore believe our approach is very promising for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.

18.
This paper studies the nonparametric identification of the first-price auction model with risk-averse bidders within the private value paradigm. First, we show that the benchmark model is nonidentified from observed bids. We also derive the restrictions imposed by the model on observables and show that these restrictions are weak. Second, we establish the nonparametric identification of the bidders' utility function under exclusion restrictions. Our primary exclusion restriction takes the form of exogenous bidder participation, leading to a latent distribution of private values that is independent of the number of bidders. The key idea is to exploit the property that the bid distribution varies with the number of bidders while the private value distribution does not. We then extend these results to endogenous bidder participation, where the exclusion restriction takes the form of instruments that do not affect the bidders' private value distribution. Though derived for a benchmark model, our results extend to more general cases such as a binding reserve price, affiliated private values, and asymmetric bidders. Last, possible estimation methods are proposed.

19.
Entropy is a classical statistical concept with appealing properties. Establishing asymptotic distribution theory for smoothed nonparametric entropy measures of dependence has so far proved challenging. In this paper, we develop an asymptotic theory for a class of kernel-based smoothed nonparametric entropy measures of serial dependence in a time-series context. We use this theory to derive the limiting distribution of Granger and Lin's (1994) normalized entropy measure of serial dependence, which was previously not available in the literature. We also apply our theory to construct a new entropy-based test for serial dependence, providing an alternative to Robinson's (1991) approach. To obtain accurate inferences, we propose and justify a consistent smoothed bootstrap procedure. The naive bootstrap is not consistent for our test. Our test is useful in, for example, testing the random walk hypothesis, evaluating density forecasts, and identifying important lags of a time series. It is asymptotically locally more powerful than Robinson's (1991) test, as is confirmed in our simulation. An application to the daily S&P 500 stock price index illustrates our approach.
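To illustrate the flavor of a kernel-based entropy measure of serial dependence, the sketch below estimates the mutual information between X_t and X_{t−1} with Gaussian kernel density estimates and maps it to [0, 1] via 1 − exp(−2I), a normalization that reduces to the squared correlation in the Gaussian case; this is an illustration of the idea, not the paper's exact statistic or its smoothed-bootstrap inference.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def serial_entropy_measure(x, k=1):
    pairs = np.vstack([x[k:], x[:-k]])                # (X_t, X_{t-k}) pairs
    joint = gaussian_kde(pairs)
    marg0, marg1 = gaussian_kde(pairs[0]), gaussian_kde(pairs[1])
    s = joint.resample(4000)                          # Monte Carlo draws from the joint KDE
    mi = np.mean(np.log(joint(s)) - np.log(marg0(s[0])) - np.log(marg1(s[1])))
    return 1.0 - np.exp(-2.0 * max(mi, 0.0))          # in [0, 1); near 0 under independence

ar1 = np.zeros(1000)
for t in range(1, 1000):                              # AR(1) with strong serial dependence
    ar1[t] = 0.6 * ar1[t - 1] + rng.normal()
print(f"AR(1) series: {serial_entropy_measure(ar1):.3f}")
print(f"white noise:  {serial_entropy_measure(rng.normal(size=1000)):.3f}")
```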

20.
We study the asymptotic distribution of three-step estimators of a finite-dimensional parameter vector where the second step consists of one or more nonparametric regressions on a regressor that is estimated in the first step. The first-step estimator is either parametric or nonparametric. Using Newey's (1994) path-derivative method, we derive the contribution of the first-step estimator to the influence function. In this derivation, it is important to account for the dual role that the first-step estimator plays in the second-step nonparametric regression, that is, that of conditioning variable and that of argument.
