Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
Kenny Crump, Risk Analysis, 2020, 40(4): 696–704
Glyphosate is a widely used herbicide worldwide. The International Agency for Research on Cancer declared in 2015 that glyphosate is probably carcinogenic to humans, noting a positive association with non-Hodgkin lymphoma (NHL). The principal human data on glyphosate and NHL come from five case–control studies and two cohort studies. The case–control studies are at risk of recall bias, because information on pesticide exposure was collected from cases and controls based on their memories. Two of the case–control studies are additionally at risk of a form of selection bias that can exacerbate the effect of recall bias. Both biases run in the direction of making glyphosate appear carcinogenic. If odds ratios (ORs) are unbiased and a pesticide plays no role in causing NHL, the probability that an OR for that pesticide exceeds 1.0 is approximately 0.5. The fractions of ORs for pesticides other than glyphosate that exceed 1.0 in the case–control studies are 0.90 (n = 92), 0.90 (n = 152), 0.93 (n = 59), 0.76 (n = 140), and 0.53 (n = 54), the first two coming from the studies at risk for both types of bias. In the two cohort studies, which are not subject to these biases, the comparable fractions for relative risks for all cancers are 0.51 (n = 70) and 0.48 (n = 158). These results agree closely with what would be expected if the evidence for carcinogenicity of glyphosate in these studies resulted from statistical bias in the case–control studies.
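The sign-test logic behind these fractions is simple enough to check numerically: under no bias and no causal role, each OR exceeds 1.0 with probability roughly 0.5, so an observed fraction can be compared against a binomial null. A minimal sketch using the counts quoted above (not code from the paper; it treats ORs within a study as independent, which the abstract's heuristic also implicitly does):

```python
from scipy.stats import binomtest

# Fractions of ORs (or RRs) > 1.0 and counts, as reported in the abstract.
# Under no bias and no causal role, P(OR > 1.0) is approximately 0.5.
studies = {
    "case-control 1 (recall + selection bias)": (0.90, 92),
    "case-control 2 (recall + selection bias)": (0.90, 152),
    "case-control 3": (0.93, 59),
    "case-control 4": (0.76, 140),
    "case-control 5": (0.53, 54),
    "cohort 1": (0.51, 70),
    "cohort 2": (0.48, 158),
}

for name, (frac, n) in studies.items():
    k = round(frac * n)  # number of ratios greater than 1.0
    p = binomtest(k, n, p=0.5, alternative="greater").pvalue
    print(f"{name}: {k}/{n} above 1.0, one-sided binomial p = {p:.2e}")
```

The case–control fractions of 0.90 and 0.93 are wildly inconsistent with the fair-coin null, while the cohort fractions near 0.5 are exactly what the null predicts.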

2.
This paper describes a case-assignment (calendaring) algorithm for a multi-judge appellate court system. In the algorithm, cases of unequal work content are selected for assignment to one of m panels (or clusters) from a set of N available cases. Each panel of cases is heard by a team of three judges. Each appellate case has an estimated workload and a priority ranking based on the type of appeal and its filing date with the court. The algorithm balances both the total workload and the number of cases assigned to each panel while ensuring that the highest-priority cases among those available are assigned. The assignment problem is normally capacity-constrained, in that not all of the N cases can be assigned to one of the m panels on the monthly calendar. The algorithm is based on a neighborhood search and a bounding principle that continually improve upon an initial feasible solution. Empirical results are presented to demonstrate the effectiveness and efficiency of the algorithm.
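As a concrete baseline for the balancing objective described here, the sketch below assigns the highest-priority cases first, each to the least-loaded panel that still has room. It is the kind of greedy seed a neighborhood search would then improve, not the authors' algorithm, and all names are illustrative:

```python
import heapq

def greedy_calendar(cases, m, cap):
    """cases: list of (priority, workload); lower priority value = more urgent.
    m: number of panels; cap: max cases per panel (capacity constraint).
    Greedy seed: take cases in priority order and place each on the panel
    with the smallest current total workload that still has room."""
    heap = [(0.0, 0, p) for p in range(m)]      # (workload, case count, panel id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(m)}
    overflow = []                               # cases deferred to a later calendar
    for idx in sorted(range(len(cases)), key=lambda i: cases[i][0]):
        _prio, work = cases[idx]
        if not heap:                            # every panel is full
            overflow.append(idx)
            continue
        load, cnt, p = heapq.heappop(heap)
        assignment[p].append(idx)
        if cnt + 1 < cap:                       # drop full panels from the heap
            heapq.heappush(heap, (load + work, cnt + 1, p))
    return assignment, overflow

panels, deferred = greedy_calendar(
    [(1, 8.0), (1, 5.5), (2, 3.0), (3, 6.0), (3, 2.0)], m=2, cap=2)
```

A neighborhood search would then swap or move cases between panels whenever the exchange reduces the workload imbalance without displacing a higher-priority case.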

3.
In this paper we consider the constant-rank unconstrained quadratic 0-1 optimization problem, CR-QP01 for short. This problem consists in minimizing the quadratic function ⟨x, Ax⟩ + ⟨c, x⟩ over the set {0,1}^n, where c is a vector in ℝ^n and A is a symmetric real n × n matrix of constant rank r. We first present a pseudo-polynomial algorithm for solving CR-QP01, which is known to be NP-hard already for r = 1. We then derive two new classes of special cases of CR-QP01 which can be solved in polynomial time. These classes result from further restrictions on the matrix A. Finally we compare our algorithm with the algorithm of Allemand et al. (2001) for CR-QP01 with negative semidefinite A and extend the range of applicability of the latter algorithm. It turns out that neither of the two algorithms dominates the other with respect to the class of instances which can be solved in polynomial time.
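For orientation, the objective ⟨x, Ax⟩ + ⟨c, x⟩ can be checked by brute force on small instances; the sketch below is a reference evaluator for validating any specialized CR-QP01 solver, not the paper's pseudo-polynomial algorithm (it is exponential, so only n ≲ 20 is practical):

```python
import itertools
import numpy as np

def qp01_bruteforce(A, c):
    """Minimize <x, Ax> + <c, x> over x in {0,1}^n by enumeration.
    A: symmetric (n, n) array of constant rank r; c: length-n array."""
    n = len(c)
    best_val, best_x = np.inf, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ A @ x + c @ x
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
A = B @ B.T            # rank-2 symmetric matrix, as in the constant-rank setting
c = rng.standard_normal(6)
print(qp01_bruteforce(A, c))
```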

4.
The 2-INTERVAL PATTERN problem is to find the largest constrained pattern in a set of 2-intervals. The constrained pattern is a subset of the given 2-intervals such that any pair of them is R-comparable, where the model R is a nonempty subset of {<, ⊏, ≬}. The problem stems from the study of general representations of RNA secondary structures. In this paper, we give three improved algorithms for different models. First, an O(n log n + L) algorithm is proposed for the case R = {≬}, where L is the total length of all 2-intervals (the density d is the maximum number of 2-intervals over any point). This improves the previous O(n² log n) algorithm. Second, we use dynamic programming techniques to obtain an O(n log n + dn) algorithm for the case R = {<, ⊏}, which improves the previous O(n²) result. Finally, we present another algorithm for the case R = {⊏, ≬} with disjoint support (interval ground set), which improves the previous O(n²√n) upper bound. A preliminary version of this article appears in Proceedings of the 16th Annual International Symposium on Algorithms and Computation, Springer LNCS, Vol. 3827, pp. 412–421, Hainan, China, December 19–21, 2005.

5.
On the generalized constrained longest common subsequence problems
We investigate four variants of the longest common subsequence problem. Given two sequences X, Y and a constrained pattern P of lengths m, n, and ρ, respectively, the generalized constrained longest common subsequence (GC-LCS) problems are to find a longest common subsequence of X and Y including (or excluding) P as a subsequence (or substring). We propose new dynamic programming algorithms for solving the GC-LCS problems in O(mnρ) time. We also consider the case where the number of constrained patterns is arbitrary.
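For the first of these variants (P included as a subsequence), the O(mnρ) dynamic program can be sketched directly. The sketch below follows the standard CLCS recurrence rather than reproducing the paper's algorithms, and the function name is illustrative:

```python
def clcs_include_subsequence(X, Y, P):
    """Longest common subsequence of X and Y that contains P as a subsequence.
    dp[k][i][j] = best value over X[:i], Y[:j] with P[:k] already embedded;
    -inf marks infeasible states. Runs in O(m * n * rho) time."""
    m, n, rho = len(X), len(Y), len(P)
    NEG = float("-inf")
    dp = [[[0 if k == 0 else NEG for _ in range(n + 1)]
           for _ in range(m + 1)] for k in range(rho + 1)]
    for k in range(rho + 1):
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                best = max(dp[k][i - 1][j], dp[k][i][j - 1])
                if X[i - 1] == Y[j - 1]:
                    best = max(best, dp[k][i - 1][j - 1] + 1)  # match, P unchanged
                    if k and X[i - 1] == P[k - 1]:
                        # match also advances the constrained pattern
                        best = max(best, dp[k - 1][i - 1][j - 1] + 1)
                dp[k][i][j] = best
    return dp[rho][m][n]   # -inf if no common subsequence contains P

assert clcs_include_subsequence("abcbdab", "bdcaba", "b") == 4
```

The exclusion and substring variants modify only the transition on P, which is why all four run within the same O(mnρ) bound.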

6.
In the binary single-constraint Knapsack Problem, denoted KP, we are given a knapsack of fixed capacity c and a set of n items. Each item j, j = 1,...,n, has an associated size or weight wj and a profit pj. The goal is to determine whether or not each item j should be included in the knapsack, so as to maximize the total profit without exceeding the capacity c. In this paper, we study the sensitivity of the optimum of the KP to perturbations of either the profit or the weight of an item. We give approximate and exact interval limits for both cases (profit and weight) and propose several polynomial-time algorithms able to reach these interval limits. The performance of the proposed algorithms is evaluated on a large number of problem instances.
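The exact profit interval discussed here can be computed transparently, if inefficiently, by re-solving the knapsack with an item forced in and forced out: if item j belongs to an optimum of value z*, its profit may drop by z* minus the best value excluding j before the optimal set changes. A minimal sketch of that reasoning (not the paper's polynomial-time procedures):

```python
def knapsack(values, weights, cap, forced_in=None, forced_out=None):
    """0/1 knapsack by DP over capacity; returns the optimal total profit.
    forced_in / forced_out optionally fix one item's decision (for sensitivity)."""
    base = 0
    if forced_in is not None:
        base, cap = values[forced_in], cap - weights[forced_in]
        assert cap >= 0, "forced item does not fit"
    dp = [0] * (cap + 1)
    for j, (p, w) in enumerate(zip(values, weights)):
        if j in (forced_in, forced_out):
            continue
        for c in range(cap, w - 1, -1):      # classic backward DP sweep
            dp[c] = max(dp[c], dp[c - w] + p)
    return base + dp[cap]

values, weights, cap = [60, 100, 120], [10, 20, 30], 50
z_star = knapsack(values, weights, cap)                  # 220, items {1, 2}
j = 2
z_in = knapsack(values, weights, cap, forced_in=j)
z_out = knapsack(values, weights, cap, forced_out=j)
if z_in == z_star:   # item j is in an optimum: profit may fall by z* - z_out
    print(f"p_{j} can decrease by up to {z_star - z_out} (and increase freely)")
else:                # item j is out: profit must rise by z* - z_in to enter
    print(f"p_{j} must increase by more than {z_star - z_in} to enter an optimum")
```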

7.
Prevention of the emergence and spread of foodborne diseases is an important prerequisite for the improvement of public health. Source attribution models link sporadic human cases of a specific illness to food sources and animal reservoirs. With next-generation sequencing technology, it is possible to develop novel source attribution models. We investigated the potential of machine learning to predict, from whole-genome sequencing, the animal reservoir from which a bacterial strain isolated from a human salmonellosis case originated. Machine learning methods recognize patterns in large and complex data sets and use this knowledge to build models. The model learns patterns associated with genetic variation in bacteria isolated from the different animal reservoirs. We selected different machine learning algorithms to predict sources of human salmonellosis cases and trained the model with Danish Salmonella Typhimurium isolates sampled from broilers (n = 34), cattle (n = 2), ducks (n = 11), layers (n = 4), and pigs (n = 159). Using cgMLST as input features, the model yielded an average accuracy of 0.783 (95% CI: 0.77–0.80) in the source prediction for the random forest and 0.933 (95% CI: 0.92–0.94) for the logit boost algorithm. The logit boost algorithm was the most accurate (validation accuracy: 92%, CI: 0.8706–0.9579) and predicted the origin of 81% of the domestic sporadic human salmonellosis cases. The most important source was Danish-produced pigs (53%), followed by imported pigs (16%), imported broilers (6%), imported ducks (2%), Danish-produced layers (2%), and Danish-produced and imported cattle (<1%), while 18% were not predicted. Machine learning has potential for improving source attribution modeling based on sequence data. Results of such models can help risk managers identify and prioritize food safety interventions.
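A minimal sketch of the modeling step described here, using scikit-learn's random forest on one-hot-encoded cgMLST allele calls. The file name and column names are hypothetical, and the paper's logit boost variant would slot into the same pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical input: one row per isolate, one column per cgMLST locus
# (categorical allele numbers), plus the animal reservoir as the label.
df = pd.read_csv("st_typhimurium_cgmlst.csv")
X = df.drop(columns=["source"])          # loci, e.g. "locus_0001", ...
y = df["source"]                         # broiler / cattle / duck / layer / pig

model = Pipeline([
    ("encode", OneHotEncoder(handle_unknown="ignore")),  # alleles -> indicators
    ("forest", RandomForestClassifier(n_estimators=500, random_state=0)),
])
scores = cross_val_score(model, X, y, cv=5)              # source-prediction accuracy
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With the strong class imbalance in the training data (159 pig isolates versus 2 cattle), stratified cross-validation and per-class metrics matter as much as the headline accuracy.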

8.
In previous studies comparing the makespan of a schedule allowed to be preempted at most i times with that of the optimal schedule with an unlimited number of preemptions, the worst-case ratio was usually obtained by analyzing the structures of the optimal schedules. For the case of m identical machines, the worst-case ratio was shown to be 2m/(m+i+1) for any 0 ≤ i ≤ m−1 (Braun and Schmidt in SIAM J. Comput. 32(3):671–680, 2003), and they showed that the LPT algorithm is an exact algorithm which can guarantee this worst-case ratio for i = 0. In this paper, we propose a simpler method, based on the design and analysis of an algorithm together with a worst-case instance. It not only yields the worst-case ratio but also gives a linear algorithm that guarantees this ratio for any 0 ≤ i ≤ m−1, thus generalizing the previous results. We also discuss the trade-off between the objective value and the number of preemptions. In addition, we consider i-preemptive scheduling on two uniform machines. For both i = 0 and i = 1, we give two linear algorithms and present the worst-case ratios with respect to s, i.e., the ratio of the speeds of the two machines.
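For the non-preemptive end of the spectrum (i = 0), the LPT rule cited above is easy to state: sort jobs by processing time, longest first, and always assign to the least-loaded machine. A minimal sketch, with an instance where the 2m/(m+1) ratio against the fully preemptive optimum is tight:

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time list scheduling on m identical machines.
    For i = 0 this guarantees the worst-case ratio 2m/(m+1) against the
    optimal schedule with an unlimited number of preemptions."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):       # longest job first
        lightest = heapq.heappop(loads)        # least-loaded machine
        heapq.heappush(loads, lightest + p)
    return max(loads)

# m+1 unit jobs on m machines: LPT makespan 2 vs. preemptive optimum (m+1)/m,
# so the ratio is exactly 2m/(m+1) -- the worst case for i = 0 cited above.
m = 3
print(lpt_makespan([1, 1, 1, 1], m), "vs preemptive optimum", (m + 1) / m)
```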

9.
This paper presents an improved algorithm for solving the sum of linear fractional functions (SOLF) problem in 1-D and 2-D. A key subproblem in our solution is the off-line ratio query (OLRQ) problem, which asks for the optimal values of a sequence of m linear fractional functions (called ratios), each ratio subject to a feasible domain defined by O(n) linear constraints. Based on some geometric properties and the parametric linear programming technique, we develop an algorithm that solves the OLRQ problem in O((m+n)log(m+n)) time. The OLRQ algorithm can be used to speed up every iteration of a known iterative SOLF algorithm, from O(m(m+n)) time to O((m+n)log(m+n)), in 1-D and 2-D. Implementation results of our improved 1-D and 2-D SOLF algorithms have shown that in most cases they outperform the commonly used approaches for the SOLF problem. We also apply our techniques to some problems in computational geometry and other areas, improving on previous results.

10.
The topic of this paper is inference in models in which parameters are defined by moment inequalities and/or equalities. The parameters may or may not be identified. This paper introduces a new class of confidence sets and tests based on generalized moment selection (GMS). GMS procedures are shown to have correct asymptotic size in a uniform sense and are shown not to be asymptotically conservative. The power of GMS tests is compared to that of subsampling, m out of n bootstrap, and "plug-in asymptotic" (PA) tests. The latter three procedures are the only general procedures in the literature that have been shown to have correct asymptotic size (in a uniform sense) for the moment inequality/equality model. GMS tests are shown to have asymptotic power that dominates that of subsampling, m out of n bootstrap, and PA tests. Subsampling and m out of n bootstrap tests are shown to have asymptotic power that dominates that of PA tests.

11.
In this paper, we present a new model for RNA multiple sequence structural alignment based on the longest common subsequence. We consider both the off-line and on-line cases. For the off-line case, i.e., when the longest common subsequence is given as a linear graph with n vertices, we first present a polynomial O(n²) time algorithm to compute its maximum nested loop. We then consider a slightly different problem, the Maximum Loop Chain problem, and present an algorithm which runs in O(n⁵) time. For the on-line case, i.e., given m RNA sequences of length n, compute the longest common subsequence of them such that this subsequence either induces a maximum nested loop or the maximum number of matches, we present efficient algorithms using dynamic programming when m is small.

12.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and in vicinities of unity that are wider than O(n^(−1/3)). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of these procedures in empirical practice.

13.
A simple analytic solution to the dynamic version of Haber's law was derived, conditional on a specified toxic load exponent (n) and on exponential decline in environmental toxicant concentration. Such conditions are particularly relevant to assessing the ecotoxicity risk posed (e.g., to juvenile salmonids) by agricultural organophosphate (OP) pesticides that are subject to degradation and/or dissipation. A dynamic Haber's law model was fit to previously published detailed data on lethality for two aquatic species induced by six agricultural OP pesticides, and cruder fits were obtained to less detailed data on five other OP and two non-OP pesticides, indicating that for lethality, a range of 0.5 ≤ n ≤ 1.5 may be typical for OP pesticides. The AgDRIFT® stream deposition model was next used to establish that first-order or exponential loss, with dilution half-times on the order of ≤ 0.01 days, applies approximately to pesticide residues in streams that arise after aerial application of agricultural pesticides 100 feet upwind. The analytic model was then applied to demonstrate that pesticide concentrations deposited in downwind streams following an aerial application are effectively diluted by about 50- to 300-fold from their initial concentration. Riparian ecotoxicity risk assessment models that ignore this effective dilution, and base pesticide-specific estimates of reduced survival on the initial concentrations, are therefore unrealistically conservative.
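Under the stated conditions (toxic load exponent n, exponentially declining concentration), the closed form is short enough to state. The derivation below is a sketch consistent with the abstract, with the notation (C₀ for initial concentration, k for the first-order loss rate) assumed rather than taken from the paper:

```latex
% Toxic load TL under Haber's law with exponent n and exponentially
% declining concentration C(t) = C_0 e^{-kt} (notation assumed):
\[
TL(t) = \int_0^{t} \left(C_0 e^{-ks}\right)^{n} ds
      = C_0^{\,n} \int_0^{t} e^{-nks}\, ds
      = \frac{C_0^{\,n}}{nk}\left(1 - e^{-nkt}\right),
\qquad
TL(\infty) = \frac{C_0^{\,n}}{nk}.
\]
```

A decaying pulse therefore delivers the same total toxic load as holding the initial concentration C₀ constant for only 1/(nk) time units. For illustration, with n = 1 and a half-time of 0.01 days, k = ln 2 / 0.01 ≈ 69 day⁻¹, so the pulse is equivalent to roughly 0.014 days (about 21 minutes) of exposure at C₀, which is the sense in which risk models pegged to the initial concentration are unrealistically conservative.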

14.
An asymptotic theory is developed for nonlinear regression with integrated processes. The models allow for nonlinear effects from unit root time series and therefore deal with the case of parametric nonlinear cointegration. The theory covers integrable and asymptotically homogeneous functions. Sufficient conditions for weak consistency are given and a limit distribution theory is provided. The rates of convergence depend on the properties of the nonlinear regression function, and are shown to be as slow as n^(1/4) for integrable functions, and to be generally polynomial in n^(1/2) for homogeneous functions. For regressions with integrable functions, the limiting distribution theory is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process.

15.
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1−ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When terms in these polynomials are constrained to degree ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

16.
The independence number of a graph and its chromatic number are known to be hard to approximate. Due to recent complexity results, unless coRP = NP, there is no polynomial-time algorithm which approximates either of these quantities within a factor of n^(1−ε) for graphs on n vertices. We show that the situation is significantly better for the average case. For every edge probability p = p(n) in the range n^(−1/2+ε) ≤ p ≤ 3/4, we present an approximation algorithm for the independence number of graphs on n vertices, whose approximation ratio is O((np)^(1/2)/log n) and whose expected running time over the probability space G(n, p) is polynomial. An algorithm with similar features is described also for the chromatic number. A key ingredient in the analysis of both algorithms is a new large deviation inequality for eigenvalues of random matrices, obtained through an application of Talagrand's inequality.
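Not the paper's algorithm (which rests on spectral bounds via Talagrand's inequality), but a quick way to see the object being approximated: sample G(n, p) and extract an independent set with a minimum-residual-degree greedy pass:

```python
import random

def greedy_independent_set(n, p, seed=0):
    """Sample G(n, p) and build an independent set greedily by minimum degree.
    A baseline only -- the surveyed algorithm uses eigenvalue bounds instead."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:          # each edge present with probability p
                adj[u].add(v); adj[v].add(u)
    alive, indep = set(range(n)), []
    while alive:
        v = min(alive, key=lambda u: len(adj[u] & alive))  # min residual degree
        indep.append(v)
        alive -= adj[v] | {v}             # remove v and its neighbors
    return indep

print(len(greedy_independent_set(1000, 0.05)))
```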

17.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.

18.
Two Hamiltonian paths are said to be fully independent if the ith vertices of both paths are distinct for all i between 1 and n, where n is the number of vertices of the given graph. Hamiltonian paths in a set are said to be mutually fully independent if any two Hamiltonian paths in the set are fully independent. On the other hand, two Hamiltonian cycles are independent starting at v if both cycles start at a common vertex v and the ith vertices of both cycles are distinct for all i between 2 and n. Hamiltonian cycles in a set are said to be mutually independent starting at v if any two different cycles in the set are independent starting at v. The n-dimensional hypercube is widely used as the architecture for parallel machines. In this paper, we study its fault-tolerance properties and show that an n-dimensional hypercube with at most n−2 faulty edges can embed a set of fault-free mutually fully independent Hamiltonian paths between two adjacent vertices, and can embed a set of fault-free mutually independent Hamiltonian cycles starting at a given vertex. The number of tolerable faulty edges is optimal with respect to the worst case. An extended abstract of this paper appeared in Proceedings of the 2006 International Conference on Innovative Computing, Information and Control (ICICIC), pp. 288–292, IEEE Computer Society Press.

19.
Given a set S of starting vertices and a set T of terminating vertices in a graph G = (V, E) with non-negative weights on edges, the minimum Steiner network problem is to find a subgraph of G with the minimum total edge weight. In such a subgraph, we require that for each vertex s ∈ S and t ∈ T, there is a path from s to a terminating vertex as well as a path from a starting vertex to t. This problem can easily be proven NP-hard. For solving the minimum Steiner network problem, we first present an algorithm that runs in time and space that are both polynomial in n with constant degrees, but exponential in |S| + |T|, where n is the number of vertices in G. Then we present an algorithm that uses space quadratic in n and runs in time polynomial in n with degree O(max{max{|S|,|T|}−2, min{|S|,|T|}−1}). In spite of this degree, we prove that the number of Steiner vertices in our solution can be as large as |S| + |T| − 2. Our algorithm can enumerate all possible optimal solutions. The input graph G can be either undirected or directed acyclic. We also give a linear-time algorithm for the special case when min{|S|,|T|} = 1 and max{|S|,|T|} = 2. The minimum union paths problem is similar to the minimum Steiner network problem except that we are given a set H of hitting vertices in G in addition to the sets of starting and terminating vertices. We want to find a subgraph of G with the minimum total edge weight such that the conditions required by the minimum Steiner network problem are satisfied as well as the condition that every hitting vertex is on a path from a starting vertex to a terminating vertex. Furthermore, G must be directed acyclic. For solving the minimum union paths problem, we also present algorithms with a time and space tradeoff similar to that of the algorithms for the minimum Steiner network problem. We also give a linear-time algorithm for the special case when |S| = 1, |T| = 1, and |H| = 2. An extended abstract of part of this paper appears in Hsu et al. (1996).

20.
Semidefinite programming (SDP) relaxations are proving to be a powerful tool for finding tight bounds for hard discrete optimization problems. This is especially true for one of the easier NP-hard problems, the Max-Cut problem (MC). The well-known SDP relaxation for Max-Cut, here denoted SDP1, can be derived by a first lifting into matrix space and has been shown to be excellent both in theory and in practice. Recently the present authors have derived a new relaxation using a second lifting. This new relaxation, denoted SDP2, is strictly tighter than the relaxation obtained by adding all the triangle inequalities to the well-known relaxation. In this paper we present new results that further describe the remarkable tightness of this new relaxation. Let F̃_n denote the feasible set of SDP2 for the complete graph with n nodes, let F_n denote the appropriately defined projection of F̃_n into S_n, the space of real symmetric n × n matrices, and let C_n denote the cut polytope in S_n. Further let Y ∈ F̃_n be the matrix variable of the SDP2 relaxation and X ∈ F_n be its projection. Then for the complete graph on 3 nodes, F_3 = C_3 holds. We prove that the rank of the matrix variable of SDP2 completely characterizes the dimension of the face of the cut polytope in which the corresponding matrix X lies. This shows explicitly the connection between the rank of the variable Y of the second lifting and the possible locations of the projected matrix X within C_3. The results we prove for n = 3 cast further light on how SDP2 captures all the structure of C_3, and furthermore they are stepping stones for studying the general case n ≥ 4. For this case, we show that the characterization of the vertices of the cut polytope via rank Y = 1 extends to all n ≥ 4. More interestingly, we show that the characterization of the one-dimensional faces via rank Y = 2 also holds for n ≥ 4. Furthermore, we prove that if rank Y = 2 for n ≥ 3, then a simple algorithm exhibits the two rank-one matrices (corresponding to cuts) which are the vertices of the one-dimensional face of the cut polytope where X lies.
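The first-lifting relaxation SDP1 that this paper tightens is compact enough to state in a few lines. A minimal sketch with cvxpy (assumed installed), maximizing (1/4)⟨L, X⟩ over correlation matrices; SDP2 would replace X with the second-lifted variable Y discussed above:

```python
import cvxpy as cp
import numpy as np

def maxcut_sdp1(W):
    """SDP1 relaxation of Max-Cut: max (1/4)<L, X> s.t. diag(X) = 1, X PSD,
    where L is the graph Laplacian of the weight matrix W."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    X = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)),
                      [X >> 0, cp.diag(X) == 1])
    prob.solve()
    return prob.value, X.value

W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle K3: true max cut = 2
val, X = maxcut_sdp1(W)
print(round(val, 3))                      # SDP1 gives 2.25 for the triangle
```

The gap on K3 (2.25 versus the true value 2) is exactly the kind of slack that the triangle inequalities, and more strongly the second lifting SDP2, are designed to close.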
