Similar Documents
20 similar documents retrieved (search time: 312 ms).
1.
2.
Humans are continuously exposed to suspected or proven endocrine disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimental testing of this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using the whole mixture approach, we found a higher rate of pregnant women at risk (13%) than was indicated by more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).
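As a rough, purely hypothetical illustration of the comparison described in this abstract, the Python sketch below contrasts a compound-by-compound screen with a whole-mixture index. The chemical names, RfD values, exposure data, and the cosine-similarity stand-in for "sufficient similarity" are assumptions made for the sketch; they are not taken from the SELMA study or from the paper's SMRI definition.

```python
import numpy as np

# Hypothetical illustration only: neither the SELMA exposure data nor the paper's
# exact SMRI formula -- a minimal sketch of comparing a whole-mixture index
# against compound-by-compound hazard quotients.

rng = np.random.default_rng(0)

chemicals = ["phthalate_A", "phthalate_B", "bisphenol_X"]   # hypothetical "bad actors"
single_rfd = np.array([5.0, 8.0, 2.0])                      # hypothetical per-compound RfDs
mixture_rfd_total = 10.0                                    # hypothetical RfD for the reference mixture
ref_proportions = np.array([0.5, 0.3, 0.2])                 # composition of the "typical" mixture

# Simulated exposures for 1,000 pregnant women (log-normal, arbitrary scale).
exposures = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 3))

# Compound-by-compound: flag a woman if any single hazard quotient exceeds 1.
hq = exposures / single_rfd
frac_single = np.mean((hq > 1.0).any(axis=1))

# Whole-mixture: weight each woman's total dose by how similar her mixture is to
# the reference composition (cosine similarity as a crude stand-in for "sufficient
# similarity"), then compare against the mixture RfD.
total_dose = exposures.sum(axis=1)
similarity = (exposures @ ref_proportions) / (
    np.linalg.norm(exposures, axis=1) * np.linalg.norm(ref_proportions)
)
smri = similarity * total_dose / mixture_rfd_total
frac_mixture = np.mean(smri > 1.0)

print(f"fraction at risk, compound-by-compound: {frac_single:.1%}")
print(f"fraction at risk, whole-mixture index:  {frac_mixture:.1%}")
```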

3.
This paper considers testing problems where several of the standard regularity conditions fail to hold. We consider the case where (i) parameter vectors in the null hypothesis may lie on the boundary of the maintained hypothesis and (ii) there may be a nuisance parameter that appears under the alternative hypothesis, but not under the null. The paper establishes the asymptotic null and local alternative distributions of quasi‐likelihood ratio, rescaled quasi‐likelihood ratio, Wald, and score tests in this case. The results apply to tests based on a wide variety of extremum estimators and apply to a wide variety of models. Examples treated in the paper are: (i) tests of the null hypothesis of no conditional heteroskedasticity in a GARCH(1, 1) regression model and (ii) tests of the null hypothesis that some random coefficients have variances equal to zero in a random coefficients regression model with (possibly) correlated random coefficients.

4.
We analyze use of a quasi‐likelihood ratio statistic for a mixture model to test the null hypothesis of one regime versus the alternative of two regimes in a Markov regime‐switching context. This test exploits mixture properties implied by the regime‐switching process, but ignores certain implied serial correlation properties. When formulated in the natural way, the setting is nonstandard, involving nuisance parameters on the boundary of the parameter space, nuisance parameters identified only under the alternative, or approximations using derivatives higher than second order. We exploit recent advances by Andrews (2001) and contribute to the literature by extending the scope of mixture models, obtaining asymptotic null distributions different from those in the literature. We further provide critical values for popular models or bounds for tail probabilities that are useful in constructing conservative critical values for regime‐switching tests. We compare the size and power of our statistics to other useful tests for regime switching via Monte Carlo methods and find relatively good performance. We apply our methods to reexamine the classic cartel study of Porter (1983) and reaffirm Porter's findings.
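The sketch below illustrates the kind of quasi-likelihood ratio comparison the abstract describes, treating the data as an i.i.d. Gaussian mixture (the abstract notes that the test ignores the implied serial correlation). Critical values are obtained here by parametric simulation rather than by the paper's asymptotic results; the data, model, and replication counts are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative sketch: QLR comparison of a one-component vs. two-component
# Gaussian mixture, with null critical values from parametric simulation.
# Everything below is simulated; it does not reproduce the paper's models.

rng = np.random.default_rng(1)

def qlr_stat(y):
    """2 * (max log-lik under 2-component mixture - max log-lik under 1 component)."""
    y = y.reshape(-1, 1)
    ll1 = GaussianMixture(n_components=1, n_init=1, random_state=0).fit(y).score(y) * len(y)
    ll2 = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(y).score(y) * len(y)
    return 2.0 * (ll2 - ll1)

# "Observed" series: generated from a two-regime mixture, so the two-regime
# alternative holds here.
n = 400
regimes = rng.random(n) < 0.4
y_obs = np.where(regimes, rng.normal(1.5, 1.0, n), rng.normal(-0.5, 1.0, n))
stat_obs = qlr_stat(y_obs)

# Null distribution: simulate from the fitted one-regime (single normal) model.
mu_hat, sd_hat = y_obs.mean(), y_obs.std(ddof=1)
null_stats = np.array([qlr_stat(rng.normal(mu_hat, sd_hat, n)) for _ in range(200)])

crit = np.quantile(null_stats, 0.95)
print(f"QLR = {stat_obs:.2f}, simulated 5% critical value = {crit:.2f}, "
      f"reject one regime: {stat_obs > crit}")
```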

5.
Federal and other regulatory agencies often use or claim to use a weight of evidence (WoE) approach in chemical evaluation. Their approaches to the use of WoE, however, differ significantly, rely heavily on subjective professional judgment, and merit improvement. We review uses of WoE approaches in key articles in the peer‐reviewed scientific literature, and find significant variations. We find that a hypothesis‐based WoE approach, developed by Lorenz Rhomberg et al., can provide a stronger scientific basis for chemical assessment while improving transparency and preserving the appropriate scope of professional judgment. Their approach, while still evolving, relies on the explicit specification of the hypothesized basis for using the information at hand to infer the ability of an agent to cause human health impacts or, more broadly, affect other endpoints of concern. We describe and endorse such a hypothesis‐based WoE approach to chemical evaluation.

6.
Kevin M. Crofton. Risk Analysis, 2012, 32(10): 1784–1797
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose‐dependent interaction. However, the corresponding likelihood‐ratio‐based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds‐optimal second‐stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds‐optimal second‐stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice.

7.
This study tests the core hypotheses of Karasek's job demand-control model: high job demands (workload) in combination with low job control (autonomy) increase strains (job dissatisfaction; strain hypothesis), whereas high job demands in combination with high job control increase learning and development in the job (here: learning new skills in the first job; learning hypothesis). These hypotheses are tested in two ways: (a) by testing whether the mere combination of both job characteristics is associated with the expected outcomes, and (b) by testing for a statistical interaction between the two job characteristics in predicting the outcomes. A large dataset (n=2,212) of young workers in their first job was used to test all hypotheses. As young workers are presumably still in the process of adjusting themselves to their work environment, we expected that the effects of work characteristics on work outcomes would be stronger for this group than for more experienced workers. The results confirm both the strain and the learning hypothesis. We found a combined effect of both job characteristics, as well as a statistical interaction between both variables. The lowest level of job satisfaction was found in the “high strain” job, whereas the highest increase in skills was found in the “active” job. The consequences of these findings for theory and practice are discussed.
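A minimal sketch of the two testing strategies mentioned in the abstract, run on simulated data; the variable names, coefficients, and the quadrant split at zero (the mean of the standardized variables) are assumptions for illustration, not the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data (not the n = 2,212 sample of young workers).
rng = np.random.default_rng(2)
n = 2212
demands = rng.normal(size=n)          # standardized workload
control = rng.normal(size=n)          # standardized autonomy

# Simulated outcome with main effects plus a demands x control interaction.
satisfaction = (5 - 0.6 * demands + 0.5 * control + 0.3 * demands * control
                + rng.normal(scale=1.0, size=n))
df = pd.DataFrame({"satisfaction": satisfaction, "demands": demands, "control": control})

# (a) Combination of job characteristics: compare the four Karasek quadrants.
df["quadrant"] = np.select(
    [(demands > 0) & (control <= 0), (demands > 0) & (control > 0),
     (demands <= 0) & (control <= 0)],
    ["high strain", "active", "passive"], default="low strain")
print(df.groupby("quadrant")["satisfaction"].mean().round(2))

# (b) Statistical interaction: moderated regression with a product term.
model = smf.ols("satisfaction ~ demands * control", data=df).fit()
print(model.params.round(3))
```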

8.
In this paper we reflect on two related questions. First, how have we arrived at a position where null hypothesis significance testing is the dominant criterion employed by quantitative researchers when deciding on whether or not a result is ‘significant’? Second, how might we change the practice of quantitative management research by promoting a greater plurality of methods, and in doing so better enable scholars to put phenomena before design? We conclude by arguing that quantitative management researchers need to focus on the epistemological issues surrounding the role of scholarly reasoning in justifying knowledge claims. By embracing a plurality of approaches to reasoning, quantitative researchers will be better able to escape the straitjacket of null hypothesis significance testing and, in doing so, reorder their priorities by putting phenomena before design.

9.
The choice of stochastic process model used to describe the price dynamics of an underlying asset strongly affects derivative pricing and risk management. In the literature, the stochastic processes adopted for the same asset are often inconsistent or even contradictory. Taking the GBM and OU processes as examples, this paper proposes a statistical inference method for selecting, from several candidate models, the stochastic process that better describes the price dynamics of the underlying asset. The method applies the principle of backtesting: the data are split into an estimation window and a test window; the estimation window is used to estimate the parameters of the stochastic process; then, under the assumption that the model parameters remain constant, the out-of-sample distribution of the asset price at each time point in the test window is derived under the null hypothesis, and the null is accepted or rejected according to how frequently the observed data fall in the acceptance or rejection region. An empirical analysis of stochastic process selection is conducted with commodities, exchange rates, interest rates, and stocks as underlying assets. The empirical results show that some commonly used stochastic process models are not necessarily the best models.
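A minimal sketch of the backtesting logic described above, assuming a GBM null fitted on an estimation window and pointwise 95% acceptance intervals in the test window. The simulated series, window sizes, and the binomial summary (which ignores dependence across horizons and is only a rough indication) are illustrative assumptions, not the paper's procedure or data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulate a price path that is mean-reverting in log price (an OU-type process),
# so the GBM null is misspecified here.
n = 1000
log_p = np.zeros(n)
for t in range(1, n):
    log_p[t] = log_p[t - 1] + 0.05 * (0.0 - log_p[t - 1]) + 0.02 * rng.normal()
prices = 100 * np.exp(log_p)

est, test = prices[:750], prices[750:]

# Estimation window: fit the GBM null via i.i.d. normal log-returns.
r = np.diff(np.log(est))
m, s = r.mean(), r.std(ddof=1)

# Test window: under the null, log S_{t0+h} ~ N(log S_{t0} + m*h, s^2*h).
t0_log = np.log(est[-1])
h = np.arange(1, len(test) + 1)
lo = np.exp(t0_log + m * h + stats.norm.ppf(0.025) * s * np.sqrt(h))
hi = np.exp(t0_log + m * h + stats.norm.ppf(0.975) * s * np.sqrt(h))

# Count how often observed prices leave the acceptance region.
exceed = int(np.sum((test < lo) | (test > hi)))
pval = stats.binomtest(exceed, n=len(test), p=0.05).pvalue
print(f"exceedances: {exceed}/{len(test)}, binomial p-value vs 5% nominal: {pval:.3f}")
```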

10.
Kenny S. Crump. Risk Analysis, 2017, 37(10): 1802–1807
In an article recently published in this journal, Bogen(1) concluded that an NRC committee's recommendations that default linear, nonthreshold (LNT) assumptions be applied to dose–response assessment for noncarcinogens and nonlinear mode of action carcinogens are not justified. Bogen criticized two arguments used by the committee for LNT: when any new dose adds to a background dose that explains background levels of risk (additivity to background or AB), or when there is substantial interindividual heterogeneity in susceptibility (SIH) in the exposed human population. Bogen showed by examples that the SIH argument can fail. Herein is outlined a general proof that confirms Bogen's claim. However, it is also noted that SIH leads to a nonthreshold population distribution even if individual distributions all have thresholds, and that small changes to SIH assumptions can result in LNT. Bogen criticizes AB because it only applies when there is additivity to background, but offers no help in deciding when or how often AB holds. Bogen does not contradict the fact that AB can lead to LNT but notes that, even if low‐dose linearity results, the response at higher doses may not be useful in predicting the amount of low‐dose linearity. Although this is theoretically true, it seems reasonable to assume that generally there is some quantitative relationship between the low‐dose slope and the slope suggested at higher doses. Several incorrect or misleading statements by Bogen are noted.
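For reference, the additivity-to-background argument in its textbook form (not necessarily the exact formulation debated by Bogen and the committee) can be written as follows.

```latex
% Textbook form of the additivity-to-background (AB) argument; not necessarily
% the exact formulation debated by Bogen and the NRC committee.
\[
P(d) = F(B + d), \qquad
P(d) - P(0) = F(B + d) - F(B) \approx F'(B)\, d \quad \textrm{for small } d,
\]
% so the extra risk is approximately linear in the added dose d whenever the
% background dose B falls on a rising portion of F, even if F itself has a
% threshold below B.
```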

11.
This paper considers issues related to estimation, inference, and computation with multiple structural changes that occur at unknown dates in a system of equations. Changes can occur in the regression coefficients and/or the covariance matrix of the errors. We also allow arbitrary restrictions on these parameters, which permits the analysis of partial structural change models, common breaks that occur in all equations, breaks that occur in a subset of equations, and so forth. The method of estimation is quasi‐maximum likelihood based on Normal errors. The limiting distributions are obtained under more general assumptions than previous studies. For testing, we propose likelihood ratio type statistics to test the null hypothesis of no structural change and to select the number of changes. Structural change tests with restrictions on the parameters can be constructed to achieve higher power when prior information is present. For computation, an algorithm for an efficient procedure is proposed to construct the estimates and test statistics. We also introduce a novel locally ordered breaks model, which allows the breaks in different equations to be related yet not occurring at the same dates.

12.
Schedule delay claims are an important class of problems in current project management. Many analysis methods have been proposed for them, but none is universally accepted. A key difficulty is that, when several activities are delayed, the mechanism by which the delayed activities interact and its effect on the overall project duration have not been fully understood, which leads to problems such as unreasonable float ownership, unfair allocation of delay responsibility, and claim analysis results that are inconsistent with reality. To address these problems, this paper proposes a new float-based method for analyzing schedule delay claims. We first introduce the "combination effect" in schedule delays: when several activities are delayed, the total actual delay to the project duration is often not equal to the sum of the delays that each delayed activity would cause to the project individually. Taking this effect as the starting point, we use the float properties of the CPM network to analyze how delayed activities influence one another, and thereby reveal the regularities of the combination effect under multi-activity delays. On this basis, the apportionment ratio of the combination effect among the delayed activities is determined according to its influencing factors, from which each activity's share of responsibility in the delay claim is derived; this yields a float-based method for delay claim analysis. Finally, the method is compared with commonly used methods through a project example. Because the combination effect clearly reflects the mutual influence among delayed activities and the underlying way it affects the project duration, the proposed method apportions responsibility more fairly and realistically; moreover, because it can be implemented programmatically via float parameters, it avoids frequent network updates and is easier to apply. It can effectively remedy the shortcomings of current delay claim analysis methods and provides project managers with a powerful tool for delay claim analysis.
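A minimal sketch of the "combination effect" on a toy two-activity network; the activity durations, delays, and network layout are invented for illustration and do not reproduce the paper's example project or its float-based apportionment rule.

```python
# Two parallel activities between Start and Finish; project duration is the
# longest path. The joint delay to the project can differ from the sum of the
# delays each activity would cause alone.

durations = {"A": 10, "B": 8}          # activity durations in days
paths = [["A"], ["B"]]                 # each path: Start -> activity -> Finish

def project_duration(dur):
    return max(sum(dur[a] for a in path) for path in paths)

base = project_duration(durations)

def delay_effect(delays):
    """Project delay caused by adding the given delays (activity -> extra days)."""
    dur = {a: durations[a] + delays.get(a, 0) for a in durations}
    return project_duration(dur) - base

d_a = delay_effect({"A": 3})           # A alone: no float, delays the project by 3
d_b = delay_effect({"B": 3})           # B alone: 2 days absorbed by float, project delayed by 1
d_ab = delay_effect({"A": 3, "B": 3})  # together: 3, not 3 + 1

print(f"A alone: {d_a}, B alone: {d_b}, together: {d_ab}, "
      f"combination effect: {d_ab - (d_a + d_b)}")
```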

13.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet‐based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one‐way or two‐way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogeneous panel data. They have a convenient null limit N(0,1) distribution. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data‐driven method to choose the smoothing parameter—the finest scale in wavelet spectral estimation—making the tests completely operational in practice. The data‐driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in small and finite samples relative to some existing tests.

14.
Supposedly well-intentioned dictators are often cited as drivers of economic growth. We examine this claim in a panel of 133 countries from 1858 to 2010. Using annual data on economic growth, political regimes, and political leaders, we document a robust asymmetric pattern: growth-positive autocrats (autocrats whose countries experience larger-than-average growth) are found only as frequently as would be predicted by chance. In contrast, growth-negative autocrats are found significantly more frequently. Implementing regression discontinuity designs (RDD), we also examine local trends in the neighbourhood of the entry into power of growth-positive autocrats. We find that growth under supposedly growth-positive autocrats does not significantly differ from previous realizations of growth, suggesting that even the infrequent growth-positive autocrats largely “ride the wave” of previous success. On the other hand, our estimates reject the null hypothesis that growth-negative rulers have no effects. Taken together, our results cast serious doubt on the benevolent autocrat hypothesis.

15.
Human populations are generally exposed simultaneously to a number of toxicants present in the environment, including complex mixtures of unknown and variable origin. While scientific methods for evaluating the potential carcinogenic risks of pure compounds are relatively well established, methods for assessing the risks of complex mixtures are somewhat less developed. This article provides a report of a recent workshop on carcinogenic mixtures sponsored by the Committee on Toxicology of the U.S. National Research Council, in which toxicological, epidemiological, and statistical approaches to carcinogenic risk assessment for mixtures were discussed. Complex mixtures, such as diesel emissions and tobacco smoke, have been shown to have carcinogenic potential. Bioassay-directed fractionation based on short-term screening tests for genotoxicity has also been used in identifying carcinogenic components of mixtures. Both toxicological and epidemiological studies have identified clear interactions between chemical carcinogens, including synergistic effects at moderate to high doses. To date, laboratory studies have demonstrated over 900 interactions involving nearly 200 chemical carcinogens. At lower doses, theoretical arguments suggest that risks may be near additive. Thus, additivity at low doses has been invoked as a working hypothesis by regulatory authorities in the absence of evidence to the contrary. Future studies of the joint effects of carcinogenic agents may serve to elucidate the mechanisms by which interactions occur at higher doses.

16.
It is well known that the finite‐sample properties of tests of hypotheses on the co‐integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett‐type corrections or bootstrap based on unrestricted parameter estimators are unsatisfactory, in particular in those cases where also asymptotic χ2 tests fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co‐integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co‐integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.

17.
Lay perceptions of risk appear rooted more in heuristics than in reason. A major concern of the risk regulation literature is that such “error‐strewn” perceptions may be replicated in policy, as governments respond to the (mis)fears of the citizenry. This has led many to advocate a relatively technocratic approach to regulating risk, characterized by high reliance on formal risk and cost‐benefit analysis. However, through two studies of chemicals regulation, we show that the formal assessment of risk is pervaded by its own set of heuristics. These include rules to categorize potential threats, define what constitutes valid data, guide causal inference, and select and apply formal models. Some of these heuristics lay claim to theoretical or empirical justifications, others are more back‐of‐the‐envelope calculations, while still others purport not to reflect some truth but simply to constrain discretion or perform a desk‐clearing function. These heuristics can be understood as a way of authenticating or formalizing risk assessment as a scientific practice, representing a series of rules for bounding problems, collecting data, and interpreting evidence (a methodology). Heuristics are indispensable elements of induction. And so they are not problematic per se, but they can become so when treated as laws rather than as contingent and provisional rules. Pitfalls include the potential for systematic error, masking uncertainties, strategic manipulation, and entrenchment. Our central claim is that by studying the rules of risk assessment qua rules, we develop a novel representation of the methods, conventions, and biases of the prior art.

18.
R. Webster West, Ralph L. Kodell. Risk Analysis, 1999, 19(3): 453–459
Methods of quantitative risk assessment for toxic responses that are measured on a continuous scale are not well established. Although risk-assessment procedures that attempt to utilize the quantitative information in such data have been proposed, there is no general agreement that these procedures are appreciably more efficient than common quantal dose–response procedures that operate on dichotomized continuous data. This paper points out an equivalence between the dose–response models of the nonquantal approach of Kodell and West(1) and a quantal probit procedure, and provides results from a Monte Carlo simulation study to compare coverage probabilities of statistical lower confidence limits on dose corresponding to specified additional risk based on applying the two procedures to continuous data from a dose–response experiment. The nonquantal approach is shown to be superior, in terms of both statistical validity and statistical efficiency.
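A small sketch of the kind of equivalence the abstract points out, assuming a normally distributed continuous endpoint whose mean decreases linearly in dose and an "adverse" cutoff; in that case the risk of falling below the cutoff is exactly a probit function of dose. The parameter values and the 1% additional-risk target are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Hypothetical parameters: mean response mu(d) = mu0 - beta*d, standard deviation sigma.
mu0, beta, sigma = 100.0, 5.0, 10.0
cutoff = mu0 - 2.0 * sigma            # "adverse" means Y < cutoff (about 2.3% at d = 0)

def risk(d):
    # P(Y < cutoff | dose d) -- a probit-linear function of d.
    return norm.cdf((cutoff - (mu0 - beta * d)) / sigma)

def additional_risk(d):
    return risk(d) - risk(0.0)

# Dose corresponding to a specified additional risk of 1%, found by root-finding.
target = 0.01
bmd = brentq(lambda d: additional_risk(d) - target, 0.0, 10.0)
print(f"dose with 1% additional risk: {bmd:.3f}")
```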

19.
A nonparametric, residual‐based block bootstrap procedure is proposed in the context of testing for integrated (unit root) time series. The resampling procedure is based on weak assumptions on the dependence structure of the stationary process driving the random walk and successfully generates unit root integrated pseudo‐series retaining the important characteristics of the data. It is more general than previous bootstrap approaches to the unit root problem in that it allows for a very wide class of weakly dependent processes and it is not based on any parametric assumption on the process generating the data. As a consequence the procedure can accurately capture the distribution of many unit root test statistics proposed in the literature. Large sample theory is developed and the asymptotic validity of the block bootstrap‐based unit root testing is shown via a bootstrap functional limit theorem. Applications to some particular test statistics of the unit root hypothesis, i.e., least squares and Dickey‐Fuller type statistics, are given. The power properties of our procedure are investigated and compared to those of alternative bootstrap approaches to carry out the unit root test. Some simulations examine the finite sample performance of our procedure.
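A stripped-down sketch of the resampling idea: blocks of centered first differences are resampled and cumulated into unit-root pseudo-series, and a simple Dickey-Fuller statistic is recomputed on each. It omits the residual-based refinements, studentization choices, and block-length theory developed in the paper; the series, block length, and replication count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def df_tstat(y):
    """t-statistic of rho in the regression dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

# Example series: a random walk with AR(1) innovations (the unit root null is true).
n = 300
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.5 * eps[t - 1] + rng.normal()
y = np.cumsum(eps)
stat_obs = df_tstat(y)

# Block bootstrap of centered first differences, block length b.
d = np.diff(y)
d_centered = d - d.mean()
b, B = 10, 999
starts_max = len(d_centered) - b
boot_stats = np.empty(B)
for i in range(B):
    blocks = [d_centered[s:s + b]
              for s in rng.integers(0, starts_max + 1, size=len(d) // b + 1)]
    d_star = np.concatenate(blocks)[:len(d)]
    y_star = np.concatenate(([0.0], np.cumsum(d_star)))   # integrated pseudo-series
    boot_stats[i] = df_tstat(y_star)

crit = np.quantile(boot_stats, 0.05)     # left-tail test: reject if stat_obs < crit
print(f"DF stat = {stat_obs:.2f}, bootstrap 5% critical value = {crit:.2f}, "
      f"reject unit root: {stat_obs < crit}")
```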

20.
Epidemiology textbooks often interpret population attributable fractions based on 2 × 2 tables or logistic regression models of exposure-response associations as preventable fractions, i.e., as fractions of illnesses in a population that would be prevented if exposure were removed. In general, this causal interpretation is not correct, since statistical association need not indicate causation; moreover, it does not identify how much risk would be prevented by removing specific constituents of complex exposures. This article introduces and illustrates an approach to calculating useful bounds on preventable fractions, having valid causal interpretations, from the types of partial but useful molecular epidemiological and biological information often available in practice. The method applies probabilistic risk assessment concepts from systems reliability analysis, together with bounding constraints on the relationship between event probabilities and causation (such as that the probability that exposure X causes response Y cannot exceed the probability that exposure X precedes response Y, or the probability that both X and Y occur), to bound the contribution to causation from specific causal pathways. We illustrate the approach by estimating an upper bound on the contribution to lung cancer risk made by a specific, much-discussed causal pathway that links smoking to polycyclic aromatic hydrocarbon (PAH) adducts (specifically, benzo(a)pyrene diol epoxide–DNA adducts) at hot spot codons of p53 in lung cells. The result is a surprisingly small preventable fraction (of perhaps 7% or less) for this pathway, suggesting that it will be important to consider other mechanisms and non-PAH constituents of tobacco smoke in designing less risky tobacco-based products.
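A minimal numeric sketch of the bounding logic, with purely hypothetical probabilities (not the paper's estimates): a pathway can account for a case only if every necessary event on that pathway occurred, so the pathway's preventable fraction is bounded above by the smallest of those event probabilities among cases.

```python
# Hypothetical numbers only -- not the paper's data or its exact bounding constraints.
# Causation through the pathway requires every necessary event to have occurred,
# so P(pathway caused the case) <= min over events of P(event occurred | case).

necessary_events = {
    # P(event occurred | lung cancer case), hypothetical values
    "BPDE-DNA adduct formed": 0.40,
    "adduct at a p53 hot-spot codon": 0.25,
    "mutation fixed at that codon": 0.30,
}

upper_bound = min(necessary_events.values())
print(f"upper bound on preventable fraction for this pathway: {upper_bound:.0%}")
```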

