Similar Literature
1.
Model averaging for dichotomous dose–response estimation is preferred to estimating the benchmark dose (BMD) from a single model, but challenges remain in implementing these methods for general analyses before model averaging is feasible in many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo (MCMC), we approximate the posterior density using maximum a posteriori (MAP) estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in many applications such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose–response data and measuring the coverage of confidence limits for the BMD. We compare the coverage of this method to that of other approaches using the same set of models. In the simulation study, the method is shown to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data and comparable or superior to the other approaches.
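A minimal sketch of the MAP idea in this setting, assuming a single log-logistic constituent model with normal priors on its parameters; the dose groups, response counts, prior variances, and the helper `neg_log_posterior` are hypothetical illustrations, not the article's models or data.

```python
# Sketch: MAP estimation for a dichotomous log-logistic dose-response model
# with normal priors; all numbers below are hypothetical, not the article's.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

doses  = np.array([0.0, 10.0, 50.0, 150.0, 400.0])   # hypothetical design
n      = np.array([50, 50, 50, 50, 50])              # animals per group
events = np.array([2, 4, 11, 20, 41])                # responders

def neg_log_posterior(theta):
    g_logit, a, b = theta
    g = expit(g_logit)                       # background rate in (0, 1)
    d = np.maximum(doses, 1e-8)              # avoid log(0) at the control dose
    p = g + (1.0 - g) * expit(a + b * np.log(d))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    loglik = np.sum(events * np.log(p) + (n - events) * np.log1p(-p))
    # Assumed normal priors on (logit background, intercept, slope).
    logprior = -0.5 * (g_logit**2 / 4.0 + a**2 / 100.0 + (b - 1.0)**2 / 4.0)
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, x0=[-2.0, -3.0, 1.0], method="Nelder-Mead")
g_logit, a, b = fit.x

# BMD: dose giving 10% extra risk over background; for this model the extra
# risk is expit(a + b*log(d)), so invert at the benchmark response 0.10.
bmr = 0.10
bmd = np.exp((np.log(bmr / (1.0 - bmr)) - a) / b)
print(f"MAP estimate: a={a:.3f}, b={b:.3f}, BMD10 ~= {bmd:.1f}")
```

Because the optimization replaces posterior sampling, the fit is deterministic given a starting point, which is the reproducibility-plus-speed property the abstract emphasizes.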

2.
This paper develops a framework for performing estimation and inference in econometric models with partial identification, focusing particularly on models characterized by moment inequalities and equalities. Applications of this framework include the analysis of game-theoretic models, revealed preference restrictions, regressions with missing and corrupted data, auction models, structural quantile regressions, and asset pricing models. Specifically, we provide estimators and confidence regions for the set of minimizers Θ_I of an econometric criterion function Q(θ). In applications, the criterion function embodies testable restrictions on economic models. A parameter value θ that describes an economic model satisfies these restrictions if Q(θ) attains its minimum at this value. Interest therefore focuses on the set of minimizers, called the identified set. We use the inversion of the sample analog, Q_n(θ), of the population criterion, Q(θ), to construct estimators and confidence regions for the identified set, and develop consistency, rates of convergence, and inference results for these estimators and regions. To derive these results, we develop methods for analyzing the asymptotic properties of sample criterion functions under set identification.

3.
We provide a tractable characterization of the sharp identification region of the parameter vector θ in a broad class of incomplete econometric models. Models in this class have set-valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous-move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressor data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set. The sharp identification region of θ, denoted Θ_I, can then be obtained as the set of minimizers of the distance from a properly specified vector of moments of random variables to this Aumann expectation. Algorithms in convex programming can be exploited to verify efficiently whether a candidate θ is in Θ_I. We use examples analyzed in the literature to illustrate the gains in identification and computational tractability afforded by our method.

4.
The alleviation of food-borne diseases caused by microbial pathogens remains a major concern for ensuring the well-being of the general public. The relation between the ingested dose of organisms and the associated infection risk can be studied using dose-response models. Traditionally, a model selected according to a goodness-of-fit criterion has been used for making inferences. In this article, we propose a modified set of fractional polynomials as competitive dose-response models in risk assessment. The article not only shows instances where it is not obvious to single out one best model, but also illustrates that model averaging can best circumvent this dilemma. The set of candidate models is chosen based on biological plausibility and rationale, and the risk at a dose common to all these models is estimated both using the selected models and by averaging over all models using Akaike's weights. In addition to accounting for parameter estimation inaccuracy, as in the case of a single selected model, model averaging accounts for the uncertainty arising from the other competitive models. This leads to a better and more honest estimation of standard errors and construction of confidence intervals for risk estimates. The approach is illustrated for risk estimation at low dose levels based on Salmonella typhi and Campylobacter jejuni data sets in humans. Simulation studies indicate that model averaging has reduced bias, better precision, and attains coverage probabilities closer to the 95% nominal level compared to best-fitting models selected by the Akaike information criterion.
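A minimal sketch of averaging risk estimates with Akaike's weights, assuming the per-model AIC values and risk estimates at the common dose are already in hand; the numbers are invented for illustration.

```python
# Sketch: model averaging with Akaike weights. Each candidate model
# contributes its risk estimate at a common dose, weighted by exp(-dAIC/2).
import numpy as np

aic  = np.array([412.3, 413.1, 415.8, 420.2])   # hypothetical candidate models
risk = np.array([0.021, 0.034, 0.018, 0.049])   # risk estimates at a common dose

delta   = aic - aic.min()                        # AIC differences vs. best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                         # normalize to Akaike weights

avg_risk = float(np.sum(weights * risk))
print("Akaike weights:", weights.round(3), "| averaged risk:", round(avg_risk, 4))
```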

5.
There is a need to advance our ability to characterize the risk of inhalational anthrax following a low-dose exposure. The exposure scenario most often considered is a single exposure that occurs during an attack. However, long-term daily low-dose exposures also represent a realistic exposure scenario, such as what may be encountered by people occupying areas for longer periods. Given this, the objective of the current work was to model two rabbit inhalational anthrax dose-response data sets. One data set was from single exposures to aerosolized Bacillus anthracis Ames spores. The second data set exposed rabbits repeatedly to aerosols of B. anthracis Ames spores. For the multiple-exposure data, the cumulative dose (i.e., the sum of the individual daily doses) was used for the model. Lethality was the response for both. Modeling was performed using Benchmark Dose Software, evaluating six models: log-probit, log-logistic, Weibull, exponential, gamma, and dichotomous-Hill. All models produced acceptable fits to both data sets. The exponential model was identified as the best-fitting model for both data sets. Statistical tests suggested there was no significant difference between the single-exposure and multiple-exposure exponential model results, which suggests the risk of disease is similar between the two data sets. The dose expected to cause 10% lethality was 15,600 inhaled spores for the single-exposure and 18,200 inhaled spores for the multiple-exposure exponential dose-response model, and the 95% lower confidence limits were 9,800 and 9,200 inhaled spores, respectively.
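A minimal sketch of fitting the exponential dichotomous model P(d) = 1 - exp(-k·d) by maximum likelihood and solving for the 10% lethality dose, ignoring a background-response term for brevity; the dose groups and death counts are hypothetical, not the rabbit study's data.

```python
# Sketch: exponential dose-response model fit by maximum likelihood,
# then invert for the dose causing 10% lethality (LD10).
import numpy as np
from scipy.optimize import minimize_scalar

doses  = np.array([5e3, 2e4, 8e4, 3e5])   # inhaled spores (hypothetical)
n      = np.array([8, 8, 8, 8])            # rabbits per group
deaths = np.array([0, 2, 5, 8])            # lethality response

def neg_loglik(log_k):
    p = 1.0 - np.exp(-np.exp(log_k) * doses)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(deaths * np.log(p) + (n - deaths) * np.log1p(-p))

fit = minimize_scalar(neg_loglik, bounds=(-20, -5), method="bounded")
k = np.exp(fit.x)

# Dose expected to cause 10% lethality: solve 1 - exp(-k*d) = 0.10.
ld10 = -np.log(0.90) / k
print(f"k = {k:.3e}, LD10 ~= {ld10:,.0f} inhaled spores")
```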

6.
Polynomial regression models have applications in the social sciences and in business research. Unfortunately, such models have a high degree of multicollinearity that creates problems with the statistical assessment of the model. In fact, the collinearity may be so severe that it could lead to an incorrect conclusion that some of the terms in the model are not statistically significant and should therefore be omitted from the model. This note provides a simple transformation to achieve orthogonality in polynomial models between the linear and quadratic terms, thereby eliminating the collinearity problem. It also shows that the same procedure does not achieve orthogonality for higher-order terms. An example data set is analyzed to show the benefits of such a procedure.
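A minimal sketch of one standard such transformation, centering, which makes the linear and quadratic regressors uncorrelated for a symmetric design; the design points are illustrative, and that this matches the note's exact transformation is an assumption.

```python
# Sketch: for a symmetric design, replacing x with z = x - mean(x) removes
# the linear-quadratic collinearity, but not the linear-cubic collinearity.
import numpy as np

x = np.linspace(1, 9, 9)            # equally spaced (hence symmetric) design
z = x - x.mean()                    # centered regressor

print("corr(x, x^2) =", round(np.corrcoef(x, x**2)[0, 1], 4))  # severe collinearity
print("corr(z, z^2) =", round(np.corrcoef(z, z**2)[0, 1], 4))  # ~0 after centering

# As the note observes, the same procedure fails for higher-order terms:
print("corr(z, z^3) =", round(np.corrcoef(z, z**3)[0, 1], 4))  # still large
```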

7.
This paper examines the efficient estimation of partially identified models defined by moment inequalities that are convex in the parameter of interest. In such a setting, the identified set is itself convex and hence fully characterized by its support function. We provide conditions under which, despite being an infinite dimensional parameter, the support function admits √n-consistent regular estimators. A semiparametric efficiency bound is then derived for its estimation, and it is shown that any regular estimator attaining it must also minimize a wide class of asymptotic loss functions. In addition, we show that the "plug-in" estimator is efficient, and devise a consistent bootstrap procedure for estimating its limiting distribution. The setting we examine is related to an incomplete linear model studied in Beresteanu and Molinari (2008) and Bontemps, Magnac, and Maurin (2012), which further enables us to establish the semiparametric efficiency of their proposed estimators for that problem.

8.
Gold is an important safe-haven asset, and the quantitative description and prediction of its price volatility are of great significance for the risk management decisions of all kinds of investors. Building on the standard predictive regression model, we construct new volatility forecasting models using principal component analysis, forecast combination, and two mainstream shrinkage methods (Elastic net and Lasso), to investigate which method exploits the information in multiple predictors most effectively. We then test the new models' out-of-sample forecast accuracy with three evaluation methods: the model confidence set (MCS), out-of-sample R², and the direction-of-change (DoC) test. The empirical results show that, under every evaluation method, the two shrinkage models achieve the best out-of-sample forecast accuracy among the competing models and can provide a reliable basis for forecasting the volatility of Chinese gold futures prices.
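A minimal sketch of the shrinkage step using scikit-learn's LassoCV and ElasticNetCV on synthetic data standing in for the gold-futures predictor set; the out-of-sample R² here benchmarks against the training-sample mean, a common but simplified convention.

```python
# Sketch: regress next-period volatility on many candidate predictors and
# let lasso / elastic net shrink and select among them.
import numpy as np
from sklearn.linear_model import ElasticNetCV, LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                 # 20 candidate predictors
beta = np.zeros(20)
beta[:3] = [0.5, -0.3, 0.2]                        # only three truly matter
y = X @ beta + 0.1 * rng.standard_normal(500)      # volatility proxy

train, test = slice(0, 400), slice(400, 500)
bench = y[train].mean()                            # historical-mean benchmark
for model in (LassoCV(cv=5), ElasticNetCV(cv=5, l1_ratio=0.5)):
    model.fit(X[train], y[train])
    sse = np.sum((y[test] - model.predict(X[test])) ** 2)
    sse_bench = np.sum((y[test] - bench) ** 2)
    print(type(model).__name__, "out-of-sample R^2 =", round(1 - sse / sse_bench, 3))
```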

9.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

10.
This paper provides computationally intensive, yet feasible methods for inference in a very general class of partially identified econometric models. Let P denote the distribution of the observed data. The class of models we consider is defined by a population objective function Q(θ, P) for θ ∈ Θ. The point of departure from the classical extremum estimation framework is that it is not assumed that Q(θ, P) has a unique minimizer in the parameter space Θ. The goal may be either to draw inferences about some unknown point in the set of minimizers of the population objective function or to draw inferences about the set of minimizers itself. In this paper, the object of interest is Θ_0(P) = argmin_{θ ∈ Θ} Q(θ, P), and so we seek random sets that contain this set with at least some prespecified probability asymptotically. We also consider situations where the object of interest is the image of Θ_0(P) under a known function. Random sets that satisfy the desired coverage property are constructed under weak assumptions. Conditions are provided under which the confidence regions are asymptotically valid not only pointwise in P, but also uniformly in P. We illustrate the use of our methods with an empirical study of the impact of top-coding outcomes on inferences about the parameters of a linear regression. Finally, a modest simulation study sheds some light on the finite-sample behavior of our procedure.

11.
We compare the regulatory implications of applying the traditional (linearized) and exact two-stage dose–response models to animal carcinogenicity data. We analyze dose–response data from six studies, representing five different substances, and we determine the goodness-of-fit of each model as well as the 95% lower confidence limit of the dose corresponding to a target excess risk of 10⁻⁵ (the target risk dose, TRD). For the two concave data sets, we find that the exact model gives a substantially better fit to the data than the traditional model, and that the exact model gives a TRD that is an order of magnitude lower than that given by the traditional model. In the other cases, the exact model gives a fit equivalent to or better than the traditional model. We also show that although the exact two-stage model may exhibit dose–response concavity at moderate dose levels, it is always linear or sublinear, and never supralinear, in the low-dose limit. Because regulatory concern is almost always confined to low-dose extrapolation, supralinear behavior seems not to be of regulatory concern in the exact two-stage model. Finally, we find that when performing this low-dose extrapolation in cases of dose–response concavity, extrapolating the model fit leads to a more conservative TRD than taking a linear extrapolation from 10% excess risk. We conclude with a set of recommendations.

12.
West, R. Webster; Kodell, Ralph L. Risk Analysis, 1999, 19(3): 453–459.
Methods of quantitative risk assessment for toxic responses that are measured on a continuous scale are not well established. Although risk-assessment procedures that attempt to utilize the quantitative information in such data have been proposed, there is no general agreement that these procedures are appreciably more efficient than common quantal dose–response procedures that operate on dichotomized continuous data. This paper points out an equivalence between the dose–response models of the nonquantal approach of Kodell and West (1) and a quantal probit procedure, and provides results from a Monte Carlo simulation study comparing coverage probabilities of statistical lower confidence limits on the dose corresponding to a specified additional risk when the two procedures are applied to continuous data from a dose–response experiment. The nonquantal approach is shown to be superior, in terms of both statistical validity and statistical efficiency.

13.
Monetary Policy in an Open Economy
Within a two-country dynamic general equilibrium model incorporating monopolistic competition and nominal rigidities, this paper analyzes pricing behavior and the design of international monetary policy. When some firms are allowed to follow a backward-looking pricing rule rather than Calvo staggered price adjustment, a new equilibrium relationship between the short-run output gap and inflation can be derived, and an optimal interest rate policy resembling the Taylor rule is then obtained under open-economy conditions. In this setting, the optimal interest rate depends not only on domestic inflation and the output gap but also on fluctuations in foreign output. Finally, we test the theoretical conclusions empirically using actual data for China and the United States.
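A minimal sketch of a Taylor-type rule augmented with a foreign-output term, as described above; all coefficient values are illustrative placeholders, not the paper's estimates.

```python
# Sketch: Taylor-type interest rate rule with an open-economy foreign-output
# term. Coefficients phi_pi, phi_y, phi_f are illustrative assumptions.
def policy_rate(r_star, pi, pi_target, output_gap, foreign_gap,
                phi_pi=0.5, phi_y=0.5, phi_f=0.1):
    """Nominal rate responding to domestic inflation, the domestic output
    gap, and (the open-economy addition) fluctuations in foreign output."""
    return (r_star + pi
            + phi_pi * (pi - pi_target)
            + phi_y * output_gap
            + phi_f * foreign_gap)

# e.g., 2% natural rate, 3% inflation vs. a 2% target, +1% domestic gap,
# foreign output 0.5% above trend:
print(f"optimal rate = {policy_rate(0.02, 0.03, 0.02, 0.01, 0.005):.3%}")
```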

14.
This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities (models defined by moment equalities can also be admitted by combining pairs of weak moment inequalities), which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability (the objective of covering each element of the identified set with a prespecified probability is dealt with in Bugni (2010a)). We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.

15.
In an earlier issue of Decision Sciences, Jesse, Mitra, and Cox [1] examined the impact of inflationary conditions on the economic order quantity (EOQ) formula. Specifically, the authors analyzed the effect of inflation on order quantity decisions by means of a model that takes into account both inflationary trends and time discounting (over an infinite time horizon). In their analysis, the authors utilized two models: a current-dollars model and a constant-dollars model. These models were derived, of course, by setting up a total cost equation in the usual manner and then finding the optimum order quantity that minimizes the total cost. Jesse, Mitra, and Cox [1] found that the EOQ is approximately the same under both conditions, with or without inflation. However, we disagree with this conclusion and show that the EOQ will be different under inflationary conditions, provided that the inflationary conditions are properly accounted for in the formulation of the total cost model.
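A minimal sketch of why the two EOQs can differ, using one common adjustment in which inflation offsets the financing component of the carrying charge; this illustrates the note's point under assumed numbers, and is not a reproduction of either paper's exact cost model.

```python
# Sketch: classic EOQ vs. an inflation-adjusted EOQ in which the carrying
# charge is driven by the real rate (discount rate minus inflation).
# All parameter values are illustrative assumptions.
from math import sqrt

D = 12_000      # annual demand (units)
K = 150.0       # fixed cost per order
c = 8.0         # unit purchase cost
r = 0.20        # nominal discount rate
f = 0.12        # inflation rate

eoq_no_inflation   = sqrt(2 * D * K / (r * c))        # carrying cost = r * c
eoq_with_inflation = sqrt(2 * D * K / ((r - f) * c))  # real-rate carrying cost

print(f"EOQ ignoring inflation:   {eoq_no_inflation:8.1f} units")
print(f"EOQ with inflation (r-f): {eoq_with_inflation:8.1f} units")
```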

16.
Attributing foodborne illnesses to food sources is essential to conceive, prioritize, and assess the impact of public health policy measures. The Bayesian microbial subtyping attribution model of Hald et al. is one of the most advanced approaches to attributing sporadic cases; notably, it allows taking into account the level of exposure to the sources and the differences between bacterial types and between sources. This step forward requires introducing type- and source-dependent parameters and generates overparameterization, which was addressed in Hald's paper by setting some parameters to constant values. We question the impact of the choices made for the parameterization (parameters set and values used) on model robustness and propose an alternative parameterization for the Hald model. We illustrate this analysis with the 2005 French data set of non-typhi Salmonella. Mullner's modified Hald model and a simple deterministic model were used to compare the results and assess the accuracy of the estimates. Setting the parameters for bacterial types specific to a unique source instead of the most frequent one, and using data-based values instead of arbitrary values, enhanced the convergence and adequacy of the estimates and led to attribution estimates consistent with the other models' results. The type and source parameter estimates were also coherent with Mullner's model estimates. The model appeared to be highly sensitive to parameterization. The proposed solution, based on specific types and data-based values, improved the robustness of the estimates and enabled this highly valuable tool to be used successfully with the French data set.

17.
Fang and Qi (Optim. Methods Softw. 18:143–165, 2003) introduced a new generalized network flow model called manufacturing network flow model for manufacturing process modeling. A key distinguishing feature of such models is the assembling of component raw-materials, in a given proportion, into an end-product. This assembling operation cannot be modeled using usual generalized networks (which allow gains and losses in flows), or using multi-commodity networks (which allow flows of multiple commodity types on a single arc). The authors developed a network-simplex-based algorithm to solve a minimum cost flow problem formulated on such a generalized network and indicated systems of linear equations that need to be solved during the course of the network-simplex-based solution procedure. In this paper, it is first shown how various steps of the network-simplex-based solution procedure can be performed efficiently using appropriate data structures. Further, it is also shown how the resulting system of linear equations can be solved directly on the generalized network.

18.
Comparison of Six Dose-Response Models for Use with Food-Borne Pathogens
Food-related illness in the United States is estimated to affect over six million people per year and cost the economy several billion dollars. These illnesses and costs could be reduced if minimum infectious doses were established and used as the basis of regulations and monitoring. However, standard methodologies for dose-response assessment are not yet formulated for microbial risk assessment. The objective of this study was to compare dose-response models for food-borne pathogens and determine which models were most appropriate for a range of pathogens. The statistical models proposed in the literature and chosen for comparison purposes were log-normal, log-logistic, exponential, beta-Poisson, and Weibull-gamma. These were fit to four data sets, also taken from the published literature (Shigella flexneri, Shigella dysenteriae, Campylobacter jejuni, and Salmonella typhosa), using the method of maximum likelihood. The Weibull-gamma, the only model with three parameters, was also the only model capable of fitting all the data sets examined using maximum likelihood estimation. Infectious doses were also calculated using each model. Within any given data set, the infectious dose estimated to affect one percent of the population ranged from one order of magnitude to as much as nine orders of magnitude, illustrating the differences in extrapolation of the dose-response models. More data are needed to compare models and examine extrapolation from high to low doses for food-borne pathogens.
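A minimal sketch of fitting the (approximate) beta-Poisson model P(d) = 1 - (1 + d/β)^(-α) by maximum likelihood and recovering the one-percent infectious dose; the dose groups and infection counts are hypothetical, not the published feeding-trial data.

```python
# Sketch: beta-Poisson dose-response model fit by maximum likelihood,
# then inverted for the dose infecting 1% of the population (ID01).
import numpy as np
from scipy.optimize import minimize

doses    = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # ingested organisms
n        = np.array([10, 10, 10, 10, 10])         # subjects per group
infected = np.array([1, 2, 5, 8, 10])             # responders

def neg_loglik(log_params):
    alpha, beta = np.exp(log_params)              # keep parameters positive
    p = 1.0 - (1.0 + doses / beta) ** (-alpha)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(infected * np.log(p) + (n - infected) * np.log1p(-p))

fit = minimize(neg_loglik, x0=[np.log(0.3), np.log(1e3)], method="Nelder-Mead")
alpha, beta = np.exp(fit.x)

# Dose infecting 1% of the population: solve 1 - (1 + d/beta)^(-alpha) = 0.01.
id01 = beta * ((1 - 0.01) ** (-1 / alpha) - 1)
print(f"alpha = {alpha:.3f}, beta = {beta:.3g}, ID01 ~= {id01:.3g}")
```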

19.
In many real-world credit scoring problems, labeling samples by class requires substantial human, financial, and material resources, so often only a small number of labeled samples are available to train a classification model, while the large number of unlabeled customer samples in the database is discarded. To address this problem, this study introduces semi-supervised learning and combines it with the Random Subspace (RSS) method from multiple-classifier ensemble techniques to build RSSCI, an RSS-based semi-supervised co-training model for class-imbalanced settings. The model has three main stages: (1) train several base classifiers using the RSS method; (2) select and label the most suitable subset of samples from the large unlabeled data set and add them to the original training set; (3) train the classification model on the final training set and classify the test samples. Empirical analysis on three customer credit scoring data sets shows that the credit scoring performance of RSSCI is superior not only to commonly used supervised ensemble credit scoring models but also to several existing semi-supervised co-training credit scoring models.
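A minimal sketch of the three-stage RSSCI idea, with scikit-learn's BaggingClassifier (bootstrap disabled, random feature subsets) standing in for the random subspace ensemble; the confidence threshold and synthetic imbalanced data are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch: random-subspace ensemble + pseudo-labeling of confident unlabeled
# samples (co-training-style), then retraining and classifying the test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)   # class-imbalanced credit proxy
X_lab, y_lab   = X[:100], y[:100]            # small labeled set
X_unlab        = X[100:900]                  # large unlabeled pool
X_test, y_test = X[900:], y[900:]

# Stage 1: random subspace ensemble (each tree sees a random feature subset).
rss = BaggingClassifier(DecisionTreeClassifier(), n_estimators=20,
                        max_features=0.5, bootstrap=False, random_state=0)
rss.fit(X_lab, y_lab)

# Stage 2: pseudo-label the unlabeled samples the ensemble is most sure about.
proba = rss.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9          # assumed confidence threshold
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])

# Stage 3: retrain on the augmented training set and classify the test set.
rss.fit(X_aug, y_aug)
print("test accuracy:", round(rss.score(X_test, y_test), 3))
```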

20.
Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. This problem is practically unavoidable in the analysis of time-series data, as typically only a single history measuring a given phenomenon of interest is available for analysis. It is widely acknowledged by empirical researchers that data snooping is a dangerous practice to be avoided, but in fact it is endemic. The main problem has been a lack of sufficiently simple practical methods capable of assessing the potential dangers of data snooping in a given situation. Our purpose here is to provide such methods by specifying a straightforward procedure for testing the null hypothesis that the best model encountered in a specification search has no predictive superiority over a given benchmark model. This permits data snooping to be undertaken with some degree of confidence that one will not mistake results that could have been generated by chance for genuinely good results.
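A minimal sketch of testing the best model's predictive superiority over a benchmark by bootstrapping the maximal average loss differential under the null; a plain i.i.d. resample is used for brevity where a time-series bootstrap would be needed in practice, and the loss differentials are simulated.

```python
# Sketch: bootstrap test of "the best of M models beats the benchmark",
# accounting for the search over all M models (the data-snooping effect).
import numpy as np

rng = np.random.default_rng(1)
T, M = 500, 25                     # forecast periods, candidate models
# d[t, m]: benchmark loss minus model m's loss at time t (positive = better).
d = rng.standard_normal((T, M)) * 0.5          # null-like: no real superiority
d[:, 0] += 0.02                                # one model looks slightly better

stat = np.sqrt(T) * d.mean(axis=0).max()       # best model's scaled advantage

boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, T, T)                # i.i.d. resample of periods
    # Re-center so every model has zero mean advantage under the null.
    boot[b] = np.sqrt(T) * (d[idx].mean(axis=0) - d.mean(axis=0)).max()

p_value = (boot >= stat).mean()
print(f"test statistic = {stat:.3f}, bootstrap p-value = {p_value:.3f}")
```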
