Similar Documents
20 similar documents found (search time: 937 ms)
1.
We consider the problem of estimating the probability of detection (POD) of flaws in an industrial steel component. Modeled as an increasing function of the flaw height, the POD characterizes the detection process; it is also involved in the estimation of the flaw size distribution, a key input parameter of physical models describing the behavior of the steel component when submitted to extreme thermodynamic loads. Such models are used to assess the resistance of highly reliable systems whose failures are seldom observed in practice. We develop a Bayesian method to estimate the flaw size distribution and the POD function, using flaw height measurements from periodic in‐service inspections conducted with an ultrasonic detection device, together with measurements from destructive lab experiments. Our approach, based on approximate Bayesian computation (ABC) techniques, is applied to a real data set and compared to maximum likelihood estimation (MLE) and a more classical approach based on Markov chain Monte Carlo (MCMC) techniques. In particular, we show that the parametric model describing the POD as the cumulative distribution function (cdf) of a log‐normal distribution, though often used in this context, can be invalidated by the data at hand. We propose an alternative nonparametric model, which assumes no predefined shape, and extend the ABC framework to this setting. Experimental results demonstrate the ability of this method to provide a flexible estimation of the POD function and describe its uncertainty accurately.
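To make the ABC machinery concrete, below is a minimal sketch of rejection ABC for the log-normal-cdf POD model on synthetic hit/miss data. The priors, the binned-detection-rate summary statistic, the tolerance, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pod(h, mu, sigma):
    """POD modeled as the cdf of a log-normal: increasing in flaw height h."""
    return stats.norm.cdf((np.log(h) - mu) / sigma)

def detection_rates(heights, hits, bins):
    """Summary statistic: observed hit rate within flaw-height bins."""
    idx = np.digitize(heights, bins)
    return np.array([hits[idx == b].mean() if (idx == b).any() else 0.0
                     for b in range(len(bins) + 1)])

def abc_posterior(heights, hits, bins, n_draws=20_000, tol=0.08):
    """Rejection ABC: keep prior draws whose simulated summaries are close."""
    s_obs, accepted = detection_rates(heights, hits, bins), []
    for _ in range(n_draws):
        mu, sigma = rng.normal(0.5, 1.0), rng.uniform(0.1, 2.0)  # vague priors
        sim = rng.random(heights.size) < pod(heights, mu, sigma)
        if np.abs(detection_rates(heights, sim, bins) - s_obs).max() < tol:
            accepted.append((mu, sigma))
    return np.array(accepted)

# Synthetic hit/miss data with true (mu, sigma) = (0.5, 0.4).
heights = rng.lognormal(0.5, 0.6, size=500)
hits = rng.random(500) < pod(heights, 0.5, 0.4)
post = abc_posterior(heights, hits, bins=np.quantile(heights, [0.25, 0.5, 0.75]))
print(post.mean(axis=0))  # approximate posterior mean of (mu, sigma)
```

The paper's nonparametric extension replaces the two-parameter cdf with a shape-free POD curve; the rejection step itself is unchanged.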

2.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered—penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier‐Lasso (C‐Lasso) that serves to shrink individual coefficients to the unknown group‐specific coefficients. C‐Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C‐Lasso also achieves the oracle property so that group‐specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C‐Lasso is preserved in some special cases. Simulations demonstrate good finite‐sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
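For intuition, the sketch below evaluates a C-Lasso-style penalty, a product over candidate group centers of the distance from each unit's coefficient, together with the implied post-estimation classification rule. The array shapes, the averaging convention, and the toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def classo_penalty(beta, alpha, lam):
    """C-Lasso-style penalty: for each unit i, the product over groups k of
    ||beta_i - alpha_k|| is zero iff beta_i coincides with some group center."""
    dists = np.linalg.norm(beta[:, None, :] - alpha[None, :, :], axis=2)  # (N, K)
    return lam * dists.prod(axis=1).mean()

def classify(beta, alpha):
    """Post-estimation grouping: assign each unit to its nearest center."""
    dists = np.linalg.norm(beta[:, None, :] - alpha[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy check: two well-separated groups of individual coefficients.
rng = np.random.default_rng(0)
alpha = np.array([[0.0, 0.0], [2.0, 2.0]])
beta = alpha[rng.integers(0, 2, 50)] + 0.05 * rng.standard_normal((50, 2))
print(classo_penalty(beta, alpha, lam=1.0))   # near zero: units sit at centers
print(np.bincount(classify(beta, alpha)))     # group sizes
```

In the full PPL or PGMM objective this penalty is added to the profile likelihood or GMM criterion and minimized jointly over the individual coefficients and the group centers.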

3.
Li R, Englehardt JD, Li X. Risk Analysis, 2012, 32(2): 345–359
Multivariate probability distributions, such as may be used for mixture dose‐response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose‐response biomarker and genetic information. In this article, a new two‐stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn‐in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose‐response function (DRF). Results are shown for the five‐parameter common‐mode and seven‐parameter dissimilar‐mode models, based on published data for eight benzene–toluene dose pairs. The common mode conditional DRF is obtained with a 21‐fold reduction in data requirement versus MCMC. Example common‐mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126‐PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.
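A hedged sketch of the two-stage logic: a gradient-based search for the posterior mode (stage one), then a Metropolis chain initialized at that mode so the conventional burn-in can be dropped (stage two). A toy gamma likelihood with flat priors stands in for the emergent DRF; the proposal scale and all names are assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
data = rng.gamma(2.0, 3.0, size=200)   # toy data standing in for dose-response

def neg_log_post(theta):
    """Negative log posterior of a gamma model with flat priors, so the
    posterior mode equals the MLE, as in the GMCMC first stage."""
    shape, scale = np.exp(theta)       # unconstrained parameterization
    return -stats.gamma.logpdf(data, a=shape, scale=scale).sum()

# Stage 1: gradient-based search for the posterior mode estimate (PME).
mode = optimize.minimize(neg_log_post, x0=np.zeros(2), method="BFGS").x

# Stage 2: random-walk Metropolis initialized at the mode; because the chain
# starts at the mode, the conventional burn-in period is skipped.
chain, theta, lp = [], mode.copy(), -neg_log_post(mode)
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = -neg_log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.exp(np.mean(chain, axis=0)))  # posterior mean of (shape, scale)
```

The unconditional (predictive) distribution is then obtained by drawing data from the model at each retained parameter draw rather than at a single point estimate.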

4.
This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood‐based procedure. This yields a one‐step estimator which avoids estimating optimal instruments. Our likelihood ratio‐type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples.

5.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway‐Maxwell Poisson (COM‐Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM‐Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM‐Poisson GLM, and (2) estimate the prediction accuracy of the COM‐Poisson GLM using simulated data sets. The results of the study indicate that the COM‐Poisson GLM is flexible enough to model under‐, equi‐, and overdispersed data sets with different sample mean values. The results also show that the COM‐Poisson GLM yields accurate parameter estimates. The COM‐Poisson GLM provides a promising and flexible approach for performing count data regression.
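As a sketch of the estimation problem, the code below writes the COM-Poisson log pmf with a truncated normalizing constant and fits (lambda, nu) by maximum likelihood for the intercept-only case; a full GLM would additionally link lambda to covariates. The truncation point and the parameterization are assumptions.

```python
import numpy as np
from scipy import optimize
from scipy.special import gammaln

def com_poisson_logpmf(y, lam, nu, jmax=200):
    """Log pmf of the COM-Poisson; the normalizing constant Z(lam, nu)
    is the series sum_j lam^j / (j!)^nu, truncated at jmax terms."""
    j = np.arange(jmax)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logZ

def fit_com_poisson(y):
    """MLE of (lam, nu): nu < 1 over-, nu = 1 equi-, nu > 1 underdispersed."""
    nll = lambda t: -com_poisson_logpmf(y, np.exp(t[0]), np.exp(t[1])).sum()
    res = optimize.minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

counts = np.random.default_rng(2).poisson(3.0, size=500)
print(fit_com_poisson(counts))   # nu should land near 1 for Poisson data
```

The single extra parameter nu is what lets the model cover the under-, equi-, and overdispersed cases studied in the article.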

6.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job‐search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long‐standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood‐based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.

7.
An asymptotically efficient likelihood‐based semiparametric estimator is derived for the censored regression (tobit) model, based on a new approach for estimating the density function of the residuals in a partially observed regression. Smoothing the self‐consistency equation for the nonparametric maximum likelihood estimator of the distribution of the residuals yields an integral equation, which in some cases can be solved explicitly. The resulting estimated density is smooth enough to be used in a practical implementation of the profile likelihood estimator, but is sufficiently close to the nonparametric maximum likelihood estimator to allow estimation of the semiparametric efficient score. The parameter estimates obtained by solving the estimated score equations are then asymptotically efficient. A summary of analogous results for truncated regression is also given.

8.
Mitchell J. Small. Risk Analysis, 2011, 31(10): 1561–1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.
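To make the model-averaging step concrete, here is a hedged sketch that fits the two dose-response forms named in the abstract (logistic and quantal-linear) to synthetic quantal data, weights them by a BIC approximation to the posterior model probabilities (the paper uses MCMC-based BMA), and averages the BMDs. The doses, counts, and starting values are invented, and the BMDL derivation is omitted.

```python
import numpy as np
from scipy import optimize, stats

dose  = np.array([0.0, 1.0, 5.0, 25.0])   # hypothetical design
n     = np.array([50, 50, 50, 50])
cases = np.array([2, 5, 12, 30])          # synthetic tumor counts

logistic = lambda d, t: 1 / (1 + np.exp(-(t[0] + t[1] * d)))
qlinear  = lambda d, t: t[0] + (1 - t[0]) * (1 - np.exp(-t[1] * d))

def nll(f, t):
    p = np.clip(f(dose, t), 1e-9, 1 - 1e-9)
    return -stats.binom.logpmf(cases, n, p).sum()

fits, bics = [], []
for f, x0 in [(logistic, np.array([-2.0, 0.1])), (qlinear, np.array([0.05, 0.05]))]:
    res = optimize.minimize(lambda t: nll(f, t), x0, method="Nelder-Mead")
    fits.append((f, res.x))
    bics.append(2 * res.fun + 2 * np.log(n.sum()))   # k = 2 parameters each

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                         # approximate model weights

def bmd(f, t, bmr=0.1):
    """Dose giving extra risk bmr: solve P(d) = P(0) + bmr * (1 - P(0))."""
    p0 = f(0.0, t)
    return optimize.brentq(lambda d: f(d, t) - (p0 + bmr * (1 - p0)), 1e-6, 1e4)

bmds = np.array([bmd(f, t) for f, t in fits])
print("per-model BMDs:", bmds, "BMA BMD:", w @ bmds)
```

The backcasting step in the paper then asks at which extra dose this averaged uncertainty would shrink the most, something a sketch like this can support by refitting on augmented designs.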

9.
We performed benchmark exposure (BME) calculations for particulate matter when multiple dichotomous outcome variables are involved using latent class modeling techniques and generated separate results for both the extra risk and the additional risk. The use of latent class models in this study is advantageous because it combined several outcomes into just two classes (namely, a high‐risk class and a low‐risk class) and compared these two classes to obtain the BME levels. This novel approach addresses a key problem in risk estimation, namely the multiple comparisons problem, where separate regression models are fitted for each outcome variable and the reference exposure relies on the results of the best‐fitting model. Because of the complex nature of the estimation process, the bootstrap approach was used to estimate the reference exposure level, thereby reducing uncertainty in the obtained values. The methodology developed in this article was applied to environmental data by identifying unmeasured class membership (e.g., a morbidity vs. no-morbidity class) among infants in utero using observed characteristics that included low birth weight, preterm birth, and small for gestational age.
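A hedged sketch of the latent-class idea: EM for a two-class model with conditionally independent binary outcomes, standing in for low birth weight, preterm birth, and small for gestational age. The exposure regression and the bootstrap BME step are omitted, and all names and data are illustrative.

```python
import numpy as np

def lca_em(Y, n_iter=300, seed=0):
    """EM for a two-class latent class model with independent binary outcomes;
    class 1 plays the role of the 'high-risk' (morbidity) class."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    pi = 0.5                                # P(high-risk class)
    p = rng.uniform(0.2, 0.8, size=(2, m))  # outcome probs within each class
    for _ in range(n_iter):
        lik = np.stack([(p[k] ** Y * (1 - p[k]) ** (1 - Y)).prod(axis=1)
                        for k in (0, 1)], axis=1)        # (n, 2)
        w = lik * np.array([1 - pi, pi])                 # E-step
        r = w[:, 1] / w.sum(axis=1)                      # P(class 1 | data)
        pi = r.mean()                                    # M-step
        p[1] = (r[:, None] * Y).sum(0) / r.sum()
        p[0] = ((1 - r)[:, None] * Y).sum(0) / (1 - r).sum()
    return pi, p, r

# Synthetic outcomes for 1000 infants, 3 binary endpoints.
rng = np.random.default_rng(1)
z = rng.random(1000) < 0.3                               # true class labels
probs = np.where(z[:, None], [0.6, 0.5, 0.4], [0.1, 0.05, 0.08])
Y = (rng.random((1000, 3)) < probs).astype(float)
pi_hat, p_hat, r = lca_em(Y)
print(pi_hat)   # near 0.3 (up to label switching between the two classes)
```

Bootstrapping rows of Y and rerunning the fit, as the abstract describes, would give uncertainty bands for the class-based reference exposure.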

10.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.
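The first-step/second-step structure can be sketched as follows: block-wise spot variance estimates from high-frequency returns are plugged into a Riemann-sum analogue of an integrated functional. The paper's high-order nonlinearity bias correction is omitted, and the block size and names are assumptions.

```python
import numpy as np

def spot_variance_path(returns, k_n, dt):
    """Step 1: nonparametric spot variance on non-overlapping blocks
    of k_n high-frequency returns (local realized variance)."""
    m = returns.size // k_n
    r = returns[:m * k_n].reshape(m, k_n)
    return (r ** 2).sum(axis=1) / (k_n * dt)

def integrated_functional(g, returns, k_n, dt):
    """Step 2: sample analogue of the integral of g(c_t) dt, with c_t the
    spot variance; the paper's nonlinearity bias correction is omitted."""
    c_hat = spot_variance_path(returns, k_n, dt)
    return g(c_hat).sum() * k_n * dt

# Toy check with constant variance c = 0.04 over one unit of time.
rng = np.random.default_rng(2)
n, dt = 23_400, 1 / 23_400
r = np.sqrt(0.04 * dt) * rng.standard_normal(n)
print(integrated_functional(np.sqrt, r, k_n=100, dt=dt))  # ~ sqrt(0.04) = 0.2
```

In the GMM step, such integrated functionals serve as the sample moments matched to their model-implied counterparts.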

11.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two strategies of BMA, one based on maximum likelihood estimation and one based on Markov chain Monte Carlo sampling, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates are more reliable than the estimates from the individual model with the highest posterior weight, yielding higher BMDLs and narrower 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study indicate that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.

12.
The neurotoxic effects of chemical agents are often investigated in controlled studies on rodents, with binary and continuous multiple endpoints routinely collected. One goal is to conduct quantitative risk assessment to determine safe dose levels. Yu and Catalano (2005) describe a method for quantitative risk assessment for bivariate continuous outcomes by extending a univariate method of percentile regression. The model is likelihood based and allows for separate dose‐response models for each outcome while accounting for the bivariate correlation. The approach to benchmark dose (BMD) estimation is analogous to that for quantal data without having to specify arbitrary cutoff values. In this article, we evaluate the behavior of the BMD relative to background rates, sample size, level of bivariate correlation, dose‐response trend, and distributional assumptions. Using simulations, we explore the effects of these factors on the resulting BMD and BMDL distributions. In addition, we illustrate our method with data from a neurotoxicity study of parathion exposure in rats.

13.
Within a multi-factor HJM framework, a class of non-Markovian forward-rate models with a particular volatility-structure specification is transformed into a Markovian model and cast in state-space form. A maximum likelihood estimation method based on the unscented Kalman filter is then introduced to estimate the HJM model, resolving the problems posed by model nonlinearity and latent state variables. In the empirical study, the volatility structure of the HJM model is specified according to the observed dynamics of the Shanghai Interbank Offered Rate (SHIBOR) term structure, a stochastic market price of risk is introduced, and a three-factor HJM model of the SHIBOR term structure is constructed. The results show that the three-factor HJM model captures the dynamics and the volatility structure of the SHIBOR term structure well, and that the level and slope factors are the main forces driving the SHIBOR rate system.
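The estimation step rests on evaluating a state-space likelihood with a filter. As a hedged sketch, the prediction-error log-likelihood of a linear-Gaussian simplification is shown below (the paper uses the unscented filter precisely because its model is nonlinear); the matrices and the one-factor toy example are invented for illustration. Maximizing this function over the volatility parameters with a numerical optimizer gives the MLE.

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    """Prediction-error log-likelihood of a linear-Gaussian state-space model
    x_t = A x_{t-1} + w_t, y_t = C x_t + v_t, w ~ N(0, Q), v ~ N(0, R)."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        x, P = A @ x, A @ P @ A.T + Q              # predict
        S = C @ P @ C.T + R                        # innovation variance
        e = yt - C @ x                             # prediction error
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S))
                      + e @ np.linalg.solve(S, e))
        K = P @ C.T @ np.linalg.inv(S)             # Kalman gain
        x, P = x + K @ e, P - K @ C @ P            # update
    return ll

# Toy usage: one latent factor observed in two noisy yields.
rng = np.random.default_rng(3)
A = np.array([[0.95]]); C = np.array([[1.0], [0.5]])
Q = np.array([[0.01]]); R = 0.001 * np.eye(2)
x, ys = np.zeros(1), []
for _ in range(300):
    x = A @ x + rng.multivariate_normal([0.0], Q)
    ys.append(C @ x + rng.multivariate_normal([0.0, 0.0], R))
print(kalman_loglik(ys, A, C, Q, R, np.zeros(1), np.eye(1)))
```

In the unscented variant, the predict and update steps propagate sigma points through the nonlinear state and observation maps instead of the linear A and C used here.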

14.
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross‐sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured.

15.
The econometric literature of high frequency data often relies on moment estimators which are derived from assuming local constancy of volatility and related quantities. We here study this local‐constancy approximation as a general approach to estimation in such data. We show that the technique yields asymptotic properties (consistency, normality) that are correct subject to an ex post adjustment involving asymptotic likelihood ratios. These adjustments are derived and documented. Several examples of estimation are provided: powers of volatility, leverage effect, and integrated betas. The first-order approximations based on local constancy can be over the period of one observation or over blocks of successive observations. The approach has the advantage of transparency in defining and analyzing estimators. The theory relies heavily on the interplay between stable convergence and measure change, and on asymptotic expansions for martingales.
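To illustrate the local-constancy logic for powers of volatility: treating volatility as constant within each sampling interval makes a return behave like sigma * sqrt(dt) * N(0,1), which yields the scaled power-variation estimator below. The sampling frequency and toy parameters are assumptions, and the ex post likelihood-ratio adjustment from the paper is not implemented.

```python
import math
import numpy as np

def realized_power_variation(returns, p, dt):
    """Estimate the integral of sigma_t^p over the sample period: under local
    constancy each return is sigma * sqrt(dt) * Z with Z ~ N(0,1), so summing
    |r|^p and rescaling by E|Z|^p recovers the integrated power of volatility."""
    mu_p = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
    return (np.abs(returns) ** p).sum() * dt ** (1 - p / 2) / mu_p

# Toy check: constant volatility sigma = 0.2 over one unit of time.
rng = np.random.default_rng(4)
n, sigma = 23_400, 0.2
r = sigma * math.sqrt(1 / n) * rng.standard_normal(n)
print(realized_power_variation(r, 2, 1 / n))  # ~ sigma**2 = 0.04
print(realized_power_variation(r, 4, 1 / n))  # ~ sigma**4 = 0.0016
```

For p = 2 the scaling constant is 1 and the estimator reduces to realized variance; other powers need the moment factor mu_p.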

16.
This paper proposes a general approach and a computationally convenient estimation procedure for the structural analysis of auction data. Considering first‐price sealed‐bid auction models within the independent private value paradigm, we show that the underlying distribution of bidders' private values is identified from observed bids and the number of actual bidders without any parametric assumptions. Using the theory of minimax, we establish the best rate of uniform convergence at which the latent density of private values can be estimated nonparametrically from available data. We then propose a two‐step kernel‐based estimator that converges at the optimal rate.
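A hedged sketch of the two-step logic on synthetic uniform-value auctions: the first step estimates the bid distribution and density by kernel and empirical methods and inverts the first-order condition v = b + G(b) / ((I-1) g(b)) into pseudo private values; the second step applies a kernel density to those pseudo-values. Boundary trimming and the bandwidth choices from the paper are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def gpv_pseudo_values(bids, n_bidders):
    """Step 1: recover pseudo private values v = b + G(b) / ((I-1) g(b))
    from first-price sealed bids within the IPV paradigm."""
    bids = np.asarray(bids, float)
    g = gaussian_kde(bids)                            # kernel bid density
    G = np.array([(bids <= b).mean() for b in bids])  # empirical bid cdf
    return bids + G / ((n_bidders - 1) * g(bids))

# Synthetic auctions: values ~ U(0,1); equilibrium bid is b = v * (I-1) / I.
rng = np.random.default_rng(5)
I, values = 4, rng.uniform(0, 1, 2000)
bids = values * (I - 1) / I
v_hat = gpv_pseudo_values(bids, I)
value_density = gaussian_kde(v_hat)                   # step 2: latent density
print(v_hat.mean())                                   # should be near 0.5
```

In this uniform example the inversion is exact up to kernel error, since v = b + b/(I-1) analytically; the same recipe applies with unknown value distributions.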

17.
This paper provides a first-order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformative) instruments and some panel data models that cover moderate time spans and have correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent limited information maximum likelihood estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.

18.
Obvious spatial infection patterns are often observed in cases associated with airborne transmissible diseases. Existing quantitative infection risk assessment models analyze the observed cases by assuming a homogeneous infectious particle concentration and ignore the spatial infection pattern, which may cause errors. This study aims at developing an approach to analyze spatial infection patterns associated with infectious respiratory diseases or other airborne transmissible diseases using infection risk assessment and likelihood estimation. Mathematical likelihood, based on binomial probability, was used to formulate the retrospective component with some additional mathematical treatments. Together with an infection risk assessment model that can address spatial heterogeneity, the method can be used to analyze the spatial infection pattern and retrospectively estimate the influencing parameters causing the cases, such as the infectious source strength of the pathogen. A Varicella outbreak was selected to demonstrate the use of the new approach. The infectious source strength estimated by the Wells‐Riley concept using the likelihood estimation was compared with the estimation using the existing method. It was found that the maximum likelihood estimation matches the epidemiological observation of the outbreak case much better than the estimation under the assumption of homogeneous infectious particle concentration. Influencing parameters retrospectively estimated using the new approach can be used as input parameters in quantitative infection risk assessment of the disease under other scenarios. The approach developed in this study can also serve as an epidemiological tool in outbreak investigation. Limitations and further developments are also discussed.
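A minimal sketch of the retrospective step: zone-wise attack probabilities from a Wells-Riley-type model enter a binomial likelihood over the observed spatial pattern, and the infectious source strength is estimated by maximizing it. The zone data and the relative-exposure factors are invented for illustration; a full model would derive them from airflow simulations.

```python
import numpy as np
from scipy import optimize

# Hypothetical zones: susceptibles, observed infections, and a dilution
# factor capturing spatially heterogeneous exposure to the source.
S        = np.array([12, 15, 10, 8])        # susceptibles per zone
infected = np.array([6, 4, 1, 0])           # observed cases per zone
exposure = np.array([1.0, 0.6, 0.2, 0.05])  # relative dose per quantum emitted

def neg_log_lik(q):
    """Binomial likelihood of the spatial infection pattern, with zone-wise
    attack rates from the Wells-Riley form P = 1 - exp(-q * exposure)."""
    p = np.clip(1 - np.exp(-q * exposure), 1e-12, 1 - 1e-12)
    return -(infected * np.log(p) + (S - infected) * np.log(1 - p)).sum()

res = optimize.minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print("ML estimate of source strength q:", res.x)
```

Because each zone contributes its own term, the estimate is pulled toward the value that reproduces the whole spatial pattern, not just the overall attack rate, which is the abstract's point of contrast with the homogeneous-concentration assumption.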

19.
A Study of Long Memory in Ultra-High-Frequency Duration Series in the Chinese Stock Market
For ultra-high-frequency duration series in the stock market, a long-memory stochastic conditional duration (LMSCD) model is proposed, together with a spectral-likelihood parameter estimation method based on a chaotic tabu-search genetic algorithm; Monte Carlo simulation experiments confirm the feasibility of the method. Long-memory stochastic conditional duration models are then fitted to the trade durations, price durations, and volume durations of Shanghai Pudong Development Bank stock using ultra-high-frequency data, confirming the presence of long memory in ultra-high-frequency duration series on the Chinese stock market.
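The spectral-likelihood component can be sketched with a Whittle-type objective. Below, the long-memory parameter d of a plain fractionally integrated series is estimated by minimizing the Whittle criterion; the LMSCD's latent structure and the chaotic tabu-search genetic optimizer are replaced by a toy signal and a bounded scalar search, so everything here is illustrative.

```python
import numpy as np
from scipy import optimize

def whittle_nll(d, x):
    """Whittle (spectral) negative log-likelihood for an ARFIMA(0, d, 0)-type
    long-memory model with spectral density f(w) = c * |2 sin(w/2)|^(-2d)."""
    n = x.size
    w = 2 * np.pi * np.arange(1, n // 2) / n                # Fourier frequencies
    per = np.abs(np.fft.fft(x - x.mean())[1:n // 2]) ** 2 / (2 * np.pi * n)
    f = np.abs(2 * np.sin(w / 2)) ** (-2 * d)
    c = (per / f).mean()                                    # profile out the scale
    return np.log(c * f).sum() + (per / (c * f)).sum()

# Toy long-memory series: truncated fractional integration of white noise.
rng = np.random.default_rng(6)
n, d_true = 4096, 0.3
eps = rng.standard_normal(n)
psi = np.cumprod(np.r_[1.0, (d_true + np.arange(n - 1)) / np.arange(1, n)])
x = np.convolve(eps, psi)[:n]                               # (1-L)^(-d) eps
res = optimize.minimize_scalar(lambda d: whittle_nll(d, x),
                               bounds=(0.01, 0.49), method="bounded")
print("Whittle estimate of d:", res.x)                      # near 0.3
```

For the LMSCD itself, the periodogram of the log-duration series would be matched against the model-implied spectral density, and a global optimizer of the kind the paper designs handles the resulting multimodal criterion.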

20.
Charles Vlek. Risk Analysis, 2013, 33(6): 948–971
Internationally, national risk assessment (NRA) is rapidly gaining government sympathy as a science‐based approach toward prioritizing the management of national hazards and threats, with the Netherlands and the United Kingdom in leading positions since 2007. NRAs are proliferating in Europe; they are also conducted in Australia, Canada, New Zealand, and the United States, while regional RAs now exist for over 100 Dutch or British provinces or counties. Focused on the Dutch NRA (DNRA) and supported by specific examples, summaries and evaluations are given of its (1) scenario development, (2) impact assessment, (3) likelihood estimation, (4) risk diagram, and (5) capability analysis. Despite the DNRA's thorough elaboration, apparent weaknesses are lack of stakeholder involvement, possibility of false‐positive risk scenarios, rigid multicriteria impact evaluation, hybrid methods for likelihood estimation, half‐hearted use of a "probability × effect" definition of risk, forced comparison of divergent risk scenarios, and unclear decision rules for risk acceptance and safety enhancement. Such weaknesses are not unique to the DNRA. In line with a somewhat reserved encouragement by the OECD (Studies in Risk Management: Innovation in Country Risk Management. Paris: OECD, 2009), the scientific solidity of NRA results so far is questioned, and several improvements are suggested. One critical point is that expert‐driven NRAs may preempt political judgments and decisions by national security authorities. External review and validation of major NRA components is recommended for strengthening overall results as a reliable basis for national and/or regional safety policies. Meanwhile, a broader, more transactional concept of risk may lead to better national and regional risk assessments.
