Similar documents (20 retrieved)
1.
Hickey GL, Craig PS. Risk Analysis, 2012, 32(7): 1232-1243
A species sensitivity distribution (SSD) models data on toxicity of a specific toxicant to species in a defined assemblage. SSDs are typically assumed to be parametric, despite noteworthy criticism, with a standard proposal being the log-normal distribution. Recently, and confusingly, there have emerged different statistical methods in the ecotoxicological risk assessment literature, independent of the distributional assumption, for fitting SSDs to toxicity data with the overall aim of estimating the concentration of the toxicant that is hazardous to p% of the biological assemblage (usually with p small). We analyze two such estimators derived from simple linear regression applied to the ordered log-transformed toxicity data values and probit-transformed rank-based plotting positions. These are compared to the more intuitive and statistically defensible confidence limit-based estimator. We conclude, based on a large-scale simulation study, that the latter estimator should be used in typical assessments where a pointwise value of the hazardous concentration is required.
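The two types of point estimator compared in this abstract can be sketched in a few lines. The following is a minimal illustration (not the authors' code), assuming a log-normal SSD and made-up toxicity values: estimator (i) regresses the ordered log-toxicity values on probit-transformed plotting positions, while estimator (ii) is the plug-in value from the fitted log-normal; the confidence-limit-based estimator studied in the paper is not reproduced here.

```python
# Sketch: two log-normal SSD point estimators of HC_p (illustrative toxicity data).
import numpy as np
from scipy import stats

toxicity = np.array([1.2, 3.4, 5.6, 8.1, 12.0, 19.5, 33.0, 47.2, 88.0, 130.0])  # hypothetical values
p = 0.05                                     # aim to protect 95% of species (HC5)
x = np.sort(np.log(toxicity))
n = x.size

# (i) regression of ordered log-toxicity on probit-transformed plotting positions
plot_pos = (np.arange(1, n + 1) - 0.5) / n   # one common choice of plotting position
probit = stats.norm.ppf(plot_pos)
slope, intercept, *_ = stats.linregress(probit, x)
hc_p_regression = np.exp(intercept + slope * stats.norm.ppf(p))

# (ii) plug-in estimate from the fitted log-normal distribution
mu, sigma = x.mean(), x.std(ddof=1)
hc_p_plugin = np.exp(mu + sigma * stats.norm.ppf(p))

print(f"HC5, regression on plotting positions: {hc_p_regression:.2f}")
print(f"HC5, log-normal plug-in:               {hc_p_plugin:.2f}")
```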

2.
Volatility matrix estimation based on high-frequency data can effectively overcome many of the bottlenecks faced by traditional low-frequency estimation. However, because of non-synchronous trading and market microstructure noise, the conventional high-frequency realized volatility matrix estimator exhibits the Epps effect and deviates from its theoretical value. This paper considers three methods for synchronizing non-synchronous tick-by-tick high-frequency data and five bias-correction and noise-reduction methods for the conventional realized volatility matrix, and carries out a comprehensive comparative study of the two classes of methods using both numerical simulation and an empirical analysis of the Shanghai and Shenzhen stock markets. The results show that refresh-time synchronization retains the most data information; the conventional, uncorrected realized volatility matrix suffers from the Epps effect and has a large bias; the multivariate realized kernel estimator, the two-scale realized volatility matrix estimator, and the adjusted realized volatility matrix estimator correct bias and reduce noise well; and the pre-averaged HY estimator and the HY estimator perform relatively poorly. These results provide a methodological reference and guidance for further research and applications in related fields.
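As a rough illustration of the synchronization step discussed above, the sketch below applies refresh-time sampling to two simulated, asynchronously observed log-price series and then computes the plain (uncorrected) realized covariance matrix; the tick simulation and all parameters are illustrative, and none of the five bias-corrected estimators compared in the paper are implemented here.

```python
# Sketch: refresh-time synchronization of two asynchronous tick series + realized covariance.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ticks(n_ticks, t_end=1.0):
    """Irregular observation times and log prices from a toy random walk (illustrative)."""
    t = np.sort(rng.uniform(0.0, t_end, n_ticks))
    p = np.cumsum(rng.normal(0.0, 0.01, n_ticks))
    return t, p

t1, p1 = simulate_ticks(500)
t2, p2 = simulate_ticks(300)

def refresh_times(t1, t2):
    """Refresh-time grid: each point waits until *both* assets have traded again."""
    grid, i, j = [], 0, 0
    while i < len(t1) and j < len(t2):
        tau = max(t1[i], t2[j])
        grid.append(tau)
        i = np.searchsorted(t1, tau, side="right")
        j = np.searchsorted(t2, tau, side="right")
    return np.array(grid)

grid = refresh_times(t1, t2)
# last-tick interpolation of each series onto the refresh-time grid
x1 = p1[np.searchsorted(t1, grid, side="right") - 1]
x2 = p2[np.searchsorted(t2, grid, side="right") - 1]

returns = np.diff(np.vstack([x1, x2]), axis=1)
realized_cov = returns @ returns.T          # uncorrected realized covariance matrix
print(realized_cov)
```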

3.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K–stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade–off between finite sample precision and computational cost in the sequence of policy iteration estimators.

4.
We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the difference of the forward (conventional) 2SLS estimator of the coefficient of the right-hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test sees whether the resulting difference in the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second-order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar-type estimators are not significantly different we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar-type estimators (to second order). We also demonstrate the high degree of similarity for k-class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
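The core idea, comparing the forward 2SLS estimate with the inverted reverse 2SLS estimate of the same coefficient, can be illustrated with simulated data. The sketch below is not the paper's test statistic (no second-order correction is computed); the data-generating process is invented for illustration.

```python
# Sketch: forward versus reverse 2SLS estimates of the same structural coefficient.
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 5
Z = rng.normal(size=(n, k))                       # instruments
u = rng.normal(size=n)
x = Z @ rng.normal(0.3, 0.1, size=k) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = 1.5 * x + u                                   # structural equation, true beta = 1.5

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)             # projection onto the instrument space

beta_forward = (x @ P @ y) / (x @ P @ x)          # conventional 2SLS of y on x
beta_reverse = (y @ P @ y) / (y @ P @ x)          # invert the 2SLS of x on y

print(f"forward 2SLS : {beta_forward:.3f}")
print(f"reverse 2SLS : {beta_reverse:.3f}")
print(f"difference   : {beta_forward - beta_reverse:.3f}")
```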

5.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with the censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
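A minimal sketch of the bootstrap/MLE idea, using simulated data rather than an emission-factor data set: non-detects are treated as left-censored at a single detection limit, a log-normal is fitted by maximum likelihood to each bootstrap resample, and the resulting means form an uncertainty distribution.

```python
# Sketch: empirical bootstrap + censored-data MLE for a log-normal (illustrative data).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
detection_limit = 1.0
true_sample = rng.lognormal(mean=0.2, sigma=0.8, size=40)
detected = true_sample >= detection_limit
values = np.where(detected, true_sample, detection_limit)   # non-detects reported at the DL

def neg_loglik(params, values, detected, dl):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # detected values contribute the log-normal density; non-detects contribute P(X < DL)
    ll_det = stats.norm.logpdf(np.log(values[detected]), mu, sigma) - np.log(values[detected])
    ll_cens = stats.norm.logcdf(np.log(dl), mu, sigma)
    return -(ll_det.sum() + (~detected).sum() * ll_cens)

def fit_lognormal_mean(values, detected, dl):
    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], args=(values, detected, dl))
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu + 0.5 * sigma**2)                      # mean of the fitted log-normal

boot_means = []
n = values.size
for _ in range(500):
    idx = rng.integers(0, n, n)                             # empirical bootstrap resample
    boot_means.append(fit_lognormal_mean(values[idx], detected[idx], detection_limit))

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap confidence interval for the mean: ({lo:.2f}, {hi:.2f})")
```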

6.
ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.

7.
A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this article, exposure data from repeated visits to 37 persons over 1 year (up to 28 measurements per person) are used to test the model. Both fine particle mass and elemental concentrations measured indoors, outdoors, and on the person are examined. To provide the most stringent test of the method, only two single-day distributions are randomly selected for each element to predict the long-term distributions. The precision of the method in estimating the long-term geometric mean and geometric standard deviation appears to be of the order of 10%, with no apparent bias. The precision in estimating the 99th percentile ranges from 19% to 48%, again without obvious bias. The precision can be improved by selecting a number of pairs of single-day distributions instead of just one pair. Occasionally, the method fails to provide an estimate for the long-term distribution. In that case, a repeat of the random selection procedure can provide an estimate. Although the method assumes a log-normal distribution, most of the distributions tested failed the chi-square test for log-normality. Therefore, the method appears suitable for application to distributions that depart from log-normality.

8.
In this paper a Monte Carlo sampling study consisting of four experiments is described. Two error distributions were employed, the normal and the Laplace, and two small sample sizes (20 and 40) were tested. The question of simultaneous-equation bias called for two-stage estimators. The L1 norm was employed as a means of comparing the performance of the L1 and least squares estimators. A relatively new algorithm for computing the direct least absolute (DLA) and two-stage least absolute (TSLA) estimators was employed for the experiments. The results confirmed the hypothesis that for non-normal error distributions such as the Laplace, the least absolute estimators perform better.
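A minimal single-equation sketch of the comparison (the two-stage, simultaneous-equation case is omitted): ordinary least squares versus a direct least-absolute (L1) estimator under Laplace errors, with the L1 distance from the true coefficients as the performance measure. Sample size and parameters are illustrative.

```python
# Sketch: OLS versus a direct least-absolute (LAD) estimator under Laplace errors.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
n = 40
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + rng.laplace(0.0, 1.0, n)       # heavy-tailed (Laplace) errors

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

def sum_abs_residuals(beta):
    return np.abs(y - X @ beta).sum()

# Nelder-Mead handles the non-differentiable L1 objective; start from the OLS fit
beta_lad = optimize.minimize(sum_abs_residuals, x0=beta_ols, method="Nelder-Mead").x

print("OLS:", beta_ols)
print("LAD:", beta_lad)
print("L1 distance from truth, OLS:", np.abs(beta_ols - beta_true).sum())
print("L1 distance from truth, LAD:", np.abs(beta_lad - beta_true).sum())
```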

9.
The purpose of this note is to show how semiparametric estimators with a small bias property can be constructed. The small bias property (SBP) of a semiparametric estimator is that its bias converges to zero faster than the pointwise and integrated bias of the nonparametric estimator on which it is based. We show that semiparametric estimators based on twicing kernels have the SBP. We also show that semiparametric estimators where nonparametric kernel estimation does not affect the asymptotic variance have the SBP. In addition we discuss an interpretation of series and sieve estimators as idempotent transformations of the empirical distribution that helps explain the known result that they lead to the SBP. In Monte Carlo experiments we find that estimators with the SBP have mean-square error that is smaller and less sensitive to bandwidth than those that do not have the SBP.
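The twicing-kernel construction mentioned above, K_tw = 2K - K*K, is easy to illustrate for density estimation with a Gaussian kernel, since K*K is then again Gaussian with standard deviation sqrt(2); the sketch below (illustrative data and bandwidth, not the paper's semiparametric application) forms the twiced estimate as a combination of two ordinary kernel estimates.

```python
# Sketch: a twicing-kernel density estimate, K_tw = 2K - K*K, for a Gaussian kernel.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(size=200)
h = 0.5                                     # illustrative bandwidth
grid = np.linspace(-3, 3, 121)

def kde(x, data, bandwidth):
    return stats.norm.pdf((x[:, None] - data[None, :]) / bandwidth).mean(axis=1) / bandwidth

f_plain = kde(grid, data, h)
f_conv = kde(grid, data, np.sqrt(2) * h)    # estimate using the kernel K*K
f_twiced = 2 * f_plain - f_conv             # twicing-kernel estimate

truth = stats.norm.pdf(grid)
print("max abs error, plain :", np.max(np.abs(f_plain - truth)))
print("max abs error, twiced:", np.max(np.abs(f_twiced - truth)))
```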

10.
The ill-posedness of the nonparametric instrumental variable (NPIV) model leads to estimators that may suffer from poor statistical performance. In this paper, we explore the possibility of imposing shape restrictions to improve the performance of the NPIV estimators. We assume that the function to be estimated is monotone and consider a sieve estimator that enforces this monotonicity constraint. We define a constrained measure of ill-posedness that is relevant for the constrained estimator and show that, under a monotone IV assumption and certain other mild regularity conditions, this measure is bounded uniformly over the dimension of the sieve space. This finding is in stark contrast to the well-known result that the unconstrained sieve measure of ill-posedness that is relevant for the unconstrained estimator grows to infinity with the dimension of the sieve space. Based on this result, we derive a novel non-asymptotic error bound for the constrained estimator. The bound gives a set of data-generating processes for which the monotonicity constraint has a particularly strong regularization effect and considerably improves the performance of the estimator. The form of the bound implies that the regularization effect can be strong even in large samples and even if the function to be estimated is steep, particularly so if the NPIV model is severely ill-posed. Our simulation study confirms these findings and reveals the potential for large performance gains from imposing the monotonicity constraint.

11.
We present estimators for nonparametric functions that are nonadditive in unobservable random terms. The distributions of the unobservable random terms are assumed to be unknown. We show that when a nonadditive, nonparametric function is strictly monotone in an unobservable random term, and it satisfies some other properties that may be implied by economic theory, such as homogeneity of degree one or separability, the function and the distribution of the unobservable random term are identified. We also present convenient normalizations, to use when the properties of the function, other than strict monotonicity in the unobservable random term, are unknown. The estimators for the nonparametric function and for the distribution of the unobservable random term are shown to be consistent and asymptotically normal. We extend the results to functions that depend on a multivariate random term. The results of a limited simulation study are presented.
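A minimal sketch of the identification idea behind this abstract: with the unobservable term normalized to be Uniform(0, 1) and the function strictly increasing in it, the structural function at quantile tau equals the conditional tau-quantile of the outcome, which can be estimated by a kernel-weighted sample quantile. The data-generating process, bandwidth, and evaluation points below are illustrative, and the authors' estimator is not reproduced.

```python
# Sketch: recovering a nonadditive m(x, e), strictly increasing in e ~ U(0,1),
# via kernel-weighted conditional quantiles (illustrative DGP and tuning).
import numpy as np

rng = np.random.default_rng(13)
n = 2000
x = rng.uniform(0, 1, n)
e = rng.uniform(0, 1, n)                       # normalized unobservable term
y = (1 + x) * np.sqrt(e) + x                   # a nonadditive structural function m(x, e)

def weighted_quantile(values, weights, tau):
    order = np.argsort(values)
    cum = np.cumsum(weights[order]) / weights.sum()
    return values[order][np.searchsorted(cum, tau)]

def m_hat(x0, tau, h=0.1):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)     # Gaussian kernel weights in X
    return weighted_quantile(y, w, tau)

tau = 0.5
for x0 in (0.25, 0.5, 0.75):
    true_val = (1 + x0) * np.sqrt(tau) + x0
    print(f"m({x0}, {tau}): estimate {m_hat(x0, tau):.3f}   truth {true_val:.3f}")
```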

12.
In the regression-discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a "large" bandwidth, leading to data-driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory-based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias-corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close-to-correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
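For orientation, the sketch below computes only the conventional sharp-RD point estimate by local linear regression with a triangular kernel on each side of the cutoff; the paper's robust bias-corrected confidence intervals are available in the authors' companion R and Stata packages and are not reimplemented here. The data and bandwidth are simulated and illustrative.

```python
# Sketch: sharp-RD point estimate by local linear regression with a triangular kernel.
import numpy as np

rng = np.random.default_rng(5)
n, cutoff, h = 1000, 0.0, 0.3
x = rng.uniform(-1, 1, n)                       # running variable
tau = 0.4                                       # true treatment effect at the cutoff
y = 0.5 * x + tau * (x >= cutoff) + rng.normal(0, 0.2, n)

def local_linear_intercept(xs, ys, h):
    """Weighted least squares of y on (1, x) with triangular kernel weights."""
    w = np.clip(1 - np.abs(xs) / h, 0, None)
    X = np.column_stack([np.ones_like(xs), xs])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ ys)
    return coef[0]                              # intercept = fitted value at the cutoff

right = x >= cutoff
mu_plus = local_linear_intercept(x[right] - cutoff, y[right], h)
mu_minus = local_linear_intercept(x[~right] - cutoff, y[~right], h)
print(f"local linear RD estimate of the treatment effect: {mu_plus - mu_minus:.3f}")
```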

13.
Pandu R. Tadikamalla. Omega, 1984, 12(6): 575-581
Several distributions have been used for approximating the lead time demand distribution in inventory systems. We compare five distributions (the normal, the logistic, the lognormal, the gamma, and the Weibull) for obtaining the expected number of back orders, the reorder level that achieves a given protection level, and the optimal order quantity and reorder level in continuous review models of the (Q, r) type. The normal and the logistic distributions are inadequate for representing situations where the coefficient of variation (the ratio of the standard deviation to the mean) of the lead time demand distribution is large. The lognormal, the gamma, and the Weibull distributions are versatile and adequate; however, the lognormal seems to be a particularly viable candidate because of its computational simplicity.
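Two of the quantities compared in this study are easy to sketch for the normal and log-normal cases with a common mean and coefficient of variation: the reorder level that achieves a target no-stockout probability and the expected number of back orders at that level. The numbers below are illustrative.

```python
# Sketch: reorder level and expected back orders under normal vs log-normal lead time demand.
import numpy as np
from scipy import stats

mean, cv, service = 100.0, 0.6, 0.95        # illustrative mean, coefficient of variation, protection
sd = cv * mean

# normal lead time demand
r_norm = stats.norm.ppf(service, mean, sd)
z = (r_norm - mean) / sd
backorders_norm = sd * (stats.norm.pdf(z) - z * stats.norm.sf(z))   # standard normal loss function

# log-normal with matching mean and CV
sigma2 = np.log(1 + cv**2)
mu = np.log(mean) - sigma2 / 2
sigma = np.sqrt(sigma2)
r_logn = stats.lognorm.ppf(service, s=sigma, scale=np.exp(mu))
# E[(D - r)^+] via the log-normal partial expectation E[D; D > r] - r * P(D > r)
partial_exp = mean * stats.norm.cdf((mu + sigma2 - np.log(r_logn)) / sigma)
backorders_logn = partial_exp - r_logn * stats.lognorm.sf(r_logn, s=sigma, scale=np.exp(mu))

print(f"normal    : reorder level {r_norm:7.1f}, expected back orders {backorders_norm:.2f}")
print(f"log-normal: reorder level {r_logn:7.1f}, expected back orders {backorders_logn:.2f}")
```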

14.
Motivated by interest in making delay announcements in service systems, we study real-time delay estimators in many-server service systems, both with and without customer abandonment. Our main contribution here is to consider the realistic feature of time-varying arrival rates. We focus especially on delay estimators exploiting recent customer delay history. We show that time-varying arrival rates can introduce significant estimation bias in delay-history-based delay estimators when the system experiences alternating periods of overload and underload. We then introduce refined delay-history estimators that effectively cope with time-varying arrival rates together with non-exponential service-time and abandonment-time distributions, which are often observed in practice. We use computer simulation to verify that our proposed estimators outperform several natural alternatives.
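A much-simplified sketch of a delay-history estimator: in a simulated single-server FIFO queue (the paper studies many-server systems) with a sinusoidally time-varying arrival rate, each arrival is "announced" the delay of the last customer to have entered service (an LES-type estimator), and the announcement error is recorded. All parameters are illustrative.

```python
# Sketch: an LES-type delay-history estimator in a single-server queue with time-varying arrivals.
import numpy as np

rng = np.random.default_rng(10)
n, mu = 5000, 1.0                                     # customers and service rate (illustrative)

# nonhomogeneous Poisson arrivals via thinning: lambda(t) = 0.7 + 0.25*sin(2*pi*t/500)
lam_max = 0.95
arrivals, t = [], 0.0
while len(arrivals) < n:
    t += rng.exponential(1.0 / lam_max)
    if rng.uniform() < (0.7 + 0.25 * np.sin(2 * np.pi * t / 500.0)) / lam_max:
        arrivals.append(t)
arrivals = np.array(arrivals)
services = rng.exponential(1.0 / mu, n)

# Lindley recursion for each customer's waiting time in a single-server FIFO queue
wait = np.zeros(n)
for i in range(1, n):
    wait[i] = max(0.0, wait[i - 1] + services[i - 1] - (arrivals[i] - arrivals[i - 1]))
service_start = arrivals + wait

# announcement: delay of the last customer to have entered service before arrival i
announced = np.zeros(n)
for i in range(1, n):
    j = np.searchsorted(service_start[:i], arrivals[i], side="right") - 1
    announced[i] = wait[j] if j >= 0 else 0.0

err = announced[1:] - wait[1:]
print(f"mean announcement error: {err.mean():+.3f}   RMSE: {np.sqrt((err ** 2).mean()):.3f}")
```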

15.
In this paper we propose a new estimator for a model with one endogenous regressor and many instrumental variables. Our motivation comes from the recent literature on the poor properties of standard instrumental variables estimators when the instrumental variables are weakly correlated with the endogenous regressor. Our proposed estimator puts a random coefficients structure on the relation between the endogenous regressor and the instruments. The variance of the random coefficients is modelled as an unknown parameter. In addition to proposing a new estimator, our analysis yields new insights into the properties of the standard two-stage least squares (TSLS) and limited-information maximum likelihood (LIML) estimators in the case with many weak instruments. We show that in some interesting cases, TSLS and LIML can be approximated by maximizing the random effects likelihood subject to particular constraints. We show that statistics based on comparisons of the unconstrained estimates of these parameters to the implicit TSLS and LIML restrictions can be used to identify settings when standard large sample approximations to the distributions of TSLS and LIML are likely to perform poorly. We also show that with many weak instruments, LIML confidence intervals are likely to have under-coverage, even though its finite sample distribution is approximately centered at the true value of the parameter. In an application with real data and simulations around this data set, the proposed estimator performs markedly better than TSLS and LIML, both in terms of coverage rate and in terms of risk.
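The many-weak-instrument problem that motivates this paper can be illustrated by comparing TSLS with LIML (computed here as a k-class estimator via the usual smallest-eigenvalue characterization) in a simulated design with many weak instruments: TSLS is pulled toward OLS while LIML stays roughly median-unbiased. The paper's random-effects estimator itself is not reproduced, and the design is illustrative.

```python
# Sketch: TSLS versus LIML with many weak instruments (illustrative Monte Carlo design).
import numpy as np

rng = np.random.default_rng(11)
n, k, beta, n_reps = 400, 30, 1.0, 300
pi = np.full(k, 0.05)                          # weak first-stage coefficients

def tsls_and_liml(y, x, Z):
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    M = np.eye(len(y)) - P
    tsls = (x @ P @ y) / (x @ P @ x)
    Ybar = np.column_stack([y, x])
    # LIML k-class parameter: smallest eigenvalue of (Ybar'M Ybar)^{-1}(Ybar'Ybar)
    lam = np.min(np.linalg.eigvals(np.linalg.solve(Ybar.T @ M @ Ybar, Ybar.T @ Ybar)).real)
    liml = (x @ (y - lam * (M @ y))) / (x @ (x - lam * (M @ x)))
    return tsls, liml

tsls_est, liml_est = [], []
for _ in range(n_reps):
    Z = rng.normal(size=(n, k))
    u = rng.normal(size=n)
    x = Z @ pi + 0.8 * u + 0.6 * rng.normal(size=n)
    y = beta * x + u
    t, l = tsls_and_liml(y, x, Z)
    tsls_est.append(t)
    liml_est.append(l)

print(f"median TSLS: {np.median(tsls_est):.3f}   median LIML: {np.median(liml_est):.3f}   (true beta = {beta})")
```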

16.
The dose to human and nonhuman individuals inflicted by anthropogenic radiation is an important issue in international and domestic policy. The current paradigm for nonhuman populations asserts that if the dose to the maximally exposed individuals in a population is below a certain criterion (e.g., < 10 mGy/d) then the population is adequately protected. Currently, there is no consensus in the regulatory community as to the best statistical approach. Statistics currently considered include the maximum likelihood estimator for the 95th percentile of the sample mean and the sample maximum. Recently, the investigators have proposed the use of the maximum likelihood estimate of a very high quantile as an estimate of dose to the maximally exposed individual. In this study, we compare all of the above-mentioned statistics to an estimate based on extreme value theory. To determine and compare the bias and variance of these statistics, we use Monte Carlo simulation techniques, in a procedure similar to a parametric bootstrap. Our results show that a statistic based on extreme value theory has the least bias of those considered here, but requires reliable estimates of the population size. We recommend establishing the criterion based on what would be considered acceptable if only a small percentage of the population exceeded the limit, and hence recommend using the maximum likelihood estimator of a high quantile in the case that reliable estimates of the population size are not available.
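A minimal sketch contrasting three of the statistics mentioned here for the dose to the maximally exposed individual: the sample maximum, the maximum likelihood estimate of a fixed high quantile, and the estimate of the (1 - 1/N) quantile, which requires the population size N. The doses are simulated from a log-normal purely for illustration, and no extreme-value-theory fit is shown.

```python
# Sketch: three candidate statistics for the dose to the maximally exposed individual.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = 5000                                                 # assumed population size (illustrative)
sample = rng.lognormal(mean=-1.0, sigma=0.7, size=60)    # sampled doses, illustrative units

mu_hat = np.log(sample).mean()                           # log-normal fit (approximate MLE)
sigma_hat = np.log(sample).std(ddof=1)

sample_max = sample.max()
q_high = np.exp(mu_hat + sigma_hat * stats.norm.ppf(0.999))           # MLE of the 99.9th percentile
q_pop_max = np.exp(mu_hat + sigma_hat * stats.norm.ppf(1 - 1.0 / N))  # quantile tied to the population size

print(f"sample maximum            : {sample_max:.3f}")
print(f"MLE of 99.9th percentile  : {q_high:.3f}")
print(f"MLE of (1 - 1/N) quantile : {q_pop_max:.3f}")
```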

17.
Matching estimators are widely used in empirical economics for the evaluation of programs or treatments. Researchers using matching methods often apply the bootstrap to calculate the standard errors. However, no formal justification has been provided for the use of the bootstrap in this setting. In this article, we show that the standard bootstrap is, in general, not valid for matching estimators, even in the simple case with a single continuous covariate where the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias. Valid inferential methods in this setting are the analytic asymptotic variance estimator of Abadie and Imbens (2006a) as well as certain modifications of the standard bootstrap, like the subsampling methods in Politis and Romano (1994).
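A minimal sketch of one valid alternative mentioned above: a one-nearest-neighbour matching estimator of the average treatment effect on the treated with a single continuous covariate, with a subsampling (Politis-Romano style) standard error in place of the invalid standard bootstrap. The data-generating process and subsample size are illustrative.

```python
# Sketch: 1-NN matching estimator of the ATT with a subsampling standard error.
import numpy as np

rng = np.random.default_rng(12)

def simulate(n):
    x = rng.normal(size=n)
    d = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-x))     # treatment more likely for large x
    y = x + 1.0 * d + rng.normal(size=n)                   # true ATT = 1.0
    return x, d, y

def att_matching(x, d, y):
    xt, yt = x[d], y[d]
    xc, yc = x[~d], y[~d]
    nearest = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)   # 1-NN control for each treated unit
    return np.mean(yt - yc[nearest])

x, d, y = simulate(1000)
att = att_matching(x, d, y)

n, b, n_sub = len(y), 200, 500                      # subsample size b << n (illustrative)
sub_stats = []
for _ in range(n_sub):
    idx = rng.choice(n, size=b, replace=False)      # subsampling draws *without* replacement
    sub_stats.append(att_matching(x[idx], d[idx], y[idx]))
se = np.sqrt(b / n) * np.std(sub_stats, ddof=1)     # rescale the subsample spread to sample size n

print(f"ATT estimate: {att:.3f}   subsampling SE: {se:.3f}")
```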

18.
Adaptive Spatial Sampling of Contaminated Soil
Cox, Louis Anthony. Risk Analysis, 1999, 19(6): 1059-1069

Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach which uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule. The simulation is performed using the data collected so far to impute multiple probable values of unknown soil concentration distributions during each simulation run. This optimized adaptive spatial sampling technique has been applied to real data using error probabilities for wrongly cleaning or wrongly failing to clean each location (compared to the action that would be taken if perfect information were available) as evaluation criteria. It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost. It also identifies strategies that are undominated with respect to all of these criteria.
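A minimal sketch of the adaptive sampling loop described above, on a simulated contamination grid: measure a few random locations, then repeatedly measure the unvisited neighbours of the highest observed concentrations until a sampling budget (a crude stand-in for the stopping rule) is exhausted, and finally flag locations above the remediation action threshold. All decision-parameter values are illustrative, and the paper's simulation-based optimization of those parameters is omitted.

```python
# Sketch: adaptive spatial sampling on a simulated contamination grid.
import numpy as np

rng = np.random.default_rng(7)
size = 20
xx, yy = np.meshgrid(np.arange(size), np.arange(size))
# unknown "true" field: one hot spot plus log-normal background (illustrative)
true_conc = 100 * np.exp(-((xx - 5) ** 2 + (yy - 14) ** 2) / 20) + rng.lognormal(1, 0.5, (size, size))

n_init, n_top, budget, action_threshold = 10, 3, 60, 30.0   # illustrative decision parameters
observed = {}                                # (row, col) -> measured concentration

def measure(cell):
    observed[cell] = true_conc[cell] * rng.lognormal(0, 0.1)   # noisy measurement

for cell in zip(rng.integers(0, size, n_init), rng.integers(0, size, n_init)):
    measure(cell)                            # initialization at random locations

while len(observed) < budget:
    before = len(observed)
    hottest = sorted(observed, key=observed.get, reverse=True)[:n_top]
    for r, c in hottest:                     # search around the current hot spots
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < size and 0 <= nb[1] < size and nb not in observed:
                measure(nb)
    if len(observed) == before:
        break                                # no unvisited neighbours left around the hot spots

to_clean = [cell for cell, conc in observed.items() if conc > action_threshold]
print(f"sampled {len(observed)} of {size * size} locations; flagged {len(to_clean)} for cleanup")
```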


19.
In certain auction, search, and related models, the boundary of the support of the observed data depends on some of the parameters of interest. For such nonregular models, standard asymptotic distribution theory does not apply. Previous work has focused on characterizing the nonstandard limiting distributions of particular estimators in these models. In contrast, we study the problem of constructing efficient point estimators. We show that the maximum likelihood estimator is generally inefficient, but that the Bayes estimator is efficient according to the local asymptotic minmax criterion for conventional loss functions. We provide intuition for this result using Le Cam's limits of experiments framework.
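The phenomenon can be seen in the textbook nonregular example of a Uniform(0, theta) sample, where the support boundary depends on the parameter: the MLE is the sample maximum and always underestimates theta, while a Bayes estimator pulls the estimate upward. The flat prior below is chosen purely for illustration (its posterior mean is max*(n-1)/(n-2)); this toy example is not the auction or search model of the paper.

```python
# Sketch: MLE versus a Bayes estimator in the nonregular Uniform(0, theta) model.
import numpy as np

rng = np.random.default_rng(8)
theta_true, n, n_reps = 2.0, 25, 2000

mle, bayes = [], []
for _ in range(n_reps):
    m = rng.uniform(0, theta_true, n).max()
    mle.append(m)                                # maximum likelihood estimator (sample maximum)
    bayes.append(m * (n - 1) / (n - 2))          # posterior mean under a flat prior on theta

mle, bayes = np.array(mle), np.array(bayes)
print(f"mean bias, MLE  : {mle.mean() - theta_true:+.4f}")
print(f"mean bias, Bayes: {bayes.mean() - theta_true:+.4f}")
print(f"RMSE, MLE  : {np.sqrt(np.mean((mle - theta_true) ** 2)):.4f}")
print(f"RMSE, Bayes: {np.sqrt(np.mean((bayes - theta_true) ** 2)):.4f}")
```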

20.
When estimating the spot volatility of asset prices with jumps, a threshold filtering method is needed to remove the influence of the jumps. In finite samples, threshold filtering produces both false-filtering bias (diffusive returns wrongly filtered out as jumps) and missed-filtering bias (jumps that escape the filter), which reduces estimation accuracy. The bias from falsely filtered returns can be corrected by compensating for the wrongly removed observations, but because the times at which jumps occur are unknown, the bias from missed jumps cannot be corrected and can only be reduced through the design of the estimator. This paper is the first to propose a spot volatility estimator based on threshold bipower variation, using kernel smoothing to estimate the spot volatility of asset prices nonparametrically, which effectively reduces the bias caused by jump-filtering errors. Using limit theory for random arrays, we prove the consistency and asymptotic normality of the estimator and, building on an analysis of its finite-sample bias, provide a bias-correction method. Monte Carlo simulations show that the proposed estimator has a markedly smaller missed-filtering bias than estimators constructed from quadratic variation and substantially improves the properties of spot volatility estimation. An empirical analysis of the CSI 300 index using Kupiec's dynamic VaR accuracy test shows that the proposed spot volatility estimates better capture the volatility characteristics of asset returns.
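A minimal sketch of a kernel-smoothed, threshold-filtered bipower estimate of the spot variance for a simulated jump-diffusion: returns exceeding a truncation level are discarded as jumps, products of adjacent absolute returns are weighted by a Gaussian kernel centred at the evaluation time, and the usual bipower constant pi/2 rescales the sum. The simulation, threshold constant, and bandwidth are illustrative and do not follow the paper's calibration or bias correction.

```python
# Sketch: kernel-smoothed threshold bipower estimate of spot volatility (illustrative setup).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, sigma_true = 23400, 0.2                  # one "day" of one-second returns
dt = 1.0 / n
t = np.arange(1, n + 1) * dt
returns = sigma_true * np.sqrt(dt) * rng.normal(size=n)
jump_idx = rng.choice(n, size=5, replace=False)
returns[jump_idx] += rng.choice([-1.0, 1.0], 5) * 0.01    # add a few jumps

tau, bandwidth = 0.5, 0.05                  # evaluation time and kernel bandwidth
prelim_sigma = np.median(np.abs(returns)) / (0.6745 * np.sqrt(dt))   # robust preliminary scale
threshold = 3.0 * prelim_sigma * dt ** 0.49                          # truncation level for jumps

keep = np.abs(returns) <= threshold                       # returns not flagged as jumps
prod = np.abs(returns[1:]) * np.abs(returns[:-1]) * (keep[1:] & keep[:-1])
weights = stats.norm.pdf((t[1:] - tau) / bandwidth)       # Gaussian kernel weights around tau

spot_var = (np.pi / 2) * np.sum(weights * prod) / (dt * np.sum(weights))
print(f"estimated spot volatility at tau = {tau}: {np.sqrt(spot_var):.3f}   (true {sigma_true})")
```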
