Similar Literature
20 similar documents found.
1.
We show by counterexample that Proposition 2 in Fernández‐Villaverde, Rubio‐Ramírez, and Santos (Econometrica (2006), 74, 93–119) is false. We also show that even if their Proposition 2 were corrected, it would be irrelevant for parameter estimates. As a more constructive contribution, we consider the effects of approximation error on parameter estimation, and conclude that second‐order approximation errors in the policy function have at most second‐order effects on parameter estimates.

2.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists in constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: If a lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence, the approximation is inaccurate. Our lower‐bound error analysis is complementary to the conventional upper‐error (worst‐case) bound analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess approximation in the first‐ and second‐order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter model under some empirically relevant parameterizations.
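As a hedged illustration of why a gap between two candidate solutions already implies a lower bound on approximation error (a generic triangle-inequality argument, not necessarily the construction used in this paper), suppose two numerical approximations of the same exact solution are compared:

```latex
% Generic lower-bound argument (illustrative only; not the paper's specific construction).
% f : exact (unknown) solution;  \hat f_1, \hat f_2 : two numerical approximations.
\[
\|\hat f_1 - \hat f_2\| \;\le\; \|\hat f_1 - f\| + \|f - \hat f_2\|
\quad\Longrightarrow\quad
\max\bigl(\|\hat f_1 - f\|,\ \|\hat f_2 - f\|\bigr) \;\ge\; \tfrac{1}{2}\,\|\hat f_1 - \hat f_2\|.
\]
```

Whichever approximation is closer to the truth, at least one of them must miss it by half the observed gap, so a large gap certifies inaccuracy.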

3.
Quantile regression (QR) fits a linear model for conditional quantiles just as ordinary least squares (OLS) fits a linear model for conditional means. An attractive feature of OLS is that it gives the minimum mean‐squared error linear approximation to the conditional expectation function even when the linear model is misspecified. Empirical research using quantile regression with discrete covariates suggests that QR may have a similar property, but the exact nature of the linear approximation has remained elusive. In this paper, we show that QR minimizes a weighted mean‐squared error loss function for specification error. The weighting function is an average density of the dependent variable near the true conditional quantile. The weighted least squares interpretation of QR is used to derive an omitted variables bias formula and a partial quantile regression concept, similar to the relationship between partial regression and OLS. We also present asymptotic theory for the QR process under misspecification of the conditional quantile function. The approximation properties of QR are illustrated using wage data from the U.S. census. These results point to major changes in inequality from 1990 to 2000.
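For reference, the population quantile regression problem underlying these results can be written in the standard check-function form (a textbook formulation; the paper's weighted least squares representation is its own contribution and is not reproduced here):

```latex
% Population quantile regression coefficient at quantile \tau (standard check-function form),
% the analogue of OLS, which solves \arg\min_b E[(Y - X'b)^2].
\[
\beta(\tau) \;=\; \arg\min_{b}\; \mathbb{E}\bigl[\rho_\tau\bigl(Y - X'b\bigr)\bigr],
\qquad
\rho_\tau(u) \;=\; u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).
\]
```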

4.
This article introduces a human error analysis (human reliability analysis) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for analyzing emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model and a set of performance-influencing factors (PIFs). For each cognitive function, error-causing factors (ECFs) or error-likely situations are identified by considering the characteristics of the performance of that cognitive function and the mechanism by which the PIFs influence it. A human error analysis procedure based on these error analysis factors is then organized to cue and guide the analyst in conducting the analysis. The method is characterized by the structured identification of weak points in the task to be performed and by an efficient analysis process in which the analyst need only carry out the analysis for the relevant cognitive functions. In application, AGAPE-ET proved useful: it effectively identifies vulnerabilities with respect to cognitive performance as well as task execution, and it helps the analyst draw specific error reduction measures directly from the analysis.

5.
This paper is concerned with accuracy properties of simulations of approximate solutions for stochastic dynamic models. Our analysis rests upon a continuity property of invariant distributions and a generalized law of large numbers. We then show that the statistics generated by any sufficiently good numerical approximation are arbitrarily close to the set of expected values of the model's invariant distributions. Also, under a contractivity condition on the dynamics, we establish error bounds. These results are of further interest for the comparative study of stationary solutions and the estimation of structural dynamic models.

6.
Under high-frequency data conditions, is there a causal relationship, and a leading information effect, between the realized volatility of Chinese ETF prices and their tracking error? Using realized volatility and tracking error calculation methods, the Granger causality testing procedure, and VAR models, this paper investigates the question in depth. The results show that the realized volatility of Chinese ETF prices and each of two tracking error measures have a Granger causal relationship, with the latter Granger-causing the former; that the contemporaneous and first- and second-order lagged correlations between the realized volatility series and the two tracking error series are relatively high, with tracking error lagging realized volatility; and that when an ETF's tracking error receives a shock from external market conditions, it transmits a shock of the same sign to the realized volatility of the ETF price, a shock with a degree of persistence and lag.
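A minimal sketch of the kind of analysis described, computing daily realized volatility from intraday returns and running a bivariate Granger causality test with statsmodels; the synthetic data, the column names, and the two-lag choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=250, freq="D")

# Hypothetical 5-minute ETF returns: 48 intraday observations per trading day.
intraday = rng.normal(0.0, 0.001, size=(250, 48))
rv = pd.Series((intraday ** 2).sum(axis=1), index=days, name="rv")   # daily realized variance
# Hypothetical daily tracking error versus the benchmark index.
te = pd.Series(np.abs(rng.normal(0.0, 0.002, 250)), index=days, name="te")

data = pd.concat([rv, te], axis=1)
# Column order matters: the test asks whether the 2nd column helps predict the 1st,
# i.e., whether tracking error Granger-causes realized volatility.
grangercausalitytests(data[["rv", "te"]], maxlag=2)
```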

7.
This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement.

8.
Conventional spirometry produces measurement error by using repeatability criteria (RC) to discard acceptable data and terminating tests early when RC are met. These practices also implicitly assume that there is no variation across maneuvers within each test. This has implications for air pollution regulations that rely on pulmonary function tests to determine adverse effects or set standards. We perform a Monte Carlo simulation of 20,902 tests of forced expiratory volume in 1 second (FEV1), each with eight maneuvers, for an individual with empirically obtained, plausibly normal pulmonary function. Default coefficients of variation for inter‐ and intratest variability (3% and 6%, respectively) are employed. Measurement error is defined as the difference between results from the conventional protocol and an unconstrained, eight‐maneuver alternative. In the default model, average measurement error is shown to be ~5%. The minimum difference necessary for statistical significance at p < 0.05 for a before/after comparison is shown to be 16%. Meanwhile, the U.S. Environmental Protection Agency has deemed single‐digit percentage decrements in FEV1 sufficient to justify more stringent national ambient air quality standards. Sensitivity analysis reveals that results are insensitive to intertest variability but highly sensitive to intratest variability. Halving the latter to 3% reduces measurement error by 55%. Increasing it to 9% or 12% increases measurement error by 65% or 125%, respectively. Within‐day FEV1 differences ≤5% among normal subjects are believed to be clinically insignificant. Therefore, many differences reported as statistically significant are likely to be artifactual. Reliable data are needed to estimate intratest variability for the general population, subpopulations of interest, and research samples. Sensitive subpopulations (e.g., chronic obstructive pulmonary disease or COPD patients, asthmatics, children) are likely to have higher intratest variability, making it more difficult to derive valid statistical inferences about differences observed after treatment or exposure.
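A stripped-down sketch of this type of simulation, assuming a 4.0 L true FEV1, normally distributed multiplicative inter- and intratest noise, and a simplified stop-early rule (the two best maneuvers within 150 mL after at least three maneuvers); these are illustrative assumptions rather than the article's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(42)
N_TESTS = 20_902          # number of simulated tests, as in the article
N_MANEUVERS = 8
TRUE_FEV1 = 4.0           # liters; illustrative "true" value, not from the article
CV_INTER = 0.03           # intertest (day-to-day) coefficient of variation
CV_INTRA = 0.06           # intratest (maneuver-to-maneuver) coefficient of variation

def simulate_test():
    """One test: a day-level shift plus maneuver-level noise around TRUE_FEV1."""
    day_level = TRUE_FEV1 * (1 + rng.normal(0, CV_INTER))
    return day_level * (1 + rng.normal(0, CV_INTRA, N_MANEUVERS))

def conventional(maneuvers, rc=0.150):
    """Simplified conventional protocol: stop once the two largest maneuvers so far
    agree within the repeatability criterion (150 mL); report the largest so far."""
    for k in range(3, N_MANEUVERS + 1):          # require at least three maneuvers
        best_two = np.sort(maneuvers[:k])[-2:]
        if best_two[1] - best_two[0] <= rc:
            return maneuvers[:k].max()
    return maneuvers.max()

# Measurement error = conventional result vs. the unconstrained eight-maneuver maximum.
errors = []
for _ in range(N_TESTS):
    m = simulate_test()
    errors.append((conventional(m) - m.max()) / m.max())

print(f"mean measurement error: {100 * np.mean(np.abs(errors)):.1f}%")
```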

9.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models mainly dealing with experimental data. It is a common assumption in these types of models (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, comparisons of different experimental data sources often find evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors. The main objectives are to detect the error source with stochastic dominance in the uncertainty propagation and the combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. Also, it was found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
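A minimal sketch of this kind of uncertainty propagation, assuming a toy two-input model, zero-mean random errors, and uniformly distributed but unknown constant calibration biases; the cumulative distributions of the output are then compared with and without the systematic component. All magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def model(x, y):
    """Toy output variable depending on two measured quantities (illustrative only)."""
    return x * np.sqrt(y)

x_true, y_true = 10.0, 4.0

# Random, independent errors only (zero mean).
out_random = model(x_true * (1 + rng.normal(0, 0.02, N)),
                   y_true * (1 + rng.normal(0, 0.05, N)))

# Random errors plus systematic (calibration) biases: constant but unknown,
# so the bias is sampled across realizations to represent its uncertainty.
bias_x = rng.uniform(-0.03, 0.03, N)
bias_y = rng.uniform(-0.04, 0.04, N)
out_both = model(x_true * (1 + bias_x + rng.normal(0, 0.02, N)),
                 y_true * (1 + bias_y + rng.normal(0, 0.05, N)))

# Compare the cumulative probability distributions of the output variable.
for q in (0.05, 0.50, 0.95):
    print(f"q={q:.2f}  random-only={np.quantile(out_random, q):.3f}  "
          f"random+systematic={np.quantile(out_both, q):.3f}")
```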

10.
Cakmak, Sabit; Burnett, Richard T.; Krewski, Daniel. Risk Analysis, 1999, 19(3): 487–496.
The association between daily fluctuations in ambient particulate matter and daily variations in nonaccidental mortality has been extensively investigated. Although it is now widely recognized that such an association exists, the form of the concentration–response model is still in question. Linear no-threshold and linear threshold models have been most commonly examined. In this paper we considered methods to detect and estimate threshold concentrations using time series data of daily mortality rates and air pollution concentrations. Because exposure is measured with error, we also considered the influence of measurement error in distinguishing between these two competing model specifications. The methods were illustrated on a 15-year daily time series of nonaccidental mortality and particulate air pollution data in Toronto, Canada. Nonparametric smoothed representations of the association between mortality and air pollution were adequate to graphically distinguish between these two forms. Weighted nonlinear regression methods for relative risk models were adequate to give nearly unbiased estimates of threshold concentrations even under conditions of extreme exposure measurement error. The uncertainty in the threshold estimates increased with the degree of exposure error. Regression models incorporating threshold concentrations could be clearly distinguished from linear relative risk models in the presence of exposure measurement error. The assumption of a linear model given that a threshold model was the correct form usually resulted in overestimates of the number of averted premature deaths, except for low threshold concentrations and large measurement error.
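The two competing specifications can be written generically as log-linear relative risk models (standard textbook forms; the paper's exact parameterization may differ):

```latex
% Linear (no-threshold) vs. linear threshold relative-risk models for exposure x:
\[
\log RR_{\text{linear}}(x) \;=\; \beta x,
\qquad
\log RR_{\text{threshold}}(x) \;=\; \beta\,(x - c)_{+} \;=\; \beta \max(0,\, x - c),
\]
% where c is the threshold concentration to be detected and estimated.
```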

11.
A positive association between rework and safety events that arise during the construction process has been identified. In-depth semi-structured interviews with operational and project-related employees from an Australian construction organisation were undertaken to determine the precursors to rework and safety events. The analysis enabled the precursors of error to be examined under the auspices of: (1) People, (2) Organisation, and (3) Project. It is revealed that the precursors to error for rework and safety incidents were similar. A conceptual framework to simultaneously reduce rework and safety incidents is proposed. It is acknowledged that there is no panacea that can prevent rework from occurring, but the findings presented indicate that a shift from ‘preventing’ to ‘managing’ errors is required to enable learning to become an embedded feature of an organisation’s culture. As a consequence, this will contribute to productivity and performance improvements being realised.

12.
For comprehensive evaluation problems that simultaneously involve qualitative indicators, quantitative indicators that can be linearly compensated, and quantitative indicators that cannot be linearly compensated, this paper proposes a novel error-elimination decision method. Building on error-elimination concepts such as error, error value, and error function, it introduces the concepts of limit loss value, important indicator, key indicator, and redundant indicator, and gives error functions of different forms. The error loss value is obtained from the error value and the limit error loss value, and the comprehensive evaluation is carried out on this basis. An applied study evaluating construction schemes for a wastewater treatment plant shows that the error-based comprehensive evaluation method can effectively quantify qualitative indicators and can simultaneously handle quantitative indicators with and without linear compensability.

13.
It is well known that the multiple knapsack problem is NP-hard and does not admit an FPTAS even for the case of two identical knapsacks, whereas the 0-1 knapsack problem with a single knapsack has been intensively studied and effective exact and approximation algorithms exist for it. A natural approach for the multiple knapsack problem is therefore to pack the knapsacks successively using an effective algorithm for the 0-1 knapsack problem. This paper considers such an approximation algorithm, which packs the knapsacks in nondecreasing order of their capacities. We analyze this algorithm for the 2- and 3-knapsack problems by worst-case analysis and give their error bounds.
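A compact sketch of the successive-packing heuristic analyzed here: knapsacks are processed in nondecreasing order of capacity, each being filled by an exact 0-1 knapsack solver over the remaining items. The integer-capacity dynamic program and the example data are illustrative assumptions.

```python
def knapsack_01(items, capacity):
    """Exact 0-1 knapsack by dynamic programming over integer capacities.
    items: list of (profit, weight); returns (best_profit, chosen_indices)."""
    dp = [(0, [])] * (capacity + 1)
    for i, (p, w) in enumerate(items):
        if w > capacity:
            continue
        for c in range(capacity, w - 1, -1):
            cand = dp[c - w][0] + p
            if cand > dp[c][0]:
                dp[c] = (cand, dp[c - w][1] + [i])
    return dp[capacity]

def successive_packing(items, capacities):
    """Pack knapsacks one by one in nondecreasing order of capacity,
    each time solving a 0-1 knapsack over the items not yet packed."""
    remaining = list(enumerate(items))          # (original_index, (profit, weight))
    assignment = {}
    for k in sorted(range(len(capacities)), key=lambda j: capacities[j]):
        pool = [it for _, it in remaining]
        _, chosen = knapsack_01(pool, capacities[k])
        assignment[k] = [remaining[c][0] for c in chosen]
        remaining = [r for idx, r in enumerate(remaining) if idx not in set(chosen)]
    return assignment

# Example: two knapsacks and (profit, weight) items.
items = [(10, 5), (7, 4), (6, 3), (4, 2), (3, 2)]
print(successive_packing(items, capacities=[5, 7]))
```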

14.
Linear models of fund manager compensation: a study and empirical analysis
This paper examines four linear models of the deviation between a tracking portfolio's return and a target return. Linear deviation measures describe investors' risk attitudes more accurately than the traditional quadratic deviation, and linear programming yields explicit optimization schemes. An empirical analysis and comparison is carried out on classified portfolios of Shanghai A-share securities, producing various optimization models by which investors can determine portfolios, and fund managers' compensation, for different investment objectives.
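As a hedged sketch of one such linear formulation (not necessarily one of the paper's four models): choose portfolio weights to minimize the mean absolute deviation between the tracking portfolio's return and the target return, which becomes a linear program after introducing auxiliary deviation variables. The synthetic data, the long-only constraint, and the full-investment constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
T, n = 120, 6                        # periods of data, number of candidate assets (illustrative)
R = rng.normal(0.008, 0.05, (T, n))  # asset returns (hypothetical)
g = rng.normal(0.009, 0.04, T)       # target/benchmark returns (hypothetical)

# Decision variables x = [w_1..w_n, d_1..d_T]; minimize the mean absolute deviation sum(d)/T.
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])

# |R w - g| <= d   <=>   R w - d <= g   and   -R w - d <= -g
I = np.eye(T)
A_ub = np.block([[R, -I], [-R, -I]])
b_ub = np.concatenate([g, -g])

# Fully invested, long-only portfolio (an illustrative constraint set).
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0.0, 1.0)] * n + [(0.0, None)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("tracking weights:", np.round(res.x[:n], 3), " mean abs deviation:", res.fun)
```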

15.
Starting from the distribution theory of measurement error, this paper incorporates the impact of microstructure noise into the variance of the measurement error and constructs a HARQ-N model that accounts for microstructure noise. Using Monte Carlo simulation and high-frequency data from the Chinese stock market, the estimation and forecasting performance of the HAR, HARQ, HARQ-N, and HAR-RV-N-CJ models are compared. The results show that the coefficients on the measurement-error correction terms in the HARQ and HARQ-N models are statistically significantly negative, and that the coefficient in the HARQ-N model is much larger in magnitude than in the HARQ model, attenuating the impact of contemporaneous microstructure noise and measurement error to a greater extent. Moreover, both the in-sample and out-of-sample forecasts of the HARQ-N model, which accounts for microstructure noise and measurement error, are statistically significantly better than those of the HAR, HARQ, and HAR-RV-N-CJ models.
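For context, the standard HARQ regression that the HARQ-N model extends can be written as follows (the usual form in the realized volatility literature; the paper's noise-augmented HARQ-N specification is not reproduced here):

```latex
% Standard HARQ regression for daily realized variance RV_t, with realized quarticity RQ_t
% entering through the measurement-error correction term:
\[
RV_{t+1} \;=\; \beta_0
\;+\; \bigl(\beta_1 + \beta_{1Q}\sqrt{RQ_t}\,\bigr) RV_t
\;+\; \beta_2\, \overline{RV}_{t-4:t}
\;+\; \beta_3\, \overline{RV}_{t-21:t}
\;+\; u_{t+1},
\]
% where \overline{RV}_{t-4:t} and \overline{RV}_{t-21:t} are weekly and monthly averages of RV,
% and a negative \beta_{1Q} downweights noisy (high-RQ) daily observations.
```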

16.
In stock markets, accurate prediction of stock returns is an important concern shared by all market participants. Because the factors affecting the stock market are highly complex, it is very difficult to improve prediction accuracy with a single stock return prediction model alone. This paper discusses the shortcomings of current stock return prediction methods and proposes a new approach that uses error correction to improve prediction accuracy. First, a grey neural network model is built on the training sample to produce a preliminary prediction of stock returns; second, an EGARCH model is introduced to mine and analyze the internal information of the prediction error series and to forecast its subsequent points; finally, the error forecasts are used to correct the initial stock return predictions. Using data on the Shanghai Composite Index as an example, the results show that prediction accuracy after correction improves by 9.3% compared with the uncorrected predictions, indicating that the EGARCH-based error correction procedure is effective and verifying the feasibility of the method.
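A minimal sketch of the error-correction stage only, assuming the arch package, an AR(1) mean with an EGARCH(1,1) variance for the prediction-error series, and synthetic inputs; the grey neural network stage and the paper's exact specification are not reproduced.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(3)

# Hypothetical inputs: preliminary return predictions and realized returns, from which the
# historical prediction-error series is built (the grey neural network stage is omitted).
prelim_pred = pd.Series(rng.normal(0.0005, 0.01, 500))
realized = prelim_pred + rng.normal(0.0, 0.008, 500)
errors = (realized - prelim_pred) * 100          # percent scale for numerical stability

# AR(1) mean with EGARCH(1,1) variance fitted to the prediction-error series.
model = arch_model(errors, mean="AR", lags=1, vol="EGARCH", p=1, o=1, q=1)
res = model.fit(disp="off")

# One-step-ahead forecast of the error, added back to the preliminary prediction
# (here the last preliminary prediction stands in for the next-period one).
fc = res.forecast(horizon=1)
next_error = fc.mean.iloc[-1, 0] / 100           # undo the percent scaling
corrected_next_pred = prelim_pred.iloc[-1] + next_error
print("forecast error correction:", next_error, "->", corrected_next_pred)
```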

17.
The appearance of measurement error in exposure and risk factor data potentially affects any inferences regarding variability and uncertainty because the distribution representing the observed data set deviates from the distribution that represents an error-free data set. A methodology for improving the characterization of variability and uncertainty with known measurement errors in data is demonstrated in this article based on an observed data set, known measurement error, and a measurement-error model. A practical method for constructing an error-free data set is presented and a numerical method based upon bootstrap pairs, incorporating two-dimensional Monte Carlo simulation, is introduced to address uncertainty arising from measurement error in selected statistics. When measurement error is a large source of uncertainty, substantial differences between the distribution representing variability of the observed data set and the distribution representing variability of the error-free data set will occur. Furthermore, the shape and range of the probability bands for uncertainty differ between the observed and error-free data set. Failure to separately characterize contributions from random sampling error and measurement error will lead to bias in the variability and uncertainty estimates. However, a key finding is that total uncertainty in mean can be properly quantified even if measurement and random sampling errors cannot be separated. An empirical case study is used to illustrate the application of the methodology.
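A greatly simplified sketch in the same spirit, assuming additive normal measurement error with known standard deviation: a moment-matching shrinkage toward the sample mean strips the known measurement-error variance to approximate an error-free data set, and a bootstrap over the observed data captures random sampling uncertainty. This is a stand-in under stated assumptions, not the article's two-dimensional algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical observed exposure data: true values contaminated by additive measurement
# error with known standard deviation (all values illustrative).
n, ME_SD = 200, 0.4
true_values = rng.lognormal(mean=1.0, sigma=0.5, size=n)
observed = true_values + rng.normal(0.0, ME_SD, n)

def error_free_approx(x, me_sd):
    """Crude 'error-free' reconstruction: shrink deviations from the mean so the sample
    variance is reduced by the known measurement-error variance (a simple stand-in,
    not the article's actual construction)."""
    s2 = x.var(ddof=1)
    shrink = np.sqrt(max(s2 - me_sd ** 2, 0.0) / s2)
    return x.mean() + shrink * (x - x.mean())

B = 2000
boot_means, boot_p95 = np.empty(B), np.empty(B)
for b in range(B):
    sample = rng.choice(observed, size=n, replace=True)    # random sampling uncertainty
    ef = error_free_approx(sample, ME_SD)                   # remove measurement-error variance
    boot_means[b] = ef.mean()
    boot_p95[b] = np.percentile(ef, 95)

for name, stat in [("mean", boot_means), ("95th percentile", boot_p95)]:
    lo, hi = np.percentile(stat, [2.5, 97.5])
    print(f"{name}: {stat.mean():.3f}  (95% uncertainty interval {lo:.3f} to {hi:.3f})")
```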

18.
Yu, Fan-Jang; Hwang, Sheue-Ling; Huang, Yu-Hao. Risk Analysis, 1999, 19(3): 401–415.
In the design, development, and manufacturing stages of industrial products, engineers usually focus on problems caused by hardware or software, but pay less attention to problems caused by human error, which may significantly affect system reliability and safety. Even when operating procedures are strictly followed, human error may still occur occasionally. Among the influencing factors, the inappropriate design of the standard operation procedure (SOP) or standard assembly procedure (SAP) is an important and latent cause of unexpected results during human operation. To reduce the error probability and the error effects of these unexpected behaviors in the industrial work process, overall evaluation of SOP or SAP quality has become an essential task. The human error criticality analysis (HECA) method was developed to identify the potentially critical problems caused by human error in the human operation system. This method performs task analysis on the basis of the operation procedure (e.g., the SOP), analyzes the human error probability (HEP) for each human operation step, and assesses its error effects on the whole system. The results of the analysis show the interrelationships between critical human tasks, critical human error modes, and human reliability information of the human operation system. To assess the robustness of the model, a case study of initiator assembly tasks was conducted. The results show that the HECA method is practicable for evaluating operation procedures, and that the information is valuable in identifying means to upgrade human reliability and system safety for human tasks.

19.
As an imperative channel for fast information propagation, online social networks (OSNs) also have their defects. One of them is information leakage, i.e., information can spread via OSNs to users with whom we are not willing to share it. The problem of constructing a circle of trust, so as to share information with as many friends as possible without further spreading it to unwanted targets, has thus become a challenging research topic that has remained open. Our work is the first attempt to study the Maximum Circle of Trust problem, which seeks to share information with the maximum expected number of the poster's friends while keeping the spread of information to unwanted targets to a minimum. First, we consider a special and more practical case with two-hop information propagation and a single unwanted target. In this case, we show that the problem is NP-hard, which rules out an exact polynomial-time algorithm. We thus propose a Fully Polynomial-Time Approximation Scheme (FPTAS), which can accommodate any allowable performance error bound and runs in time polynomial in both the input size and the inverse of the allowed error. An FPTAS is the best approximation guarantee one can hope for on an NP-hard problem. We next consider the case in which the number of unwanted targets is bounded and prove that no FPTAS exists in this case. Instead, we design a Polynomial-Time Approximation Scheme (PTAS) in which the allowable error can also be controlled. When the number of unwanted targets is not bounded, we provide a randomized algorithm, along with an analytical theoretical bound and an inapproximability result. Finally, we consider the general case with multi-hop information propagation, show its #P-hardness, and propose an effective Iterative Circle of Trust Detection (ICTD) algorithm based on a novel greedy function. Extensive experiments on various real-world OSNs have validated the effectiveness of our proposed approximation and ICTD algorithms. These experiments also highlight several important observations on information leakage that help to sharpen the security of OSNs in the future.

20.
A weak Fisher effect and nominal interest rate stickiness are preconditions for monetary policy to be effective. This paper uses a Fourier transform to handle the time variation of the real interest rate, extends the cointegration model to examine the long-run Fisher effect, and builds a threshold error correction model to distinguish the long-run from the short-run Fisher effect and to characterize the short-run dynamic adjustment of the nominal interest rate. Based on monthly data for China from January 1990 to December 2017, the study finds that: (1) there is a long-run weak Fisher effect between China's nominal interest rate and inflation; and (2) the short-run dynamic adjustment of the nominal interest rate exhibits a significant double-threshold effect, with significant and rapid adjustment when the nominal rate is excessively above its equilibrium value, but no significant adjustment when the nominal rate is below equilibrium or in the middle regime, i.e., the nominal interest rate is sticky. The results suggest that quantity-based monetary policy remains effective in China at the current stage, so there is room to use quantity-based and price-based monetary policy in combination.
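A generic three-regime (double-threshold) error correction specification of the kind described, written as an illustrative sketch rather than the paper's exact model:

```latex
% z_{t-1} = i_{t-1} - \beta \pi_{t-1} - \mu : deviation from the long-run Fisher relation
% \gamma_1 < \gamma_2 : the two thresholds defining the lower, middle, and upper regimes
\[
\Delta i_t \;=\;
\begin{cases}
\alpha_L\, z_{t-1} + \sum_{k=1}^{p}\phi_{L,k}\,\Delta i_{t-k} + \varepsilon_t, & z_{t-1} \le \gamma_1,\\[4pt]
\alpha_M\, z_{t-1} + \sum_{k=1}^{p}\phi_{M,k}\,\Delta i_{t-k} + \varepsilon_t, & \gamma_1 < z_{t-1} \le \gamma_2,\\[4pt]
\alpha_U\, z_{t-1} + \sum_{k=1}^{p}\phi_{U,k}\,\Delta i_{t-k} + \varepsilon_t, & z_{t-1} > \gamma_2.
\end{cases}
\]
% Stickiness corresponds to \alpha_L \approx \alpha_M \approx 0, with only \alpha_U
% significantly negative (adjustment only when the nominal rate is far above equilibrium).
```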
