Similar Articles (20 results)
1.
This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities (models defined by moment equalities can also be admitted by combining pairs of weak moment inequalities), which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability (the objective of covering each element of the identified set with a prespecified probability is dealt with in Bugni (2010a)). We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.
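As a rough illustration of this kind of procedure, the sketch below builds a bootstrap-based confidence region for a scalar parameter defined by two moment inequalities. The model, the criterion function, and the simple recentered bootstrap are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized setting (an assumption for illustration): theta is partially
# identified by two moment inequalities, E[X1] - theta <= 0 and theta - E[X2] <= 0,
# so the identified set is the interval [E[X1], E[X2]].
n = 500
X = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n)])

def moments(X, theta):
    """Sample moment inequalities m_j(theta); violations are positive."""
    return np.column_stack([X[:, 0] - theta, theta - X[:, 1]])

def criterion(mbar, sd, n):
    """Sum of squared positive parts of studentized sample moments."""
    t = np.sqrt(n) * mbar / sd
    return np.sum(np.maximum(t, 0.0) ** 2)

def bootstrap_cv(X, theta, B=499, alpha=0.05):
    """Bootstrap critical value using recentered moments (stylized)."""
    n = X.shape[0]
    m = moments(X, theta)
    mbar = m.mean(axis=0)
    stats = []
    for _ in range(B):
        mb = m[rng.integers(0, n, n)]
        # recenter so the bootstrap mimics the null distribution
        stats.append(criterion(mb.mean(axis=0) - mbar, mb.std(axis=0, ddof=1), n))
    return np.quantile(stats, 1 - alpha)

# Confidence region: all theta whose criterion is below the bootstrap cutoff
grid = np.linspace(-1.0, 2.0, 61)
accepted = []
for theta in grid:
    m = moments(X, theta)
    T = criterion(m.mean(axis=0), m.std(axis=0, ddof=1), X.shape[0])
    if T <= bootstrap_cv(X, theta):
        accepted.append(theta)
print("confidence region approx. [%.2f, %.2f]" % (min(accepted), max(accepted)))
```

With this design the reported interval should roughly cover the identified set [0, 1], slightly enlarged by sampling noise.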

2.
Important estimation problems in econometrics like estimating the value of a spectral density at frequency zero, which appears in the econometrics literature in the guises of heteroskedasticity and autocorrelation consistent variance estimation and long run variance estimation, are shown to be “ill‐posed” estimation problems. A prototypical result obtained in the paper is that the minimax risk for estimating the value of the spectral density at frequency zero is infinite regardless of sample size, and that confidence sets are close to being uninformative. In this result the maximum risk is over commonly used specifications for the set of feasible data generating processes. The consequences for inference on unit roots and cointegration are discussed. Similar results for persistence estimation and estimation of the long memory parameter are given. All these results are obtained as special cases of a more general theory developed for abstract estimation problems, which also readily allows for the treatment of other ill‐posed estimation problems such as nonparametric regression or density estimation.
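The object discussed above, the spectral density at frequency zero (equivalently, up to scale, the long‐run variance), is commonly estimated with a Bartlett/Newey–West kernel estimator. The sketch below shows that standard estimator on simulated AR(1) data; the bandwidth rule of thumb and the data‐generating process are assumptions for illustration, not part of the paper.

```python
import numpy as np

def long_run_variance(x, L=None):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance,
    i.e. 2*pi times the spectral density at frequency zero."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if L is None:
        L = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))  # a common rule of thumb
    xc = x - x.mean()
    gamma0 = xc @ xc / n
    lrv = gamma0
    for j in range(1, L + 1):
        gamma_j = xc[j:] @ xc[:-j] / n
        lrv += 2.0 * (1.0 - j / (L + 1.0)) * gamma_j
    return lrv

# AR(1) example: with unit-variance innovations the true long-run variance
# is 1 / (1 - rho)^2
rng = np.random.default_rng(1)
rho, n = 0.7, 2000
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + e[t]
print(long_run_variance(y), 1.0 / (1 - rho) ** 2)
```

The estimate's sensitivity to the truncation lag L is itself a small hint of the difficulty the abstract formalizes.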

3.
We examine challenges to estimation and inference when the objects of interest are nondifferentiable functionals of the underlying data distribution. This situation arises in a number of applications of bounds analysis and moment inequality models, and in recent work on estimating optimal dynamic treatment regimes. Drawing on earlier work relating differentiability to the existence of unbiased and regular estimators, we show that if the target object is not differentiable in the parameters of the data distribution, there exist no estimator sequences that are locally asymptotically unbiased or α‐quantile unbiased. This places strong limits on estimators, bias correction methods, and inference procedures, and provides motivation for considering other criteria for evaluating estimators and inference procedures, such as local asymptotic minimaxity and one‐sided quantile unbiasedness.
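A minimal simulation of the phenomenon described above: the functional max(μ1, μ2) is not differentiable where the two means coincide, and the plug‐in estimator is biased exactly there. The Gaussian design and sample size are illustrative assumptions.

```python
import numpy as np

# Tiny illustration (not the paper's framework): max(mu1, mu2) has a kink at
# mu1 = mu2, and max(xbar1, xbar2) is biased upward at that point.
rng = np.random.default_rng(2)
n, reps = 100, 20000
mu1 = mu2 = 0.0                       # kink point of the functional
est = np.empty(reps)
for r in range(reps):
    x1 = rng.normal(mu1, 1.0, n)
    x2 = rng.normal(mu2, 1.0, n)
    est[r] = max(x1.mean(), x2.mean())
print("true value:", max(mu1, mu2))
print("mean of plug-in estimator:", est.mean())   # positive, roughly 1/sqrt(pi*n)
```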

4.
A Survey of Approaches for Assessing and Managing the Risk of Extremes
In this paper, we review methods for assessing and managing the risk of extreme events, where extreme events are defined to be rare, severe, and outside the normal range of experience of the system in question. First, we discuss several systematic approaches for identifying possible extreme events. We then discuss some issues related to risk assessment of extreme events, including what type of output is needed (e.g., a single probability vs. a probability distribution), and alternatives to the probabilistic approach. Next, we present a number of probabilistic methods. These include: guidelines for eliciting informative probability distributions from experts; maximum entropy distributions; extreme value theory; other approaches for constructing prior distributions (such as reference or noninformative priors); the use of modeling and decomposition to estimate the probability (or distribution) of interest; and bounding methods. Finally, we briefly discuss several approaches for managing the risk of extreme events, and conclude with recommendations and directions for future research.
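One of the methods listed above, maximum entropy distributions, can be illustrated in a few lines: with only a mean constraint on a nonnegative quantity, the maximum‐entropy choice is the exponential distribution, so a single elicited mean already implies tail probabilities. The elicited value below is hypothetical.

```python
from scipy.stats import expon

# With only a mean constraint on a nonnegative quantity, the maximum-entropy
# distribution is exponential with that mean (elicited value is hypothetical).
elicited_mean = 2.0
maxent = expon(scale=elicited_mean)
for x in (5.0, 10.0, 20.0):
    print(f"P(X > {x:.0f}) = {maxent.sf(x):.4f}")
```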

5.
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden, we introduce homogenization factors to the model and, as an alternative to a subjective prior, develop an empirical method based on the method of moments. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well‐known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates.
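A much-simplified sketch of the ingredients mentioned above: a gamma prior for a Poisson event rate whose parameters are set by the method of moments from historical rates, followed by a conjugate update. It ignores the correlation structure and the Bayes linear Bayes machinery of the article; the counts and exposures are hypothetical.

```python
import numpy as np

# Gamma-Poisson sketch with a method-of-moments "empirical" prior
# (independent gamma priors only; correlation across users is ignored here).
hist_counts = np.array([4, 7, 5, 6, 9, 3], dtype=float)   # hypothetical historical counts
hist_exposure = 10.0                                       # observation time per user

rates = hist_counts / hist_exposure
m, v = rates.mean(), rates.var(ddof=1)
# Method of moments for a gamma(a, b) prior on the rate: mean = a/b, var = a/b^2
b0 = m / v
a0 = m * b0

# Conjugate update for a new user with x events over exposure t
x, t = 2, 4.0
a1, b1 = a0 + x, b0 + t
print(f"prior mean rate {a0 / b0:.3f}, posterior mean rate {a1 / b1:.3f}")
```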

6.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with the censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
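The sketch below follows the general recipe described above on synthetic data: empirical bootstrap resampling of a singly censored sample, censored-data MLE of a lognormal fit to each resample, and a resulting sampling distribution for the mean. The detection limit, sample size, and lognormal choice are illustrative assumptions, not the emission-factor data.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
DL = 0.5
true = rng.lognormal(mean=0.0, sigma=1.0, size=40)
observed = np.where(true < DL, np.nan, true)       # NaN marks "below detection limit"

def lognormal_censored_nll(params, obs, dl):
    """Negative log likelihood with left-censoring at the detection limit."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    detected = obs[~np.isnan(obs)]
    n_cens = np.isnan(obs).sum()
    ll = stats.lognorm.logpdf(detected, s=sigma, scale=np.exp(mu)).sum()
    ll += n_cens * stats.lognorm.logcdf(dl, s=sigma, scale=np.exp(mu))
    return -ll

def fit(obs, dl):
    res = optimize.minimize(lognormal_censored_nll, x0=[0.0, 1.0],
                            args=(obs, dl), method="Nelder-Mead")
    return res.x

# Empirical bootstrap: resample the censored sample, refit by MLE each time,
# and collect the implied means to form a sampling distribution.
boot_means = []
for _ in range(200):
    sample = rng.choice(observed, size=observed.size, replace=True)
    mu, sigma = fit(sample, DL)
    boot_means.append(np.exp(mu + 0.5 * sigma ** 2))   # lognormal mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap confidence interval for the mean: ({lo:.2f}, {hi:.2f})")
```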

7.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and vicinities of unity that are wider than O(n^{−1/3}). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.

8.
If the food sector is attacked, the likely agents will be chemical, biological, or radionuclear (CBRN). We compiled a database of international terrorist/criminal activity involving such agents. Based on these data, we calculate the likelihood of a catastrophic event using extreme value methods. At present, the probability of an event leading to 5,000 casualties (fatalities and injuries) is between 0.1 and 0.3. However, pronounced, nonstationary patterns within our data suggest that the "reoccurrence period" for such attacks is decreasing every year. Similarly, disturbing trends are evident in a broader data set, which is nonspecific as to the methods or means of attack. While at present the likelihood of CBRN events is quite low, given an attack, the probability that it involves CBRN agents increases with the number of casualties. This is consistent with evidence of "heavy tails" in the distribution of casualties arising from CBRN events.
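A peaks-over-threshold sketch of the extreme value idea referred to above, using scipy's generalized Pareto fit on synthetic heavy-tailed "event" data (not the terrorism database analyzed in the article).

```python
import numpy as np
from scipy.stats import genpareto

# Fit a generalized Pareto distribution to exceedances over a high threshold
# and extrapolate the tail; the casualty data below are synthetic placeholders.
rng = np.random.default_rng(5)
casualties = rng.pareto(1.5, size=300) * 10.0      # heavy-tailed synthetic "events"

u = np.quantile(casualties, 0.90)                  # threshold
exceed = casualties[casualties > u] - u
xi, loc, beta = genpareto.fit(exceed, floc=0.0)    # shape xi > 0 indicates a heavy tail

# P(casualties > x) ~= P(exceed threshold) * GPD survival beyond the threshold
p_u = (casualties > u).mean()
x = 5000.0
p_x = p_u * genpareto.sf(x - u, xi, loc=0.0, scale=beta)
print(f"threshold {u:.1f}, shape {xi:.2f}, P(casualties > {x:.0f}) ~= {p_x:.2e}")
```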

9.
We consider nonparametric estimation of a regression function that is identified by requiring a specified quantile of the regression “error” conditional on an instrumental variable to be zero. The resulting estimating equation is a nonlinear integral equation of the first kind, which generates an ill‐posed inverse problem. The integral operator and distribution of the instrumental variable are unknown and must be estimated nonparametrically. We show that the estimator is mean‐square consistent, derive its rate of convergence in probability, and give conditions under which this rate is optimal in a minimax sense. The results of Monte Carlo experiments show that the estimator behaves well in finite samples.

10.
Differences in the conceptual frameworks of scientists and nonscientists may create barriers to risk communication. This article examines two such conceptual problems. First, the logic of "direct inference" from group statistics to probabilities about specific individuals suggests that individuals might be acting rationally in refusing to apply to themselves the conclusions of regulatory risk assessments. Second, while regulators and risk assessment scientists often use an "objectivist" or "relative frequency" interpretation of probability statements, members of the public are more likely to adopt a "subjectivist" or "degree of confidence" interpretation when estimating their personal risks, and either misunderstand or significantly discount the relevance of risk assessment conclusions. If these analyses of inference and probability are correct, there may be a conceptual gulf at the center of risk communication that cannot be bridged by additional data about the magnitude of group risk. Suggestions are made for empirical studies that might help regulators deal with this conceptual gulf.

11.
We show by example that empirical likelihood and other commonly used tests for moment restrictions are unable to control the (exponential) rate at which the probability of a Type I error tends to zero unless the possible distributions for the observed data are restricted appropriately. From this, it follows that for the optimality claim for empirical likelihood in Kitamura (2001) to hold, additional assumptions and qualifications are required. Under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for most alternatives. This result further implies that empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives among a class of tests that satisfy a weaker criterion for their Type I error probabilities.
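To make the object concrete, the sketch below computes the empirical likelihood ratio for the simplest moment restriction E[X] = μ0, solving for the Lagrange multiplier numerically. This is the textbook special case, not the large-deviation analysis of the paper.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu0):
    """Log empirical likelihood ratio for the restriction E[X] = mu0."""
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:        # mu0 outside the convex hull of the data
        return -np.inf
    n = z.size
    # Lagrange multiplier solves sum z_i / (1 + lam*z_i) = 0 with all weights positive,
    # which restricts lam to the open interval (-1/max(z), -1/min(z)).
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    w = 1.0 / (n * (1.0 + lam * z))
    return np.sum(np.log(n * w))            # -2 times this is approximately chi2(1)

rng = np.random.default_rng(6)
x = rng.normal(0.3, 1.0, 200)
for mu0 in (0.0, 0.3):
    print(mu0, -2.0 * el_log_ratio(x, mu0))
```

At the true mean the statistic is small; at μ0 = 0 it is large, matching the usual chi-squared calibration.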

12.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
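A compressed sketch of the key ingredient highlighted above, orthogonal (partialled-out) moment conditions with cross-fitting, for a linear effect with many controls and lasso first stages. It omits endogeneity, function-valued outcomes, and the honest-band machinery; the data-generating process is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
n, p = 500, 200
X = rng.normal(size=(n, p))
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)        # treatment/policy variable
y = 1.0 * d + X[:, 0] - X[:, 2] + rng.normal(size=n)    # outcome, true effect = 1

# Cross-fitting: residualize y and d on the controls with lasso, out of fold
y_res = np.empty(n)
d_res = np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    my = LassoCV(cv=5).fit(X[train], y[train])           # E[y | X]
    md = LassoCV(cv=5).fit(X[train], d[train])           # E[d | X]
    y_res[test] = y[test] - my.predict(X[test])
    d_res[test] = d[test] - md.predict(X[test])

# Orthogonal moment estimator: regress residualized outcome on residualized treatment
theta = (d_res @ y_res) / (d_res @ d_res)
eps = y_res - theta * d_res
se = np.sqrt(np.mean(d_res ** 2 * eps ** 2) / n) / np.mean(d_res ** 2)
print(f"estimate {theta:.3f}, 95% CI ({theta - 1.96 * se:.3f}, {theta + 1.96 * se:.3f})")
```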

13.
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed.
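A minimal sketch of fuzzy input likelihoods in a gate calculation: triangular fuzzy probabilities are propagated through AND/OR gates by interval arithmetic on alpha-cuts. Independence is assumed here, so the dependency-coefficient treatment of the article is not reproduced; the basic events and numbers are hypothetical.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    lo, m, hi = tri
    return lo + alpha * (m - lo), hi - alpha * (hi - m)

def and_gate(cuts):
    """Product of intervals: all (independent) input events must occur."""
    return (np.prod([c[0] for c in cuts]), np.prod([c[1] for c in cuts]))

def or_gate(cuts):
    """1 - product(1 - p): at least one (independent) input event occurs."""
    return (1.0 - np.prod([1.0 - c[0] for c in cuts]),
            1.0 - np.prod([1.0 - c[1] for c in cuts]))

# Hypothetical basic-event likelihoods as (low, most likely, high)
pump_fails = (0.01, 0.02, 0.05)
valve_fails = (0.001, 0.005, 0.01)

for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(pump_fails, alpha), alpha_cut(valve_fails, alpha)]
    print(alpha, "AND:", and_gate(cuts), "OR:", or_gate(cuts))
```

At alpha = 1 the intervals collapse to the most likely values, so the crisp calculation is recovered as a special case.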

14.
Matching estimators are widely used in empirical economics for the evaluation of programs or treatments. Researchers using matching methods often apply the bootstrap to calculate the standard errors. However, no formal justification has been provided for the use of the bootstrap in this setting. In this article, we show that the standard bootstrap is, in general, not valid for matching estimators, even in the simple case with a single continuous covariate where the estimator is root‐N consistent and asymptotically normally distributed with zero asymptotic bias. Valid inferential methods in this setting are the analytic asymptotic variance estimator of Abadie and Imbens (2006a) as well as certain modifications of the standard bootstrap, like the subsampling methods in Politis and Romano (1994).
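The sketch below pairs a one-nearest-neighbor matching estimator (single continuous covariate) with subsampling, one of the valid alternatives mentioned above, rather than the standard bootstrap. The data-generating process and subsample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def att_match(x, d, y):
    """ATT by matching each treated unit to its nearest control on x."""
    xt, yt = x[d == 1], y[d == 1]
    xc, yc = x[d == 0], y[d == 0]
    idx = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)
    return np.mean(yt - yc[idx])

n = 400
x = rng.normal(size=n)
d = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-x))).astype(int)
y = 0.5 * d + x + rng.normal(size=n)          # true effect 0.5

theta = att_match(x, d, y)

# Subsampling: recompute on without-replacement subsamples of size b << n and
# rescale by sqrt(b) to approximate the distribution of sqrt(n)*(theta_hat - theta).
b, reps = 80, 500
subs = np.array([att_match(*(a[rng.choice(n, size=b, replace=False)] for a in (x, d, y)))
                 for _ in range(reps)])
q_lo, q_hi = np.quantile(np.sqrt(b) * (subs - theta), [0.025, 0.975])
ci = (theta - q_hi / np.sqrt(n), theta - q_lo / np.sqrt(n))
print(f"ATT estimate {theta:.3f}, subsampling 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```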

15.
The independence number of a graph and its chromatic number are known to be hard to approximate. Due to recent complexity results, unless coRP = NP, there is no polynomial time algorithm which approximates any of these quantities within a factor of n^{1−ε} for graphs on n vertices. We show that the situation is significantly better for the average case. For every edge probability p = p(n) in the range n^{−1/2+ε} ≤ p ≤ 3/4, we present an approximation algorithm for the independence number of graphs on n vertices, whose approximation ratio is O((np)^{1/2}/log n) and whose expected running time over the probability space G(n, p) is polynomial. An algorithm with similar features is also described for the chromatic number. A key ingredient in the analysis of both algorithms is a new large deviation inequality for eigenvalues of random matrices, obtained through an application of Talagrand's inequality.
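For orientation only, the sketch below runs a simple minimum-degree greedy heuristic for an independent set in G(n, p); it is not the eigenvalue-based algorithm of the abstract, just a baseline (using networkx) that makes the quantity being approximated concrete.

```python
import networkx as nx
import numpy as np

def greedy_independent_set(G):
    """Greedy baseline: repeatedly take a minimum-degree vertex and delete its neighborhood."""
    G = G.copy()
    S = []
    while G.number_of_nodes() > 0:
        v = min(G.nodes, key=G.degree)
        S.append(v)
        G.remove_nodes_from(list(G.neighbors(v)) + [v])
    return S

n, p = 2000, 0.05
G = nx.gnp_random_graph(n, p, seed=0)
S = greedy_independent_set(G)
# In this regime the independence number of G(n, p) is roughly 2*ln(np)/p,
# and greedy typically recovers about half of it.
print(len(S), 2 * np.log(n * p) / p)
```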

16.
We give a theoretical answer to a natural question arising from a few years of computational experiments on the problem of sorting a permutation by the minimum number of reversals, which has relevant applications in computational molecular biology. The experiments carried out on the problem showed that the so-called alternating-cycle lower bound is equal to the optimal solution value in almost all cases, and this is the main reason why the state-of-the-art algorithms for the problem are quite effective in practice. Since worst-case analysis cannot give an adequate justification for this observation, we focus our attention on estimating the probability that, for a random permutation of n elements, the above lower bound is not tight. We show that this probability is low even for small n, and asymptotically Θ(1/n^5), i.e., O(1/n^5) and Ω(1/n^5). This gives a satisfactory explanation to empirical observations and shows that the problem of sorting by reversals and its alternating-cycle relaxation are essentially the same problem, with the exception of a small fraction of pathological instances, justifying the use of algorithms which are heavily based on this relaxation. From our analysis we obtain convenient sufficient conditions to test if the alternating-cycle lower bound is tight for a given instance. We also consider the case of signed permutations, for which the analysis is much simpler, and show that the probability that the alternating-cycle lower bound is not tight for a random signed permutation of m elements is asymptotically Θ(1/m^2).
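The sketch below computes the cycle lower bound n + 1 − c(π) for the signed case via the standard breakpoint-graph construction; the unsigned case, where finding a maximum alternating-cycle decomposition is itself hard, is not handled here.

```python
def cycle_lower_bound(signed_perm):
    """Cycle lower bound n + 1 - c(pi) for sorting a signed permutation by reversals."""
    n = len(signed_perm)
    # map +x -> (2x-1, 2x), -x -> (2x, 2x-1), and frame the sequence with 0 and 2n+1
    seq = [0]
    for x in signed_perm:
        seq += [2 * x - 1, 2 * x] if x > 0 else [-2 * x, -2 * x - 1]
    seq.append(2 * n + 1)
    # black (reality) edges join the two endpoints of each gap between genes
    black = {}
    for i in range(0, 2 * n + 2, 2):
        a, b = seq[i], seq[i + 1]
        black[a], black[b] = b, a
    # gray (desire) edges join values 2i and 2i+1
    gray = {v: v + 1 if v % 2 == 0 else v - 1 for v in range(2 * n + 2)}
    seen, cycles = set(), 0
    for start in range(2 * n + 2):
        if start in seen:
            continue
        cycles += 1
        v, use_black = start, True      # traverse the alternating cycle through start
        while v not in seen:
            seen.add(v)
            v = black[v] if use_black else gray[v]
            use_black = not use_black
    return (n + 1) - cycles

print(cycle_lower_bound([1, 2, 3]))    # identity permutation: lower bound 0
print(cycle_lower_bound([3, -2, 1]))   # a scrambled signed permutation
```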

17.
The delta method and continuous mapping theorem are among the most extensively used tools in asymptotic derivations in econometrics. Extensions of these methods are provided for sequences of functions that are commonly encountered in applications and where the usual methods sometimes fail. Important examples of failure arise in the use of simulation‐based estimation methods such as indirect inference. The paper explores the application of these methods to the indirect inference estimator (IIE) in first order autoregressive estimation. The IIE uses a binding function that is sample size dependent. Its limit theory relies on a sequence‐based delta method in the stationary case and a sequence‐based implicit continuous mapping theorem in unit root and local to unity cases. The new limit theory shows that the IIE achieves much more than (partial) bias correction. It changes the limit theory of the maximum likelihood estimator (MLE) when the autoregressive coefficient is in the locality of unity, reducing the bias and the variance of the MLE without affecting the limit theory of the MLE in the stationary case. Thus, in spite of the fact that the IIE is a continuously differentiable function of the MLE, the limit distribution of the IIE is not simply a scale multiple of the MLE, but depends implicitly on the full binding function mapping. The unit root case therefore represents an important example of the failure of the delta method and shows the need for an implicit mapping extension of the continuous mapping theorem.
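A bare-bones sketch of indirect inference for the AR(1) coefficient: simulate the (biased) OLS estimator at candidate values of ρ to trace out a sample-size-dependent binding function, then invert it at the observed estimate. Grid inversion and the simulation sizes are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

def ols_ar1(y):
    """OLS/ML estimator of the AR(1) coefficient (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(rho, n):
    e = rng.normal(size=n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def binding(rho, n, reps=200):
    """Simulated binding function b_n(rho): mean of the OLS estimator at rho."""
    return np.mean([ols_ar1(simulate_ar1(rho, n)) for _ in range(reps)])

# "Observed" data from a near-unit-root process
n, rho_true = 100, 0.95
rho_ols = ols_ar1(simulate_ar1(rho_true, n))

# Invert the binding function over a grid of candidate values
grid = np.linspace(0.80, 1.00, 41)
b = np.array([binding(r, n) for r in grid])
rho_iie = grid[np.argmin(np.abs(b - rho_ols))]
print(f"OLS {rho_ols:.3f}, indirect inference {rho_iie:.3f} (true {rho_true})")
```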

18.
In this paper we present two main results about the inapproximability of the exemplar conserved interval distance problem of genomes. First, we prove that it is NP-complete to decide whether the exemplar conserved interval distance between any two genomes is zero or not. This result implies that the exemplar conserved interval distance problem does not admit any approximation in polynomial time, unless P=NP. In fact, this result holds even when every gene appears in each of the given genomes at most three times. Second, we strengthen the first result under a weaker definition of approximation, called weak approximation. We show that the exemplar conserved interval distance problem does not admit any weak approximation within a factor super-linear in m, where m is the maximal length of the given genomes. We also investigate polynomial time algorithms for solving the exemplar conserved interval distance problem when certain constraints are given. We prove that the zero exemplar conserved interval distance problem of two genomes is decidable in polynomial time when one genome is O(log n)-spanned. We also prove that one can solve the constant-sized exemplar conserved interval distance problem in polynomial time, provided that one genome is trivial.

19.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job‐search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long‐standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood‐based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.
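A toy one-parameter analogue of the "jump in the density" structure described above (not the paper's regression setting): data are θ plus a unit exponential, so the density jumps from zero to one at θ. The MLE is the sample minimum, and the flat-prior posterior is available in closed form, giving a posterior-quantile interval.

```python
import numpy as np

rng = np.random.default_rng(11)
theta_true, n = 2.0, 200
y = theta_true + rng.exponential(1.0, size=n)

theta_mle = y.min()                        # converges at rate n rather than sqrt(n)
# Flat-prior posterior is proportional to exp(n*(theta - y.min())) on theta <= y.min(),
# i.e. y.min() - theta | data ~ Exponential(n); posterior-quantile 95% interval:
lo = y.min() + np.log(0.025) / n
hi = y.min() + np.log(0.975) / n
print(f"MLE {theta_mle:.4f}, 95% credible interval ({lo:.4f}, {hi:.4f})")
```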

20.
This paper considers the on-line problem of scheduling nonpreemptively n independent jobs on m > 1 identical and parallel machines with the objective of maximizing the minimum machine completion time. It is assumed that the values of the processing times are unknown but the order of the jobs by their processing times is known in advance. We are asked to decide the assignment of all the jobs to some machines at time zero by utilizing only ordinal data rather than the actual magnitudes of jobs. Algorithms to solve the problem are called ordinal algorithms. In this paper, we give lower bounds and ordinal algorithms. We first propose an algorithm MIN whose competitive ratio for any m-machine case is of the same order as the lower bound Σ_{i=1}^{m} 1/i: both are Θ(ln m). Furthermore, for m = 3, we present an optimal algorithm.
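A small sketch of what "ordinal" means here: jobs are assigned using only their rank by processing time. The round-robin rule below is a simple illustrative heuristic, not necessarily the MIN algorithm of the paper.

```python
import numpy as np

def ordinal_round_robin(order, m):
    """Assign jobs to machines cyclically using only their rank (longest first)."""
    return {job: rank % m for rank, job in enumerate(order)}

rng = np.random.default_rng(10)
p = rng.uniform(1.0, 10.0, size=12)            # actual processing times (unknown to the rule)
order = np.argsort(-p)                          # only this ordinal information is used
assign = ordinal_round_robin(order, m=3)

loads = np.zeros(3)
for job, machine in assign.items():
    loads[machine] += p[job]
print("machine loads:", loads, "objective (minimum load):", loads.min())
```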
