Similar Literature
20 similar documents found.
1.
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.
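As a rough illustration of the quantity the paper controls, the hedged sketch below (not the paper's three-step method; the statistic, sample, and values of B are arbitrary) compares bootstrap standard errors based on a finite B with a large-B benchmark standing in for the ideal B = ∞ value:

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_se(data, stat, B, rng):
    """Bootstrap standard error of `stat` based on B resamples."""
    n = len(data)
    reps = np.array([stat(rng.choice(data, size=n, replace=True)) for _ in range(B)])
    return reps.std(ddof=1)

data = rng.exponential(scale=2.0, size=100)      # toy sample
se_ideal = boot_se(data, np.mean, 50_000, rng)   # proxy for the B = infinity ideal

for B in (100, 500, 2000):
    # Repeat the B-draw bootstrap a few times to see how much it fluctuates
    devs = [abs(boot_se(data, np.mean, B, rng) - se_ideal) / se_ideal for _ in range(20)]
    print(f"B={B:5d}: max % deviation from large-B benchmark ~ {100 * max(devs):.1f}%")
```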

2.
The U.S. Environmental Protection Agency's cancer guidelines (USEPA, 2005) present the default approach for the cancer slope factor (denoted here as s*) as the slope of the linear extrapolation to the origin, generally drawn from the 95% lower confidence limit on dose at the lowest prescribed risk level supported by the data. In the past, the cancer slope factor has been calculated as the upper 95% confidence limit on the coefficient (q1*) of the linear term of the multistage model for the extra cancer risk over background. To what extent do the two approaches differ in practice? We addressed this issue by calculating s* and q1* for 102 data sets for 60 carcinogens using the constrained multistage model to fit the dose-response data. We also examined how frequently the fitted dose-response curves departed appreciably from linearity at low dose by comparing q1, the coefficient of the linear term in the multistage polynomial, with a slope factor, sc, derived from a point of departure based on the maximum likelihood estimate of the dose-response. Another question we addressed is the extent to which s* exceeded sc for various levels of extra risk. For the vast majority of chemicals, the prescribed default EPA methodology for the cancer slope factor provides values very similar to those obtained with the traditionally estimated q1*. At 10% extra risk, q1*/s* is greater than 0.3 for all except one data set; for 82% of the data sets, q1* is within 0.9 to 1.1 of s*. At the 10% response level, the interquartile range of the ratio, s*/sc, is 1.4 to 2.0.
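The arithmetic behind the quantities being compared can be sketched as follows; all numbers are hypothetical, and the constrained multistage fit that would actually produce LED10, ED10, and q1* is not shown:

```python
# Hypothetical benchmark-dose outputs for one data set (illustration only;
# obtaining them requires fitting the constrained multistage model).
extra_risk = 0.10   # benchmark response (10% extra risk)
led10      = 2.5    # 95% lower confidence limit on the dose giving 10% extra risk (mg/kg-day)
ed10       = 4.0    # maximum-likelihood estimate of that dose
q1_star    = 0.035  # upper 95% confidence limit on the multistage linear coefficient (per mg/kg-day)

s_star = extra_risk / led10   # default slope factor: linear extrapolation to the origin from LED10
s_c    = extra_risk / ed10    # slope factor from the MLE-based point of departure

print(f"s*     = {s_star:.4f} per mg/kg-day")
print(f"s_c    = {s_c:.4f} per mg/kg-day")
print(f"q1*/s* = {q1_star / s_star:.2f}   (abstract: > 0.3 for all but one data set)")
print(f"s*/s_c = {s_star / s_c:.2f}    (abstract: interquartile range 1.4 to 2.0 at 10% risk)")
```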

3.
Recognizing the potential importance of new uses of analysis, the author provides an alternative approach for constructing the confidence interval (C.I.) for the population coefficient of variation, V = σ/μ. More specifically, the comment demonstrates the use of some of the existing percentage points of the sample coefficient of variation, v = s/x̄. The results of these approximations are then compared empirically with the results recently published by Hayya, Copeland and Chan [1, pp. 115–118]. The available approximations given by Iglewicz [2] and again by Iglewicz and Myers [3], over the practical range of V and of the sample size n, are more accurate than those obtained using the method in [1].
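For concreteness, here is a minimal sketch of one standard large-sample interval for the population CV, a delta-method approximation assuming roughly normal data; it is not the Iglewicz or Hayya-Copeland-Chan percentage-point approximations discussed above:

```python
import numpy as np
from scipy import stats

def cv_confint(x, level=0.95):
    """Approximate CI for the population CV via the delta method (normal data assumed)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    v = x.std(ddof=1) / x.mean()           # sample coefficient of variation v = s / x-bar
    se = v * np.sqrt((0.5 + v**2) / n)     # delta-method standard error of v
    z = stats.norm.ppf(0.5 + level / 2)
    return v, (v - z * se, v + z * se)

rng = np.random.default_rng(1)
x = rng.normal(loc=50.0, scale=10.0, size=60)   # true V = 0.2
v, (lo, hi) = cv_confint(x)
print(f"v = {v:.3f}, approx 95% CI = ({lo:.3f}, {hi:.3f})")
```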

4.
This paper establishes the higher-order equivalence of the k-step bootstrap, introduced recently by Davidson and MacKinnon (1999), and the standard bootstrap. The k-step bootstrap is a very attractive alternative computationally to the standard bootstrap for statistics based on nonlinear extremum estimators, such as generalized method of moments and maximum likelihood estimators. The paper also extends results of Hall and Horowitz (1996) to provide new results regarding the higher-order improvements of the standard bootstrap and the k-step bootstrap for extremum estimators (compared to procedures based on first-order asymptotics). The results of the paper apply to Newton-Raphson (NR), default NR, line-search NR, and Gauss-Newton k-step bootstrap procedures. The results apply to the nonparametric iid bootstrap and nonoverlapping and overlapping block bootstraps. The results cover symmetric and equal-tailed two-sided t tests and confidence intervals, one-sided t tests and confidence intervals, Wald tests and confidence regions, and J tests of over-identifying restrictions.

5.
Pandu R. Tadikamalla, Omega, 1984, 12(6): 575–581
Several distributions have been used for approximating the lead time demand distribution in inventory systems. We compare five distributions (the normal, the logistic, the lognormal, the gamma, and the Weibull) for obtaining the expected number of back orders, the reorder levels needed to achieve a given protection level, and the optimal order quantity and reorder level in continuous review models of the (Q, r) type. The normal and the logistic distributions are inadequate for representing situations where the coefficient of variation (the ratio of the standard deviation to the mean) of the lead time demand distribution is large. The lognormal, the gamma, and the Weibull distributions are versatile and adequate; however, the lognormal seems to be a viable candidate because of its computational simplicity.
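A small sketch of the kind of comparison involved: expected backorders E[(D − r)+] at a candidate reorder level under a normal versus a lognormal lead time demand matched on mean and coefficient of variation. The parameter values are illustrative, and the full (Q, r) optimization is not shown:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, cv = 100.0, 0.8          # lead time demand mean and coefficient of variation (illustrative)
sigma = cv * mu
r = 150.0                    # candidate reorder level

# Normal: closed-form loss function sigma * [phi(z) - z * (1 - Phi(z))]
z = (r - mu) / sigma
eb_normal = sigma * (stats.norm.pdf(z) - z * stats.norm.sf(z))

# Lognormal with the same mean and CV: E[(D - r)+] by numerical integration
s2 = np.log(1 + cv**2)                  # variance of log demand
m = np.log(mu) - s2 / 2                 # mean of log demand
ln = stats.lognorm(s=np.sqrt(s2), scale=np.exp(m))
eb_lognormal, _ = quad(lambda d: (d - r) * ln.pdf(d), r, np.inf)

print(f"Expected backorders, normal:    {eb_normal:.2f}")
print(f"Expected backorders, lognormal: {eb_lognormal:.2f}")
```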

6.
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor-augmented regressions are consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
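A minimal sketch of the two-step procedure on simulated data; the lagged-dependent-variable regression, the dimensions, and the normalization of the principal-component factors are all illustrative assumptions, and the paper's prediction-interval formulas are not implemented:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, k = 200, 100, 2                      # time periods, series, number of factors

# Simulated panel X (T x N) driven by k common factors, plus a target series y
F = rng.normal(size=(T, k))
Lam = rng.normal(size=(N, k))
X = F @ Lam.T + rng.normal(size=(T, N))
y = F @ np.array([1.0, -0.5]) + rng.normal(scale=0.5, size=T)

# Step 1: estimate factors by principal components of the standardized panel
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = Xs @ Vt[:k].T / np.sqrt(N)         # estimated factors (T x k), up to rotation and scale

# Step 2: factor-augmented regression of y_{t+1} on a constant, y_t, and the estimated factors
Z = np.column_stack([np.ones(T - 1), y[:-1], F_hat[:-1]])
beta, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
y_pred = (np.column_stack([1.0, y[-1], *F_hat[-1]]) @ beta).item()   # one-step-ahead conditional mean
print("coefficients:", np.round(beta, 3), " forecast:", round(y_pred, 3))
```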

7.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]).
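The performance measures reported above reduce to simple arithmetic; a sketch with hypothetical elicited 80% intervals and realized outcomes:

```python
import numpy as np

# Hypothetical elicited 80% intervals (lower, upper) and the later-realized true values
lower = np.array([ 5.0, 12.0, 0.5, 30.0,  8.0])
upper = np.array([15.0, 20.0, 4.0, 55.0, 14.0])
truth = np.array([ 9.0, 23.0, 3.1, 42.0, 16.0])
assigned_confidence = 0.80

hits = (truth >= lower) & (truth <= upper)
hit_rate = hits.mean()
overconfidence = assigned_confidence - hit_rate   # positive values indicate overconfidence

print(f"hit rate       = {hit_rate:.0%}")
print(f"overconfidence = {overconfidence:+.1%}   (the abstract reports ~11.9% on average)")
```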

8.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

9.
A recent paper in this journal (Fann et al., 2012) estimated that “about 80,000 premature mortalities would be avoided by lowering PM2.5 levels to 5 μg/m3 nationwide” and that 2005 levels of PM2.5 cause about 130,000 premature mortalities per year among people over age 29, with a 95% confidence interval of 51,000 to 200,000 premature mortalities per year.(1) These conclusions depend entirely on misinterpreting statistical coefficients describing the association between PM2.5 and mortality rates in selected studies and models as if they were known to be valid causal coefficients. But they are not, and both the expert opinions of EPA researchers and analysis of data suggest that a true value of zero for the PM2.5 mortality causal coefficient is not excluded by available data. Presenting continuous confidence intervals that exclude the discrete possibility of zero misrepresents what is currently known (and not known) about the hypothesized causal relation between changes in PM2.5 levels and changes in mortality rates, suggesting greater certainty about projected health benefits than is justified.

10.
In the regression-discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data-driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory-based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias-corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close-to-correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
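As a point of reference, the sketch below implements only a conventional sharp-RD local polynomial point estimate with a triangular kernel; the bias-corrected robust standard errors and MSE-optimal bandwidths of the paper are not reproduced here (they are available in the companion packages mentioned above), and the bandwidth and simulated data are arbitrary. Raising the polynomial order p from 1 to 2 mirrors the "quadratic instead of linear local regression" special case:

```python
import numpy as np

def local_poly_rd(x, y, cutoff=0.0, h=0.5, p=1):
    """Sharp-RD treatment effect: difference of weighted local polynomial
    intercepts fitted separately on each side of the cutoff (triangular kernel)."""
    xc = x - cutoff
    w = np.maximum(1 - np.abs(xc) / h, 0.0)                  # triangular kernel weights
    def side_intercept(mask):
        X = np.vander(xc[mask], N=p + 1, increasing=True)    # [1, x, x^2, ...]
        W = w[mask]
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y[mask]))
        return beta[0]                                       # intercept = fitted value at the cutoff
    above = (xc >= 0) & (w > 0)
    below = (xc < 0) & (w > 0)
    return side_intercept(above) - side_intercept(below)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 1000)
y = 0.4 * (x >= 0) + np.sin(2 * x) + rng.normal(scale=0.3, size=x.size)   # true effect 0.4
print("local linear estimate:   ", round(local_poly_rd(x, y, h=0.4, p=1), 3))
print("local quadratic estimate:", round(local_poly_rd(x, y, h=0.4, p=2), 3))
```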

11.
Organizations in several domains including national security intelligence communicate judgments under uncertainty using verbal probabilities (e.g., likely) instead of numeric probabilities (e.g., 75% chance), despite research indicating that the former have variable meanings across individuals. In the intelligence domain, uncertainty is also communicated using terms such as low, moderate, or high to describe the analyst's confidence level. However, little research has examined how intelligence professionals interpret these terms and whether they prefer them to numeric uncertainty quantifiers. In two experiments (N = 481 and 624, respectively), uncertainty communication preferences of expert (n = 41 intelligence analysts in Experiment 1) and nonexpert intelligence consumers were elicited. We examined which format participants judged to be more informative and simpler to process. We further tested whether participants treated verbal probability and confidence terms as independent constructs and whether participants provided coherent numeric probability translations of verbal probabilities. Results showed that although most nonexperts favored the numeric format, experts were about equally split, and most participants in both samples regarded the numeric format as more informative. Experts and nonexperts consistently conflated probability and confidence. For instance, confidence intervals inferred from verbal confidence terms had a greater effect on the location of the estimate than on the width of the estimate, contrary to normative expectation. Approximately one-fourth of experts and over one-half of nonexperts provided incoherent numeric probability translations for the terms likely and unlikely when the elicitation of best estimates and lower and upper bounds were briefly spaced by intervening tasks.

12.
In uncertain environments, the master production schedule (MPS) is usually developed using a rolling schedule. When utilizing a rolling schedule, the MPS is replanned periodically and a portion of the MPS is frozen in each planning cycle. The cost performance of a rolling schedule depends on three decisions: the choice of the replanning interval (R), which determines how often the MPS should be replanned; the choice of the frozen interval (F), which determines how many periods the MPS should be frozen in each planning cycle; and the choice of the forecast window (T), which is the time interval over which the MPS is determined using newly updated forecast data. This paper uses an analytical approach to study the master production scheduling process in uncertain environments without capacity constraints, where the MPS is developed using a rolling schedule. It focuses on the choices of F, R, and T for the MPS. A conceptual framework that includes all important MPS time intervals is described. The effects of F, R, and T on system costs, which include the forecast error, MPS change, setup, and inventory holding costs, are also explored. Finally, a mathematical model for the MPS is presented. This model approximates the average system cost as a function of F, R, T, and several environmental factors. It can be used to estimate the associated system costs for any combination of F, R, and T.
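A minimal sketch of the rolling-schedule mechanics defined above (replan every R periods, freeze the first F periods of each plan, plan over a T-period forecast window); the lot-for-lot rule and all numbers are illustrative assumptions, and the paper's cost model is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
F, R, T = 4, 2, 8            # frozen interval, replanning interval, forecast window (periods)
horizon = 12

mps = np.zeros(horizon)      # master production schedule being built over time
frozen_until = 0
for start in range(0, horizon, R):                    # one planning cycle every R periods
    forecast = 100 + rng.normal(scale=15, size=T)     # newly updated forecast over the window
    end = min(start + T, horizon)
    for t in range(start, end):
        if t < frozen_until:
            continue                                  # frozen periods keep their earlier plan
        mps[t] = round(forecast[t - start])           # replan unfrozen periods (lot-for-lot, illustrative)
    frozen_until = max(frozen_until, start + F)       # freeze the first F periods of this plan

print("MPS after rolling replanning:", mps.astype(int))
```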

13.
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall into this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias-reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and we discuss important exceptions. Moreover, the bias depends on the Kullback–Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that, in general, standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.

14.
The coefficient of relative risk aversion is a key parameter for analyses of behavior toward risk, but good estimates of this parameter do not exist. A promising place for reliable estimation is rare macroeconomic disasters, which have a major influence on the equity premium. The premium depends on the probability and size distribution of disasters, gauged by proportionate declines in per capita consumption or gross domestic product. Long-term national-accounts data for 36 countries provide a large sample of disasters of magnitude 10% or more. A power-law density provides a good fit to the size distribution, and the upper-tail exponent, α, is estimated to be around 4. A higher α signifies a thinner tail and, therefore, a lower equity premium, whereas a higher coefficient of relative risk aversion, γ, implies a higher premium. The premium is finite if α > γ. The observed premium of 5% generates an estimated γ close to 3, with a 95% confidence interval of 2 to 4. The results are robust to uncertainty about the values of the disaster probability and the equity premium, and can accommodate seemingly paradoxical situations in which the equity premium may appear to be infinite.
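A hedged sketch of the tail-exponent estimation step: a maximum-likelihood (Hill-type) fit of a Pareto exponent to disaster sizes above the 10% threshold, on simulated data. The threshold, the simulated sample, and the exact size parameterization are illustrative and need not match the paper's transformed-size specification:

```python
import numpy as np

rng = np.random.default_rng(5)
threshold = 0.10                        # disasters: proportionate declines of 10% or more

# Illustrative "observed" disaster sizes drawn from a Pareto tail with exponent 4
alpha_true = 4.0
sizes = threshold * (1 - rng.uniform(size=300)) ** (-1 / alpha_true)

# Maximum-likelihood (Hill-type) estimate of the upper-tail exponent above the threshold
exceedances = sizes[sizes >= threshold]
alpha_hat = len(exceedances) / np.sum(np.log(exceedances / threshold))
se = alpha_hat / np.sqrt(len(exceedances))
print(f"alpha_hat = {alpha_hat:.2f}  (approx 95% CI {alpha_hat - 1.96*se:.2f} to {alpha_hat + 1.96*se:.2f})")
```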

15.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and vicinities of unity that are wider than O(n^(-1/3)). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.

16.
This paper proposes a new framework for determining whether a given relationship is nonlinear, what the nonlinearity looks like, and whether it is adequately described by a particular parametric model. The paper studies a regression or forecasting model of the form y_t = μ(x_t) + ε_t where the functional form of μ(⋅) is unknown. We propose viewing μ(⋅) itself as the outcome of a random process. The paper introduces a new stationary random field m(⋅) that generalizes finite-differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for μ(⋅). We view the parameters that characterize the relation between a given realization of m(⋅) and the particular value of μ(⋅) for a given sample as population parameters to be estimated by maximum likelihood or Bayesian methods. We show that the resulting inference about the functional relation also yields consistent estimates for a broad class of deterministic functions μ(⋅). The paper further develops a new test of the null hypothesis of linearity based on the Lagrange multiplier principle and small-sample confidence intervals based on numerical Bayesian methods. An empirical application suggests that properly accounting for the nonlinearity of the inflation-unemployment trade-off may explain the previously reported uneven empirical success of the Phillips Curve.

17.
In this paper, we propose a simple bias-reduced log-periodogram regression estimator, d̂_r, of the long-memory parameter, d, that eliminates the first- and higher-order biases of the Geweke and Porter-Hudak (1983) (GPH) estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes frequencies to the power 2k for k = 1, …, r, for some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency. Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of d̂_r, determine the asymptotic MSE optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of d̂_r. These results show that the bias of d̂_r goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, but that its variance is increased only by a multiplicative constant. We show that the bias-reduced estimator d̂_r attains the optimal rate of convergence for a class of spectral densities that includes those that are smooth of order s ≥ 1 at zero when r ≥ (s−2)/2 and m is chosen appropriately. For s > 2, the GPH estimator does not attain this rate. The proof uses results of Giraitis, Robinson, and Samarov (1997). We specify a data-dependent plug-in method for selecting the number of frequencies m to minimize asymptotic MSE for a given value of r. Some Monte Carlo simulation results for stationary Gaussian ARFIMA (1, d, 1) and (2, d, 0) models show that the bias-reduced estimators perform well relative to the standard log-periodogram regression estimator.
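A minimal sketch of the estimator as described above: the log-periodogram regression over the first m Fourier frequencies, with the additional regressors λ_j^(2k), k = 1, …, r (r = 0 gives the plain GPH estimator). The −2 log λ_j regressor, the fixed choice of m, and the toy fractionally integrated series are simplifying assumptions; the paper's plug-in selection of m is not implemented:

```python
import numpy as np

def bias_reduced_gph(x, m, r=1):
    """Log-periodogram estimate of the long-memory parameter d, with the
    lambda_j^(2k), k = 1..r, bias-reduction regressors added (r = 0: plain GPH)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n                 # first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)                    # periodogram ordinates
    X = np.column_stack([np.ones(m), -2 * np.log(lam)] +
                        [lam ** (2 * k) for k in range(1, r + 1)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]                                            # coefficient on -2 log(lambda_j) is d

# Toy check on a fractionally integrated series built by filtering white noise
rng = np.random.default_rng(6)
d_true, n = 0.3, 4096
eps = rng.normal(size=n)
psi = np.cumprod(np.r_[1.0, (np.arange(1, n) - 1 + d_true) / np.arange(1, n)])  # MA(inf) weights of (1-L)^(-d)
x = np.convolve(eps, psi)[:n]
m = int(n ** 0.65)
print("plain GPH (r=0):   ", round(bias_reduced_gph(x, m, r=0), 3))
print("bias-reduced (r=1):", round(bias_reduced_gph(x, m, r=1), 3))
```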

18.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those of leverage effects, high frequency betas, and semivariance.

19.
This paper investigates two approaches to patient classification: using patient classification only for sequencing patient appointments at the time of booking and using patient classification for both sequencing and appointment interval adjustment. In the latter approach, appointment intervals are adjusted to match the consultation time characteristics of different patient classes. Our simulation results indicate that new appointment systems that utilize interval adjustment for patient class are successful in improving doctors' idle time, doctors' overtime and patients' waiting times without any trade-offs. Best performing appointment systems are identified for different clinic environments characterized by walk-ins, no-shows, the percentage of new patients, and the ratio of the mean consultation time of new patients to the mean consultation time of return patients. As a result, practical guidelines are developed for managers who are responsible for designing appointment systems.

20.
We consider a single-echelon inventory installation under the (s,S,T) periodic review ordering policy. Demand is stationary random and, when unsatisfied, is backordered. Under a standard cost structure, we seek to minimize total average cost in all three policy variables; namely, the reorder level s, the order-up-to level S and the review interval T. Considering time to be continuous, we first model average total cost per unit time in terms of the decision variables. We then show that the problem can be decomposed into two simpler sub-problems; namely, the determination of locally optimal solutions in s and S (for any T) and the determination of the optimal T. We establish simple bounds and properties that allow solving both these sub-problems and propose a procedure that guarantees global optimum determination in all policy variables via finite search. Computational results reveal that the usual practice of not treating the review interval as a decision variable may carry severe cost penalties. Moreover, cost differences between (s,S,T) and other standard periodic review policies, including the simple base stock policy, are rather marginal (or even zero), when all policies are globally optimized. We provide a physical interpretation of this behavior and discuss its practical implications.
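A hedged simulation sketch of evaluating a single (s, S, T) policy under a standard cost structure (fixed ordering cost K, holding cost h, and backorder cost p per unit per period); the discrete time grid, Poisson demand, fixed lead time, and all parameter values are illustrative assumptions, and the paper's analytical model and global optimization over (s, S, T) are not reproduced:

```python
import numpy as np

def avg_cost_sST(s, S, T, periods=100_000, K=64.0, h=1.0, p=9.0,
                 demand_mean=20.0, lead_time=2, seed=0):
    """Simulated average cost per period of an (s, S, T) periodic-review policy
    with Poisson demand, full backordering, and a fixed replenishment lead time."""
    rng = np.random.default_rng(seed)
    net = float(S)                 # net inventory (negative values are backorders)
    pipeline = {}                  # arrival period -> outstanding order quantity
    cost = 0.0
    for t in range(periods):
        net += pipeline.pop(t, 0.0)                   # receive any order due this period
        if t % T == 0:                                # review every T periods
            position = net + sum(pipeline.values())
            if position <= s:                         # reorder level reached: order up to S
                pipeline[t + lead_time] = S - position
                cost += K
        net -= rng.poisson(demand_mean)               # demand; unmet demand is backordered
        cost += h * max(net, 0.0) + p * max(-net, 0.0)
    return cost / periods

# Illustrative comparison across review intervals for fixed s and S
for T in (1, 3, 5):
    print(f"T = {T}: average cost per period ~ {avg_cost_sST(s=45, S=120, T=T):.1f}")
```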
