Similar Literature (20 results)
1.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M=bT for some constant b∈(0, 1] and sample size T. It is shown that the nonstandard fixed‐b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small‐b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long‐run variance estimator. A plug‐in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug‐in procedure works well in finite samples.
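The robust standard errors described above are built on a kernel long-run variance estimator whose truncation lag is a fixed fraction b of the sample size. A minimal sketch with the Bartlett kernel (the function name, the weighting convention, and the AR(1) test series are illustrative, not taken from the paper):

```python
import numpy as np

def bartlett_lrv(u, b):
    """Long-run variance of a scalar series with the Bartlett kernel and
    truncation lag M = b*T, the fixed-b bandwidth choice discussed above.
    (Function name and weighting convention are illustrative.)"""
    T = len(u)
    M = max(int(b * T), 1)
    u = np.asarray(u, dtype=float) - np.mean(u)
    lrv = u @ u / T                      # lag-0 autocovariance
    for j in range(1, M):
        w = 1.0 - j / M                  # Bartlett weight k(j/M)
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / T
    return lrv

# AR(1) errors with rho = 0.5 (true long-run variance is 4)
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
u = np.empty(500)
u[0] = e[0]
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + e[t]
print(bartlett_lrv(u, b=0.1))
```

With b in (0, 1] the number of lags grows proportionally with T, which is exactly the fixed-b setting whose nonstandard limit theory the paper studies.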

2.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3‐point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4‐step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta‐analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4‐step procedure is more likely to reduce overconfidence than the 3‐point procedure (Cohen's d = 0.61, [0.04, 1.18]).

3.
In using nominal groups for decision making, it is necessary to use some mechanical procedure for combining the evaluations. A simulation model is used to compare procedures for the case where a nominal group of m evaluators must select the best of n alternatives and where the evaluations are subject to random errors. Criteria are the probability of making a correct selection and the relative quality of the choice.
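As a toy illustration of the kind of simulation the abstract describes, the sketch below estimates the probability that a nominal group picks the best alternative when each evaluation is the true value plus Gaussian noise and scores are combined by simple averaging (one mechanical combination rule; all parameters are invented for the example):

```python
import numpy as np

def p_correct(m, n, sigma, reps=2000):
    """Estimate the probability that a nominal group of m evaluators
    selects the best of n alternatives when each evaluation is the true
    value plus N(0, sigma) noise and scores are combined by averaging.
    (Toy version of the abstract's simulation; parameters are invented.)"""
    rng = np.random.default_rng(42)
    true_vals = np.arange(n, dtype=float)     # alternative n-1 is best
    hits = 0
    for _ in range(reps):
        scores = true_vals + rng.normal(0.0, sigma, size=(m, n))
        if scores.mean(axis=0).argmax() == n - 1:
            hits += 1
    return hits / reps

# larger groups choose correctly more often
print(p_correct(1, 5, 1.0), p_correct(9, 5, 1.0))
```

Averaging is just one of the combination procedures such a study might compare; rank-based rules can be simulated the same way by replacing the `mean` aggregation.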

4.
Bajis Dodin, Omega, 1987, 15(6)
The strategic problem of selecting a production plan for a given planning horizon is usually treated as independent of the tactical problem of scheduling the production plan. This paper approaches both selecting the production plan and scheduling it as one problem. The problem is formulated as a zero-one integer program. The formulation accommodates many real-life considerations. The integer program is solved using a branch-and-bound procedure which provides the optimal production plan and schedule as well as the importance indices of the orders, a concept introduced and used in this study to rank the available orders within the planning horizon according to their importance to the firm. The integer program and the search procedure can be used as a decision support tool to respond to any changes in the demand information, the capacity of the firm, or its operating strategy, and they guarantee the selection of feasible production plan(s) and optimal schedules.

5.
The focus of this paper is the nonparametric estimation of an instrumental regression function ϕ defined by conditional moment restrictions that stem from a structural econometric model E[Y − ϕ(Z) | W] = 0, and involve endogenous variables Y and Z and instruments W. The function ϕ is the solution of an ill‐posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyzes identification and overidentification of this model, and presents asymptotic properties of the estimated nonparametric instrumental regression function.
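Tikhonov regularization stabilizes an ill-posed inverse problem by penalizing the norm of the solution. The sketch below illustrates the idea on a generic ill-conditioned linear system; it is not the paper's instrumental-regression estimator, and all names and numbers are invented:

```python
import numpy as np

def tikhonov(K, y, alpha):
    """Tikhonov-regularized solution of K x = y:
    minimize ||K x - y||^2 + alpha * ||x||^2 (generic illustration,
    not the paper's instrumental-regression estimator)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

# An ill-conditioned K with rapidly decaying singular values: plain
# least squares amplifies tiny noise, the regularized solution does not.
rng = np.random.default_rng(1)
s = np.geomspace(1.0, 1e-8, 20)          # decaying spectrum = ill-posedness
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
K = U @ np.diag(s) @ V.T
x_true = V[:, 0]                         # lies in a well-identified direction
y = K @ x_true + 1e-6 * rng.standard_normal(20)

x_reg = tikhonov(K, y, alpha=1e-4)
x_ols, *_ = np.linalg.lstsq(K, y, rcond=None)
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_ols - x_true))
```

The decaying singular values play the role of the ill-posed operator in the instrumental-regression problem; the penalty α trades a little bias for a large reduction in noise amplification.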

6.
In this paper we show that a striking improvement in the explanatory power of a “dividend type” of security valuation model can be obtained by classifying companies into equivalent risk categories, estimating the discount factor for a category, and then constructing a cross-sectional model for it. The increased homogeneity of the data base improves the model's sensitivity to systematic forces, but does not sacrifice the heterogeneity of the independent variables. Assuming that the difference between the intrinsic value of a security and its market value should be zero, the authors demonstrate a method for estimating k_jt, the market discount rate for the jth risk category in the tth period. The results of the estimation procedure appear to be reasonable and, when used in our security valuation model, they produce higher coefficients of determination (R²) than those previously published for similar models.

7.
In this paper, we study a composition (decomposition) technique for the triangle-free subgraph polytope in graphs which are decomposable by means of 3-sums satisfying some property. If a graph G decomposes into two graphs G1 and G2, we show that the triangle-free subgraph polytope of G can be described from two linear systems related to G1 and G2. This gives a way to characterize this polytope on graphs that can be recursively decomposed. This also gives a procedure to derive new facets for this polytope. We also show that, if the systems associated with G1 and G2 are TDI, then the system characterizing the polytope for G is TDI. This generalizes previous results in R. Euler and A.R. Mahjoub (Journal of Combinatorial Theory, Series B, vol. 53, no. 2, pp. 235–259, 1991) and A.R. Mahjoub (Discrete Applied Mathematics, vol. 62, pp. 209–219, 1995).

8.
This paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data, such that uninformative data yield an MCS with many models, whereas informative data yield an MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best regression in terms of in‐sample likelihood criteria.
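A heavily simplified sketch of an MCS-style elimination loop: while a bootstrapped max-t test of equal average loss across the surviving models rejects, drop the worst model. (The actual Hansen, Lunde, and Nason procedure uses a block bootstrap and a different studentization; everything below is an illustrative approximation with invented data.)

```python
import numpy as np

def mcs(losses, alpha=0.1, B=500):
    """Simplified model-confidence-set elimination: while a bootstrapped
    max-t test of equal average loss rejects, drop the worst surviving
    model. (Sketch only; not the published algorithm.)
    losses: (T, k) array of per-period losses for k models."""
    rng = np.random.default_rng(0)
    T, k = losses.shape
    keep = list(range(k))
    while len(keep) > 1:
        d = losses[:, keep]
        d = d - d.mean(axis=1, keepdims=True)   # loss relative to set average
        dbar = d.mean(axis=0)
        se = d.std(axis=0, ddof=1) / np.sqrt(T)
        t_obs = np.max(dbar / se)
        t_boot = np.empty(B)
        for b in range(B):                      # iid bootstrap, recentered
            idx = rng.integers(0, T, T)
            db = d[idx]
            t_boot[b] = np.max((db.mean(axis=0) - dbar)
                               / (db.std(axis=0, ddof=1) / np.sqrt(T)))
        if np.mean(t_boot >= t_obs) > alpha:
            break                               # cannot reject: stop eliminating
        keep.pop(int(np.argmax(dbar)))          # drop the worst model
    return keep

rng = np.random.default_rng(3)
L = rng.standard_normal((200, 3))
L[:, 0] += 1.0                # model 0 is clearly worse
print(mcs(L))                 # model 0 should be eliminated
```

With uninformative (noisy) losses the loop stops early and many models survive; with a clearly dominated model the test rejects and that model is removed, mirroring the data-dependent set size described in the abstract.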

9.
This paper describes a periodic review, fixed lead time, single-product, single-facility model with random demand, lost sales and service constraints that was developed for potential application at a Western Canadian retailer. The objective of this study was to determine optimal (s, S) policies for a large number of products and locations. To this end, we evaluate the long-run average cost and service level for a fixed (s, S) policy and then use a search procedure to locate an optimal policy. The search procedure is based on an efficient updating scheme for the transition probability matrix of the underlying Markov chain, bounds on S and monotonicity assumptions on the cost and service level functions. A distinguishing feature of this model is that lead times are shorter than review periods, so the stationary analysis underlying the computation of costs and service levels requires subtle analysis. We compared the computed policies to those currently in use on a test bed of 420 products, found that stores currently hold inventories that are 40% to 50% higher than those recommended by our model, and estimate that implementing the proposed policies for the entire system would result in significant cost savings.
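To make the evaluate-then-search structure concrete, here is a hedged simulation sketch of a periodic-review (s, S) policy with lost sales. It assumes zero lead time and uses brute-force grid search, whereas the paper exploits the Markov-chain structure, bounds on S, and monotonicity; the costs and demand distribution are invented:

```python
import numpy as np

def evaluate_policy(s, S, demand_pmf, h=1.0, p=5.0, K=10.0, periods=10000):
    """Simulate a periodic-review (s, S) policy with lost sales; return
    average cost per period and the fill rate. Simplifying assumption:
    zero lead time (the paper's lead times are positive but shorter than
    the review period, which needs a subtler stationary analysis).
    h = holding, p = lost-sale penalty, K = fixed order cost (invented)."""
    rng = np.random.default_rng(7)
    demands = rng.choice(len(demand_pmf), size=periods, p=demand_pmf)
    inv, cost, served, demanded = S, 0.0, 0, 0
    for d in demands:
        if inv <= s:                 # review: order up to S, arrives at once
            cost += K
            inv = S
        sold = min(inv, d)
        cost += p * (d - sold)       # lost-sales penalty
        inv -= sold
        cost += h * inv              # end-of-period holding cost
        served += sold
        demanded += d
    return cost / periods, served / demanded

# brute-force search over a small grid (the paper instead exploits the
# Markov-chain structure, bounds on S, and monotonicity)
pmf = [0.1, 0.3, 0.4, 0.2]           # demand in {0, 1, 2, 3}
best = min((evaluate_policy(s, S, pmf)[0], s, S)
           for s in range(0, 6) for S in range(1, 10) if S > s)
print(best)
```

The second return value is the fill rate, which is how a service-level constraint like the paper's could be checked for each candidate policy before accepting it.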

10.
Preemptive Machine Covering on Parallel Machines
This paper investigates preemptive parallel machine scheduling to maximize the minimum machine completion time. We first show that the off-line version can be solved in O(mn) time for the general m-uniform-machine case. Then we study the on-line version. We show that any randomized on-line algorithm must have a competitive ratio of at least m for the m-uniform-machine case and ∑_{i=1}^{m} 1/i for the m-identical-machine case. Lastly, we focus on the two-uniform-machine case. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the on-line problem for every machine speed ratio s ≥ 1. We further consider the case in which idle time is allowed to be introduced in the procedure of assigning jobs and the objective becomes to maximize the continuous period of time (starting from time zero) during which both machines are busy. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the problem for every s ≥ 1. We show that randomization does not help.
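The identical-machine lower bound quoted above is the harmonic number ∑_{i=1}^{m} 1/i, which is easy to tabulate exactly (the function name is illustrative):

```python
from fractions import Fraction

def randomized_lower_bound(m):
    """Harmonic-number lower bound sum_{i=1}^m 1/i on the competitive
    ratio of any randomized on-line algorithm for m identical machines,
    as stated in the abstract."""
    return sum(Fraction(1, i) for i in range(1, m + 1))

print(randomized_lower_bound(3))  # 11/6
```

For m = 2 the bound is 3/2, and it grows like ln m, so no randomized on-line algorithm can stay within a constant factor as the number of identical machines increases.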

11.
Because the eight largest bank failures in United States history have occurred since 1973 [24], the development of early-warning problem-bank identification models is an important undertaking. It has been shown previously [3] [5] that M-estimator robust regression provides such a model. The present paper develops a similar model for the multivariate case using both a robustified Mahalanobis distance analysis [21] and principal components analysis [10]. In addition to providing a successful presumptive problem-bank identification model, combining the use of the M-estimator robust regression procedure and the robust Mahalanobis distance procedure with principal components analysis is also demonstrated to be a general method of outlier detection. The results from using these procedures are compared to some previously suggested procedures, and general conclusions are drawn.
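For reference, the classical (non-robust) Mahalanobis distance screen looks as follows; the paper's contribution is precisely to robustify the location and scatter estimates, which this sketch omits, and the data are invented:

```python
import numpy as np

def mahalanobis_outliers(X, threshold):
    """Flag rows of X whose squared Mahalanobis distance from the sample
    mean exceeds `threshold`. Classical version: the paper robustifies
    the location and scatter estimates, which this sketch does not."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, S_inv, diff)
    return d2 > threshold

rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((100, 2)), [[10.0, 10.0]]])  # one planted outlier
flags = mahalanobis_outliers(X, threshold=9.21)  # chi-square(2) 99% cutoff
print(flags[-1])  # the planted outlier is flagged
```

Because the mean and covariance here are themselves contaminated by the outlier, clusters of outliers can mask each other, which is the motivation for the robustified version the paper uses.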

12.
13.
Chungui Qiao, LABOUR, 2005, 19(4): 767–800
Structure‐preserving estimation (SPREE) is currently used to derive small‐area estimates of unemployment in New Zealand using data from the Household Labour Force Survey and the Ministry of Social Development. Noble et al. (Journal of Official Statistics 18: 45–60, 2002) advocate loglinear modelling as a major improvement on and substitute for SPREE. The algorithm, however, is difficult to implement in SAS, the common statistical platform for the public sector, because of three major problems: (1) their way of writing the design matrix is incompatible with the ‘Proc Genmod’ procedure in SAS; (2) an important step in estimating cell frequencies from survey margins is unclear in the modelling procedure; and (3) the user has to manually write the design matrix of the model. This paper resolves these problems, provides novel SAS programs for implementing the approach, and discusses the implications.

14.
This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities (models defined by moment equalities can also be admitted by combining pairs of weak moment inequalities), which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability. (The objective of covering each element of the identified set with a prespecified probability is treated in Bugni (2010a).) We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.

15.

This paper addresses the two-machine bicriteria dynamic flowshop problem where the setup time of a job is separated from its processing time and is sequenced independently. The performance measure considered is the simultaneous minimization of total flowtime and makespan, which is more effective in reducing the total scheduling cost than a single objective. A frozen-event procedure is first proposed to transform a dynamic scheduling problem into a static one. To solve the transformed static scheduling problem, an integer programming model with N² + 5N variables and 7N constraints is formulated. Because the problem is known to be NP-complete, a heuristic algorithm with complexity O(N³) is provided. A decision index is developed as the basis for the heuristic. Experimental results show that the proposed heuristic algorithm is effective and efficient. The average solution quality of the heuristic algorithm is above 99%. A 15-job case requires only 0.0235 s, on average, to obtain a near-optimal or even optimal solution.
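Once a candidate sequence is fixed, both criteria can be evaluated in a single pass. The sketch below assumes sequence-independent detached setups (a machine may perform a job's setup before the job itself is available to it) and does not implement the paper's frozen-event transformation or its heuristic; all names and numbers are invented:

```python
def evaluate(seq, setup1, setup2, proc1, proc2, release):
    """Makespan and total flowtime of a job sequence in a two-machine
    flowshop with detached, sequence-independent setups. Illustrative
    sketch only; the paper's frozen-event transformation and decision-
    index heuristic are not implemented here."""
    t1 = t2 = 0.0          # next free times of machines 1 and 2
    completion = {}
    for j in seq:
        # machine 1: setup can start at once; processing waits for the release date
        t1 = max(t1 + setup1[j], release[j]) + proc1[j]
        # machine 2: setup can run before the job arrives from machine 1
        t2 = max(t2 + setup2[j], t1) + proc2[j]
        completion[j] = t2
    total_flowtime = sum(completion[j] - release[j] for j in seq)
    return t2, total_flowtime

mk, ft = evaluate(['a', 'b'],
                  setup1={'a': 1, 'b': 1}, setup2={'a': 1, 'b': 1},
                  proc1={'a': 2, 'b': 2}, proc2={'a': 2, 'b': 3},
                  release={'a': 0, 'b': 0})
print(mk, ft)  # 9.0 14.0
```

Evaluating a sequence is cheap; the hard part the paper tackles is searching the sequence space, which is why it pairs the integer program with an O(N³) heuristic.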

16.
This paper provides computationally intensive, yet feasible methods for inference in a very general class of partially identified econometric models. Let P denote the distribution of the observed data. The class of models we consider is defined by a population objective function Q(θ, P) for θ ∈ Θ. The point of departure from the classical extremum estimation framework is that it is not assumed that Q(θ, P) has a unique minimizer in the parameter space Θ. The goal may be either to draw inferences about some unknown point in the set of minimizers of the population objective function or to draw inferences about the set of minimizers itself. In this paper, the object of interest is Θ0(P) = argmin_{θ ∈ Θ} Q(θ, P), and so we seek random sets that contain this set with at least some prespecified probability asymptotically. We also consider situations where the object of interest is the image of Θ0(P) under a known function. Random sets that satisfy the desired coverage property are constructed under weak assumptions. Conditions are provided under which the confidence regions are asymptotically valid not only pointwise in P, but also uniformly in P. We illustrate the use of our methods with an empirical study of the impact of top‐coding outcomes on inferences about the parameters of a linear regression. Finally, a modest simulation study sheds some light on the finite‐sample behavior of our procedure.

17.
Abstract

The central theme of this article is performance management, defined as the activities of organizations aimed at an effective and efficient use of their human resources. The organization focused on in particular is the hospital. Three principles taken from motivation theory that are basic to performance management are dealt with: goal setting, feedback and reinforcement. Next, a recently developed procedure (Pritchard 1990, Pritchard et al. 1988, 1989) for the design of performance management systems is described. This procedure, ProMES (Productivity Measurement and Enhancement System), is explained using a team of ward nurses as a hypothetical example. In addition to the nursing wards example, other potential applications of the ProMES technique to several hospital areas are mentioned. Finally, some conditions that should be fulfilled in order to successfully start a ProMES project are discussed.

18.
We consider the specially structured (pure) integer Quadratic Multi-Knapsack Problem (QMKP) tackled in the paper “Exact solution methods to solve large scale integer quadratic knapsack problems” by D. Quadri, E. Soutif and P. Tolla (2009), which recently appeared in this journal, where the problem is solved by transforming it into an equivalent 0–1 linearized Multi-Knapsack Problem (MKP). We show that, by taking advantage of the structure of the transformed MKP, it is possible to derive an effective variable-fixing procedure leading to an improved branch-and-bound approach. This procedure dramatically reduces the size of the resulting linear problem, yielding a marked improvement in the performance of the associated branch-and-bound approach compared with the approach proposed by D. Quadri, E. Soutif and P. Tolla.

19.
This paper provides new estimates of male-female earnings differentials in Russia, incorporating the Heckman (Econometrica 47: 153–161, 1979) two-step procedure for sample selection bias. This adjustment is necessary in the case of female earnings because women who participate in the labour market may be a non-random subset of those who could work; the technique models the participation decision of women and corrects their earnings for self-selection. The gender gap is then calculated using the methods of Oaxaca (International Economic Review 14: 693–709, 1973) and Reimers (Review of Economics and Statistics 65: 570–579, 1983). The results indicate that the unexplained part of the earnings differential is smaller than in other studies that did not correct for sample selection.
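A compact sketch of the Heckman two-step procedure on simulated data (the data-generating process and every number below are invented for illustration): step 1 fits a probit participation equation by maximum likelihood; step 2 runs OLS on the selected sample with the inverse Mills ratio added as a regressor:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulated data: wage observed only for labour-market participants;
# the selection and wage errors are correlated (rho = 0.6), so OLS on
# the selected sample alone is biased. All numbers are invented.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)                    # wage determinant
z = rng.standard_normal(n)                    # exclusion restriction
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
participate = 0.5 + z + u[:, 0] > 0
wage = 1.0 + 2.0 * x + u[:, 1]

# Step 1: probit of participation on [1, z] by maximum likelihood
W = np.column_stack([np.ones(n), z])
def negloglik(g):
    idx = W @ g
    return -np.where(participate, norm.logcdf(idx), norm.logcdf(-idx)).sum()
g_hat = minimize(negloglik, np.zeros(2), method="BFGS").x

# Step 2: OLS of wage on [1, x, inverse Mills ratio] for participants
idx = (W @ g_hat)[participate]
mills = norm.pdf(idx) / norm.cdf(idx)         # inverse Mills ratio
X = np.column_stack([np.ones(mills.size), x[participate], mills])
beta, *_ = np.linalg.lstsq(X, wage[participate], rcond=None)
print(beta)   # roughly [1.0, 2.0, 0.6]; the last entry estimates rho*sigma
```

The coefficient on the Mills-ratio term captures the selection correction; a significantly nonzero estimate is evidence that participants are indeed a non-random subset, the situation the abstract describes for Russian women's earnings.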

20.
We study the incentives of candidates to strategically affect the outcome of a voting procedure. We show that the outcomes of every nondictatorial voting procedure that satisfies unanimity will be affected by the incentives of noncontending candidates (i.e., those who cannot win the election) to influence the outcome by entering or exiting the election.
