Similar Literature
20 similar documents found (search time: 640 ms)
1.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K-stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade-off between finite sample precision and computational cost in the sequence of policy iteration estimators.
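To make the K-stage iteration concrete, here is a minimal sketch of an NPL-style loop on a toy five-state replacement model. The primitives (linear mileage cost, known replacement cost RC, deterministic transitions, discount factor 0.95, the simulated panel) are invented for illustration and are not the paper's specification or Rust's bus data; only the structure (pseudo-likelihood step, then a CCP update through the policy-iteration mapping Psi) follows the abstract.

```python
# Toy NPL sketch: alternate a pseudo-ML step for theta with a CCP update.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
S, beta, RC, gamma = 5, 0.95, 3.0, 0.5772156649  # states, discount, replace cost, Euler const

def transition(a):
    """Deterministic transitions: keep -> x+1 (capped), replace -> 0."""
    F = np.zeros((S, S))
    for x in range(S):
        F[x, 0 if a == 1 else min(x + 1, S - 1)] = 1.0
    return F

F0, F1 = transition(0), transition(1)

def psi(theta, P):
    """Policy-iteration mapping Psi(theta, P): choice probabilities in, out."""
    u0, u1 = -theta * np.arange(S), -RC * np.ones(S)      # keep / replace utilities
    e = gamma - np.log(np.clip(P, 1e-12, 1.0))            # E[eps | choice] under logit
    FP = P[:, [0]] * F0 + P[:, [1]] * F1
    uP = P[:, 0] * (u0 + e[:, 0]) + P[:, 1] * (u1 + e[:, 1])
    V = np.linalg.solve(np.eye(S) - beta * FP, uP)        # value of policy P
    v = np.column_stack([u0 + beta * F0 @ V, u1 + beta * F1 @ V])
    ex = np.exp(v - v.max(axis=1, keepdims=True))
    return ex / ex.sum(axis=1, keepdims=True)

# Simulate data from theta_true, then run K NPL steps.
theta_true, P_true = 1.0, np.full((S, 2), 0.5)
for _ in range(200):
    P_true = psi(theta_true, P_true)
xs = rng.integers(0, S, 2000)
acts = (rng.random(2000) < P_true[xs, 1]).astype(int)

P_k = np.full((S, 2), 0.5)            # first-stage nonparametric CCPs (frequencies)
for x in range(S):
    if (xs == x).any():
        p1 = np.clip(acts[xs == x].mean(), 0.05, 0.95)
        P_k[x] = (1 - p1, p1)

for k in range(1, 4):                 # K = 1 is Hotz-Miller-like; K large -> NFXP-like
    nll = lambda th: -np.log(psi(th, P_k)[xs, acts]).sum()
    theta_k = minimize_scalar(nll, bounds=(0.01, 5.0), method="bounded").x
    P_k = psi(theta_k, P_k)           # CCP update: the K-stage estimator
    print(f"K={k}: theta_hat = {theta_k:.3f}")
```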

2.
We examine challenges to estimation and inference when the objects of interest are nondifferentiable functionals of the underlying data distribution. This situation arises in a number of applications of bounds analysis and moment inequality models, and in recent work on estimating optimal dynamic treatment regimes. Drawing on earlier work relating differentiability to the existence of unbiased and regular estimators, we show that if the target object is not differentiable in the parameters of the data distribution, there exist no estimator sequences that are locally asymptotically unbiased or α‐quantile unbiased. This places strong limits on estimators, bias correction methods, and inference procedures, and provides motivation for considering other criteria for evaluating estimators and inference procedures, such as local asymptotic minimaxity and one-sided quantile unbiasedness.
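A quick Monte Carlo illustration of the phenomenon, not the paper's framework: the functional g(mu) = max(mu1, mu2) is nondifferentiable at mu1 = mu2, and exactly there the plug-in estimator is biased upward; away from the kink the bias vanishes. The sample sizes and design below are arbitrary.

```python
# Bias of the plug-in estimator of max(mu1, mu2) at and away from the kink.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20000
for mu2 in (0.0, 0.5):                      # mu2 = 0 is the nondifferentiable point
    est = np.empty(reps)
    for r in range(reps):
        x1 = rng.normal(0.0, 1.0, n)        # mu1 = 0 throughout
        x2 = rng.normal(mu2, 1.0, n)
        est[r] = max(x1.mean(), x2.mean())  # plug-in estimator
    print(f"mu = (0, {mu2}): bias = {est.mean() - max(0.0, mu2):+.4f}")
```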

3.
Ismail Karaoglan & Imdat Kara, Omega, 2012, 40(4): 465-477
In this paper, we consider a variant of the Location-Routing Problem (LRP), namely the LRP with simultaneous pickup and delivery (LRPSPD). The LRPSPD seeks to minimize total cost by simultaneously locating the depots and designing the vehicle routes that satisfy the pickup and delivery demand of each customer at the same time. We propose two polynomial-size mixed integer linear programming formulations for the problem and a family of valid inequalities to strengthen the formulations. While the first formulation is node-based, the second is flow-based. Furthermore, we propose a two-phase heuristic approach based on simulated annealing, tp_SA, to solve large-size LRPSPD instances, together with two initialization heuristics that generate an initial solution for the tp_SA. We then empirically evaluate the strengths of the proposed formulations with respect to their ability to find optimal solutions or strong lower bounds, and investigate the performance of the proposed heuristic approach. Computational results show that the flow-based formulation performs better than the node-based formulation in terms of solution quality and computation time on small-size problems. However, the node-based formulation can yield competitive lower bounds in a reasonable amount of time on medium-size problems. Meanwhile, the proposed heuristic approach is computationally efficient in finding good-quality solutions for the LRPSPD.
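For readers unfamiliar with the metaheuristic side, here is a generic simulated-annealing skeleton of the kind tp_SA builds on, annealing over customer permutations scored as a single route. The instance, swap neighborhood, and geometric cooling schedule are illustrative assumptions, not the paper's design.

```python
# Generic SA skeleton for a routing-style permutation problem.
import math, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]  # customers
depot = (0.5, 0.5)

def route_cost(perm):
    tour = [depot] + [pts[i] for i in perm] + [depot]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

cur = list(range(len(pts)))
random.shuffle(cur)
best_cost, T = route_cost(cur), 1.0
while T > 1e-3:
    i, j = random.sample(range(len(pts)), 2)
    cand = cur[:]
    cand[i], cand[j] = cand[j], cand[i]                 # swap-move neighborhood
    delta = route_cost(cand) - route_cost(cur)
    if delta < 0 or random.random() < math.exp(-delta / T):
        cur = cand                                      # accept (possibly uphill) move
        best_cost = min(best_cost, route_cost(cur))
    T *= 0.995                                          # geometric cooling
print(f"best route cost found: {best_cost:.3f}")
```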

4.
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}-consistent in general and describe conditions under which matching estimators do attain N^{1/2}-consistency. Second, we show that even in settings where matching estimators are N^{1/2}-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
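The estimator under study is simple to state in code. Below is a minimal sketch of matching with replacement and a fixed number of matches (M = 1) for the ATE; the data-generating process is invented, and none of the paper's variance machinery is reproduced here.

```python
# Nearest-neighbor matching (M = 1, with replacement) for the ATE.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
w = rng.random(n) < 1 / (1 + np.exp(-x))          # treatment, selection on x
y = 1.0 * w + x + rng.normal(size=n)              # true ATE = 1

def match_impute(x, y, donors, targets):
    """Impute each target's missing potential outcome from its single
    nearest covariate neighbor among the donors (matching with replacement)."""
    idx = np.abs(x[targets, None] - x[donors][None, :]).argmin(axis=1)
    return y[donors][idx]

y1, y0 = np.where(w, y, 0.0), np.where(w, 0.0, y)
y1[~w] = match_impute(x, y, w, ~w)    # impute Y(1) for controls
y0[w] = match_impute(x, y, ~w, w)     # impute Y(0) for treated
print(f"matching estimate of ATE: {(y1 - y0).mean():.3f}")
```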

5.
The identification of interaction faults in component-based systems has focused on indicating the presence of faults, rather than their location and magnitude. While this is a valuable step in screening a system for interaction faults prior to its release, it provides little information to assist in the correction of such faults. Consequently, tests to reveal the location of interaction faults are of interest. The problem of nonadaptive location of interaction faults is formalized under the hypothesis that the system contains (at most) some number d of faults, each involving (at most) some number t of interacting factors. Restrictions on the number and size of the putative faults lead to numerous variants of the basic problem. The relationships between this class of problems and interaction testing using covering arrays to indicate the presence of faults, designed experiments to measure and model faults, and combinatorial group testing to locate faults in a more general testing scenario, are all examined. While each has some definite similarities with the fault location problems for component-based systems, each has some striking differences as well. In this paper, we formulate the combinatorial problems for locating and detecting arrays to undertake interaction fault location. Necessary conditions for existence are established, and using a close connection to covering arrays, asymptotic bounds on the size of minimal locating and detecting arrays are established.
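As a concrete baseline for the "indicate presence" primitive that locating and detecting arrays strengthen, the following sketch verifies that a test suite is a covering array of strength t = 2 over binary factors. The five-row array is a standard hand-made example, not taken from the paper.

```python
# Verify pairwise (t = 2) coverage of k binary factors by a test suite.
from itertools import combinations, product

tests = [            # rows: tests; columns: 4 binary factors (a CA(5; 2, 4, 2))
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (1, 0, 1, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 0),
]

def covers_all_pairs(tests, k=4, levels=(0, 1)):
    for c1, c2 in combinations(range(k), 2):
        seen = {(row[c1], row[c2]) for row in tests}
        if set(product(levels, levels)) - seen:
            return False                      # some 2-way interaction untested
    return True

print("strength-2 covering array:", covers_all_pairs(tests))
```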

6.
We develop a practical and novel method for inference on intersection bounds, namely bounds defined by either the infimum or supremum of a parametric or nonparametric function, or, equivalently, the value of a linear programming problem with a potentially infinite constraint set. We show that many bounds characterizations in econometrics, for instance bounds on parameters under conditional moment inequalities, can be formulated as intersection bounds. Our approach is especially convenient for models comprising a continuum of inequalities that are separable in parameters, and also applies to models with inequalities that are nonseparable in parameters. Since analog estimators for intersection bounds can be severely biased in finite samples, routinely underestimating the size of the identified set, we also offer a median‐bias‐corrected estimator of such bounds as a by‐product of our inferential procedures. We develop theory for large sample inference based on the strong approximation of a sequence of series or kernel‐based empirical processes by a sequence of “penultimate” Gaussian processes. These penultimate processes are generally not weakly convergent, and thus are non‐Donsker. Our theoretical results establish that we can nonetheless perform asymptotically valid inference based on these processes. Our construction also provides new adaptive inequality/moment selection methods. We provide conditions for the use of nonparametric kernel and series estimators, including a novel result that establishes strong approximation for any general series estimator admitting linearization, which may be of independent interest.
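The finite-sample bias that motivates the median-bias correction is easy to see in simulation: the infimum of a noisily estimated function is pulled below the true infimum. The flat true function and normal noise below are an illustrative worst case, not the paper's estimator.

```python
# Downward bias of the plug-in (analog) estimator of an intersection bound.
import numpy as np

rng = np.random.default_rng(3)
grid, reps, sd = 50, 10000, 0.1
theta_true = 1.0                                   # inf of a flat function
plug_in = (theta_true + sd * rng.standard_normal((reps, grid))).min(axis=1)
print(f"true bound {theta_true}, mean plug-in estimate {plug_in.mean():.3f}")
# bias ~ sd * E[min of 50 standard normals]: the identified set looks too small
```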

7.
J.M. Wilson, Omega, 1996, 24(6): 681-688
A series of approaches is presented to formulate statistical classification problems using integer programming. The formulations attempt to maximize the number of observations that can be properly classified and utilize single-function, multiple-function, and hierarchical multiple-function approaches to the problems. The formulations are tested using standard software on a sample problem, and the new approaches are compared to those of other authors. As the solution of such problems gives rise to various awkward features in an integer programming framework, it is demonstrated that the new formulation approaches do not completely avoid the difficulties of existing methods, but they do offer certain gains.
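A minimal single-function model of the kind surveyed here can be written directly: choose a linear discriminant (w, b) maximizing the number of correctly classified points, with big-M constraints switching each "correct" indicator on. The toy data, the big-M value, and the choice of the PuLP modeling library are illustrative, not the paper's setup.

```python
# Big-M MIP for misclassification-minimizing linear discrimination.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

pts = [((1, 2), 1), ((2, 3), 1), ((3, 1), -1), ((4, 2), -1), ((2, 2), 1)]
M, eps = 100, 0.1

prob = LpProblem("mip_classification", LpMaximize)
w = [LpVariable(f"w{j}", lowBound=-1, upBound=1) for j in range(2)]
b = LpVariable("b", lowBound=-10, upBound=10)
z = [LpVariable(f"z{i}", cat="Binary") for i in range(len(pts))]
prob += lpSum(z)                                  # maximize correct classifications
for i, (x, label) in enumerate(pts):
    score = lpSum(w[j] * x[j] for j in range(2)) + b
    # if z_i = 1, point i must lie on the correct side of the hyperplane
    prob += label * score >= eps - M * (1 - z[i])
prob.solve()
print("correctly classified:", sum(int(v.value()) for v in z))
```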

8.
An important objective of empirical research on treatment response is to provide decision makers with information useful in choosing treatments. This paper studies minimax‐regret treatment choice using the sample data generated by a classical randomized experiment. Consider a utilitarian social planner who must choose among the feasible statistical treatment rules, these being functions that map the sample data and observed covariates of population members into a treatment allocation. If the planner knew the population distribution of treatment response, the optimal treatment rule would maximize mean welfare conditional on all observed covariates. The appropriate use of covariate information is a more subtle matter when only sample data on treatment response are available. I consider the class of conditional empirical success rules; that is, rules assigning persons to treatments that yield the best experimental outcomes conditional on alternative subsets of the observed covariates. I derive a closed‐form bound on the maximum regret of any such rule. Comparison of the bounds for rules that condition on smaller and larger subsets of the covariates yields sufficient sample sizes for productive use of covariate information. When the available sample size exceeds the sufficiency boundary, a planner can be certain that conditioning treatment choice on more covariates is preferable (in terms of minimax regret) to conditioning on fewer covariates.
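The rule itself is elementary; the paper's contribution is the regret bound. For orientation, here is a conditional empirical success rule on one binary covariate with simulated experimental data (all numbers invented): whether conditioning is worthwhile depends on cell sizes, which is exactly what the sufficiency bounds address.

```python
# Conditional empirical success rule: per covariate cell, pick the treatment
# with the higher mean experimental outcome.
import numpy as np

rng = np.random.default_rng(4)
n = 200
cov = rng.integers(0, 2, n)                  # one binary covariate
treat = rng.integers(0, 2, n)                # randomized assignment
effect = np.where(cov == 0, 0.3, -0.3)       # treatment helps only cell 0
y = rng.normal(effect * treat)               # outcomes

for c in (0, 1):
    cell = cov == c
    m0 = y[cell & (treat == 0)].mean()
    m1 = y[cell & (treat == 1)].mean()
    print(f"cell {c}: assign treatment {int(m1 > m0)} "
          f"(empirical means {m0:.3f} vs {m1:.3f})")
```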

9.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source-, area source-, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
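The bootstrap step described here is straightforward to sketch. Lognormal "measurements" below stand in for monitoring data, and 24 sampling days is an arbitrary choice for illustration.

```python
# Bootstrap vs normal-theory 95% bounds on the annual mean concentration.
import numpy as np

rng = np.random.default_rng(5)
sample = rng.lognormal(mean=1.0, sigma=0.8, size=24)   # e.g. 24 sampling days

B = 5000
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(B)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {sample.mean():.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")

se = sample.std(ddof=1) / np.sqrt(sample.size)          # normal-theory comparison
print(f"normal-theory CI ({sample.mean() - 1.96*se:.2f}, "
      f"{sample.mean() + 1.96*se:.2f})")
```

With skewed data and small n, the two intervals can differ noticeably, which is the point of comparing the methods.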

10.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual‐specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual‐specific regressors by means of cross‐section averages such that asymptotically as the cross‐section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross‐sectional averages of the dependent variable and the individual‐specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
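A compact sketch of the pooled CCE idea: partial the cross-section averages of y and x out of each unit's data, then pool. The one-factor design below, with factor loadings deliberately correlated so that naive pooled OLS is visibly biased, is an illustrative assumption, not the paper's Monte Carlo design.

```python
# Pooled CCE (CCEP) vs naive pooled OLS in a one-factor panel.
import numpy as np

rng = np.random.default_rng(6)
N, T, beta = 200, 50, 1.5
f = rng.normal(size=T)                         # unobserved common factor
gx = rng.normal(size=N)
gy = gx + 0.5 * rng.normal(size=N)             # correlated loadings -> OLS biased
x = gx[:, None] * f + rng.normal(size=(N, T))
y = beta * x + gy[:, None] * f + rng.normal(size=(N, T))

H = np.column_stack([np.ones(T), y.mean(axis=0), x.mean(axis=0)])
M = np.eye(T) - H @ np.linalg.solve(H.T @ H, H.T)   # annihilate the averages
num = sum(x[i] @ M @ y[i] for i in range(N))
den = sum(x[i] @ M @ x[i] for i in range(N))
beta_ols = (x.ravel() @ y.ravel()) / (x.ravel() @ x.ravel())
print(f"CCEP: {num/den:.3f}   naive pooled OLS: {beta_ols:.3f}   true: {beta}")
```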

11.
Facility location and vehicle routing are two important logistical problems that are closely interrelated in many real-world applications where locating facilities and determining the associated multi-stop vehicle routes are required simultaneously. Previous research has found that using classical facility location models on these location-routing problems (LRPs) may lead to suboptimal solutions. We propose an approximate approach for the LRPs, which first generates and improves feasible location/allocation schemes with the associated multi-stop routing costs approximated using route length estimators. We then design the minimum-cost routes based on the location/allocation results. We review two estimators that can provide accurate approximations to multi-stop route distances, define the uncapacitated location-capacitated routing problem, and evaluate several heuristic procedures for approximately solving the problem. Computational results show that when vehicle capacities are not too restrictive, the sequential procedures that incorporate the two robust route length estimators can produce good solutions to practical-sized problems with a reasonable amount of computational effort.
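The route-length-estimator idea can be illustrated with the classic sqrt(n*A) approximation for a tour over n uniform points in a region of area A; the constant 0.75 and the nearest-neighbor benchmark below are conventional illustrative choices, not necessarily the two estimators the paper reviews.

```python
# Closed-form route length estimate vs a constructed greedy tour.
import math, random

random.seed(7)
n, A = 100, 1.0
pts = [(random.random(), random.random()) for _ in range(n)]

est = 0.75 * math.sqrt(n * A)          # sqrt(n*A) route length estimator

cur, unvisited, tour_len = pts[0], pts[1:], 0.0
while unvisited:                        # nearest-neighbor tour as a benchmark
    nxt = min(unvisited, key=lambda p: math.dist(cur, p))
    tour_len += math.dist(cur, nxt)
    unvisited.remove(nxt)
    cur = nxt
print(f"estimate {est:.2f} vs nearest-neighbor tour {tour_len:.2f}")
```

The appeal in a location-routing loop is that the estimate prices a candidate allocation in O(1) instead of solving a routing problem per candidate.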

12.
Models and Bounds for Two-Dimensional Level Packing Problems
We consider two-dimensional bin packing and strip packing problems where the items have to be packed by levels. We introduce new mathematical models involving a polynomial number of variables and constraints, and show that their LP relaxations dominate the standard area relaxations. We then propose new (combinatorial) bounds that can be computed in O(n log n) time. We show that they dominate the other bounds, and establish their absolute worst-case behavior. The quality of the models and bounds is evaluated through extensive computational experiments.
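Packing "by levels" is easiest to see through the classic First-Fit Decreasing Height heuristic for strip packing, sketched below; the items are invented and this is a textbook heuristic, not the paper's models or bounds.

```python
# FFDH: sort items by nonincreasing height; place each on the first level
# with enough residual width, opening a new level when none fits.
def ffdh(items, strip_width):
    levels = []  # each level: [remaining width, level height]
    for w, h in sorted(items, key=lambda it: -it[1]):
        for lvl in levels:
            if w <= lvl[0]:
                lvl[0] -= w                          # place on existing level
                break
        else:
            levels.append([strip_width - w, h])      # open a new level
    return sum(h for _, h in levels)                 # total strip height used

items = [(4, 3), (3, 3), (5, 2), (2, 2), (6, 1), (3, 1)]
print("FFDH strip height:", ffdh(items, strip_width=10))
```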

13.
J.R. King, Omega, 1980, 8(6): 655-660
The paper examines how the computational efficiency of tree search applied to job-shop scheduling problems may be improved by a statistical method which provides both a bound and stopping rule for the search process. The Weibull distribution is used as the limiting form of the frequency distribution of the smallest members of samples of feasible job-shop schedules. Estimates of the optimal solution are obtained by calculating the most likely value of the Weibull location parameter, allowing a tight bracket for the optimal makespan value to be established. Estimates of the optimal solution of sub-problems with fixed partial sequence are proposed as approximate lower bounds for a ‘best-bound’ search strategy.
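The statistical bound can be sketched as follows: collect the best makespans from batches of random feasible schedules, fit a three-parameter Weibull, and read the location parameter as an estimate of the optimum. The toy flow-shop instance and scipy's generic fitter are stand-ins for the paper's job-shop setting and its maximum-likelihood calculation.

```python
# Weibull-based estimate of the optimal makespan from sampled schedules.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(8)
proc = rng.uniform(1, 10, size=(8, 5))          # processing times: 8 jobs x 5 machines

def makespan(order):
    """Permutation flow-shop makespan for a given job order."""
    t = np.zeros(proc.shape[1])
    for j in order:
        for m in range(proc.shape[1]):
            t[m] = max(t[m], t[m - 1] if m else 0.0) + proc[j, m]
    return t[-1]

# batch minima approximate draws from the Weibull extreme-value limit
best = [min(makespan(rng.permutation(8)) for _ in range(50)) for _ in range(60)]
shape, loc, scale = weibull_min.fit(best)
print(f"estimated optimal makespan (Weibull location): {loc:.2f}")
print(f"best makespan actually sampled: {min(best):.2f}")
```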

14.
A.B. Jack & R.L.W. Alpine, Omega, 1980, 8(6): 681-689
What is the optimum size of a profession and how should it be determined? If norms about working standards exist, and if it can be assumed that the profession's geographical distribution and organisation are optimal, then manpower planning can be reduced to an arithmetical exercise, and the ideal number of places offered on qualifying courses in colleges and universities will be determined by pass rates. However, in most cases the problems are more complex. A proper concern for professional freedom leads society to tolerate wide variations in professional behaviour and working practice. One aim of policy, whether developed by a Government department or by a professional association or both, may be to promote efficiency, but not at the expense of individual discretion. In such circumstances, working norms do not exist. If, in addition, there is little hard information about the extent of part-time working, actual working practices and so on, it is difficult at first to see how to decide the future size of the profession. The aim of this paper is to illustrate how a simulation exercise combined with a sensitivity analysis was able to contribute to the solution of this problem in the case of one profession, that of opticians. It is hoped that the approach can be adapted to deal with similar problems in other professions that sell their services directly to the public.
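A toy stock-flow version of the kind of simulation-plus-sensitivity exercise described: project the profession's full-time-equivalent size under assumptions about training places, pass rates, wastage, and part-time working, then vary one assumption. All numbers and the crude FTE adjustment are invented for illustration.

```python
# Stock-flow manpower projection with a one-parameter sensitivity sweep.
def project(stock, places, pass_rate, wastage, part_time_share, years=20):
    for _ in range(years):
        stock = stock * (1 - wastage) + places * pass_rate   # exits + new entrants
    return stock * (1 - 0.5 * part_time_share)               # crude FTE adjustment

base = dict(stock=10000, places=600, pass_rate=0.85, wastage=0.03,
            part_time_share=0.2)
print(f"baseline FTE after 20 years: {project(**base):,.0f}")
for w in (0.02, 0.03, 0.04):                                 # sensitivity to wastage
    scenario = {**base, "wastage": w}
    print(f"wastage {w:.0%}: FTE {project(**scenario):,.0f}")
```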

15.
In a previous work we proposed a variable-fixing heuristic for the 0-1 Multidimensional Knapsack Problem (01MDK). This approach uses fractional optima calculated in hyperplanes which contain the binary optimum. This algorithm obtained the best lower bounds on the OR-Library benchmarks. Although it is very attractive in terms of results, the method does not prove the optimality of the solutions found and may fix variables to a non-optimal value. In this paper, we propose an implicit enumeration based on a reduced-cost analysis which tends to fix non-basic variables to their exact values. The combination of two specific constraint propagations based on reduced costs and an efficient enumeration framework enables us both to fix variables and to prune the search tree significantly. Experimentally, our work provides two main contributions: (1) we obtain several new optimal solutions on hard instances of the OR-Library, and (2) we tighten the bounds on the number of items at the optimum for several harder instances.
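The reduced-cost fixing principle is shown below on a single-constraint 0-1 knapsack for clarity (the paper works with the multidimensional case): the LP dual price is the value-to-weight ratio of the critical item, and any variable whose reduced cost exceeds the gap between the LP upper bound and a known feasible value can be fixed. The instance is invented.

```python
# Reduced-cost variable fixing on a toy 0-1 knapsack.
import numpy as np

v = np.array([10, 9, 8, 5, 3], float)    # values
w = np.array([6, 5, 5, 4, 3], float)     # weights
C = 10.0                                  # capacity

order = np.argsort(-v / w)                # greedy LP optimum by ratio
x_lp, cap = np.zeros(len(v)), C
for j in order:
    take = min(1.0, cap / w[j])
    x_lp[j] = take
    cap -= take * w[j]
    if take < 1.0:                        # fractional (critical) item closes the LP
        crit = j
        break
dual = v[crit] / w[crit]                  # LP dual price of the capacity constraint
ub = v @ x_lp                             # LP upper bound
lb = 17.0                                 # value of a known feasible solution
rc = v - dual * w                         # reduced costs
for j in range(len(v)):
    if x_lp[j] == 0.0 and ub + rc[j] < lb:
        print(f"x_{j} fixed to 0 (forcing it to 1 caps value at {ub + rc[j]:.2f})")
    if x_lp[j] == 1.0 and ub - rc[j] < lb:
        print(f"x_{j} fixed to 1 (forcing it to 0 caps value at {ub - rc[j]:.2f})")
```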

16.
This paper demonstrates that the problem of optimizing the work output of an individual employee, subject to a set of fixed-duration rest periods over a continuous time horizon, can be modeled with a quadratic programming formulation. Exogenous determination of break period durations may result from union contracts, management policies, or necessary lower bounds on break time due to transportation or other requirements. Although the new formulation is more complex than that required for previous models, an efficient solution procedure is presented which avoids the complexities of nonlinear programming. The number of iterations required for convergence is bounded from above by the number of rest periods to be scheduled, and all computations are simple. An illustration of the procedure demonstrates that it may be effectively used without computer support. While this research was motivated by the fact that previous work-rest models were not applicable to work horizons which contained a fixed-duration meal break, the procedure can be used to obtain optimal placement for a set of fixed-duration rest periods which may or may not include a meal break. The characteristics of optimal strategies for a set of fixed-duration breaks are shown to have implications for the design of shifts which are potentially more attractive and productive than shifts based upon typical practice. These characteristics are also compared to the general characteristics of optimal policies associated with endogenously determined break durations. This comparison provides insight into the possible consequences of exogenous determination of break durations. The paper concludes with suggestions for additional research.
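To convey the underlying placement problem (though not the paper's QP or its iterative procedure), here is a brute-force toy: output rate decays linearly with time since the last break, and we enumerate all placements of two fixed-duration breaks over a discretized shift. The fatigue model and all numbers are invented.

```python
# Brute-force placement of two fixed-duration breaks on a discretized shift.
from itertools import combinations

H, BREAK = 48, 3          # 48 ten-minute slots; each break lasts 3 slots

def output(break_starts):
    total, fatigue, on_break = 0.0, 0, set()
    for s in break_starts:
        on_break |= set(range(s, s + BREAK))
    for t in range(H):
        if t in on_break:
            fatigue = 0                              # full recovery during a break
        else:
            total += max(0.0, 1 - 0.03 * fatigue)    # rate decays with fatigue
            fatigue += 1
    return total

best = max((cand for cand in combinations(range(H - BREAK), 2)
            if cand[1] - cand[0] >= BREAK),          # breaks must not overlap
           key=output)
print("best break start slots:", best, f"output: {output(best):.1f}")
```

Even this toy reproduces the qualitative finding that optimal breaks roughly equalize the work spells between them.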

17.
We present the Integrated Preference Functional (IPF) for comparing the quality of proposed sets of near-Pareto-optimal solutions to bi-criteria optimization problems. Evaluating the quality of such solution sets is one of the key issues in developing and comparing heuristics for multiple objective combinatorial optimization problems. The IPF is a set functional that, given a weight density function provided by a decision maker and a discrete set of solutions for a particular problem, assigns a numerical value to that solution set. This value can be used to compare the quality of different sets of solutions, and therefore provides a robust, quantitative approach for comparing different heuristic, a posteriori solution procedures for difficult multiple objective optimization problems. We provide specific examples of decision maker preference functions and illustrate the calculation of the resulting IPF for specific solution sets and a simple family of combined objectives.
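A numerical sketch of the idea for linear combined objectives: for each weight lambda in [0,1], take the best value of lambda*f1 + (1-lambda)*f2 attainable within the solution set, then integrate against the weight density. The solution sets and the uniform density below are invented for illustration.

```python
# IPF for two solution sets under a uniform weight density (minimization).
import numpy as np

def ipf(solutions, weights_pdf, grid=1001):
    lam = np.linspace(0, 1, grid)
    f = np.array(solutions)                           # rows: (f1, f2) points
    best = (lam[:, None] * f[:, 0] + (1 - lam[:, None]) * f[:, 1]).min(axis=1)
    return (best * weights_pdf(lam)).mean()           # Riemann approx. on [0, 1]

uniform = lambda lam: np.ones_like(lam)
set_a = [(1.0, 9.0), (3.0, 3.0), (9.0, 1.0)]          # front with a useful middle point
set_b = [(1.0, 9.0), (9.0, 1.0)]                      # extremes only
print(f"IPF(A) = {ipf(set_a, uniform):.3f}  IPF(B) = {ipf(set_b, uniform):.3f}")
# lower is better here: the better-spread set A scores 2.5 vs B's 3.0
```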

18.
In this paper we consider the two-stage stochastic mixed-integer linear programming problem with recourse, which we call the RP problem. A common way to approximate the RP problem, which is usually formulated in terms of scenarios, is to formulate the so-called Expected Value (EV) problem, which considers only the expectation of the random parameters of the RP problem. In this paper we introduce the Conditional Scenario (CS) problem, which represents a midpoint between the RP and EV problems in terms of computational tractability and ability to deal with uncertainty. In the theoretical section we analyze some useful bounds relating the RP, EV, and CS problems. In the numerical example presented here, the CS problem outperformed the EV problem in terms of solution quality, and outperformed the RP problem with the same number of scenarios in terms of solution time.
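The conditional-scenario construction can be sketched as follows: from a joint scenario set for the random vector, build, for each component in turn, the conditional expectation of the whole vector given that component's (discretized) outcome. This yields far fewer scenarios than RP while retaining more distributional information than the single EV point. The scenario data and quartile discretization below are invented.

```python
# EV scenario vs conditional scenarios built from a joint scenario set.
import numpy as np

rng = np.random.default_rng(9)
xi = rng.multivariate_normal([10, 20], [[4, 3], [3, 9]], size=1000)  # RP scenarios

print("EV scenario:", np.round(xi.mean(axis=0), 2))    # the single EV point

cs = []
for j in range(xi.shape[1]):                 # condition on each component in turn
    bins = np.quantile(xi[:, j], [0.25, 0.5, 0.75])
    labels = np.digitize(xi[:, j], bins)     # quartile bin of component j
    for g in range(4):
        cs.append(xi[labels == g].mean(axis=0))   # E[xi | xi_j in bin g]
print(f"{len(cs)} conditional scenarios, e.g.:", np.round(cs[0], 2))
```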

19.
Using a game-theoretic approach, this paper builds a decision model for how the government should determine the optimal guaranteed purchase quantity of electricity when granting a purchase guarantee to a BOT power plant project, and for how the project company should determine its optimal self-marketed quantity given the government's guaranteed purchase quantity. The subgame-perfect Nash equilibrium of this dynamic game with complete information is derived. Based on the properties of the equilibrium, the paper examines the problems with the approach currently used by governments to set guaranteed purchase quantities, and analyzes how the guaranteed purchase price affects both the government's guaranteed-quantity decision and the project company's self-marketing decision. Finally, a numerical example verifies the feasibility of the model and the correctness of the equilibrium analysis.
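A generic backward-induction sketch of the two-stage game structure analyzed here: the government (leader) sets the guaranteed quantity Q, the project company (follower) replies with its self-marketed quantity q(Q), and the equilibrium optimizes the leader's payoff against the follower's best response. Both payoff functions below are invented placeholders, not the paper's model.

```python
# Numerical backward induction for a leader-follower (Stackelberg) game.
import numpy as np

pg, a, b, c = 0.30, 1.0, 0.002, 0.15   # guarantee price, demand intercept/slope, unit cost

def firm_best_q(Q):
    """Follower: choose self-marketed q to maximize profit given Q."""
    qs = np.linspace(0, 400, 4001)
    profit = pg * Q + (a - b * (Q + qs)) * qs - c * (Q + qs)
    return qs[profit.argmax()]

def gov_payoff(Q):
    q = firm_best_q(Q)                 # anticipate the follower's reply
    total = Q + q
    return (a * total - 0.5 * b * total**2) - pg * Q - 0.1 * q   # toy welfare

Qs = np.linspace(0, 400, 401)
Q_star = Qs[np.array([gov_payoff(Q) for Q in Qs]).argmax()]
print(f"equilibrium: Q* = {Q_star:.0f}, q* = {firm_best_q(Q_star):.0f}")
```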

20.
In this paper, we consider optimal portfolio selection with no short sales and with upper bounds for individual securities. The solution is reached by directly revising the optimal portfolio without upper bounds. Specifically, our analysis is based on the single-index model, as well as the general multi-index model that provides the return-generating process for securities in the arbitrage pricing theory. As demonstrated in a simulation study, the proposed algorithm for optimal portfolio selection usually requires very few iterations. Also, since our approach is developed using intuitive reasoning and simple linear algebra, we are able to provide direct and intuitive justifications for the resulting portfolio choice. Therefore this paper should be of interest to both finance academics and practitioners in portfolio management.
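The "revise the unconstrained optimum" idea can be sketched as an iterative pinning loop: compute unconstrained mean-variance weights, pin any weight violating its bound (0 below, u_i above), and re-solve for the free assets. The covariance and mean inputs are invented, and this illustrates the revision loop rather than the authors' single-index derivation.

```python
# Revising an unconstrained mean-variance portfolio to satisfy 0 <= w_i <= u_i.
import numpy as np

mu = np.array([0.08, 0.12, 0.10, 0.15])
cov = np.diag([0.04, 0.09, 0.05, 0.16]) + 0.01    # simple index-like structure
ub = np.array([0.25, 0.40, 0.25, 0.25])

w = np.zeros(4)
fixed = np.zeros(4, bool)
while True:
    free = ~fixed
    budget = 1.0 - w[fixed].sum()
    raw = np.linalg.solve(cov[np.ix_(free, free)], mu[free])  # unconstrained direction
    w[free] = budget * raw / raw.sum()                        # scale to remaining budget
    hi, lo = free & (w > ub), free & (w < 0)
    if not (hi.any() or lo.any()):
        break                                                 # all bounds satisfied
    w[hi], w[lo] = ub[hi], 0.0                                # pin violators at bounds
    fixed |= hi | lo
print("weights:", np.round(w, 3))
```

Each pass pins at least one more asset, so the loop terminates in at most n iterations, consistent with the abstract's "very few iterations" observation.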
