Similar Documents (20 results)
1.
In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for it. This paper studies nonparametric identifiability of type probabilities and type‐specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work under different assumptions on the Markov property, stationarity, and type‐invariance in the transition process. Three elements emerge as the important determinants of identification: the time‐dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case where the transition function is type‐invariant, a time‐dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types and that the changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Identification is achieved even when state dependence is present if a model is stationary first‐order Markovian and the panel has a moderate time‐dimension (T ≥ 6).
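To make the object of identification concrete, the display below states a stylized version of the mixture structure the abstract refers to, for the case of a type-invariant transition process; the notation (M types, type probabilities pi_m, type- and period-specific conditional choice probabilities) is ours, not the paper's.

```latex
% Stylized finite mixture of dynamic discrete choices with a type-invariant
% transition process: conditional on the covariate path, the observed choice
% path is a mixture of M type-specific products of conditional choice probabilities.
\[
\Pr\bigl(y_1,\dots,y_T \mid x_1,\dots,x_T\bigr)
  \;=\; \sum_{m=1}^{M} \pi_m \prod_{t=1}^{T} P_t^{m}\bigl(y_t \mid x_t\bigr),
\qquad \sum_{m=1}^{M} \pi_m = 1 .
\]
% Identification asks when the \pi_m and the components P_t^{m} are uniquely
% recovered from the left-hand side alone, e.g. with T = 3 when the covariates
% take at least M values and types respond to them sufficiently differently.
```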

2.
Incidents can be defined as low-probability, high-consequence events and lesser events of the same type. Lack of data on extremely large incidents makes it difficult to determine distributions of incident size that reflect such disasters, even though they represent the great majority of total losses. If the form of the incident size distribution can be determined, then predictive Bayesian methods can be used to assess incident risks from limited available information. Moreover, incident size distributions have generally been observed to have scale invariant, or power law, distributions over broad ranges. Scale invariance in the distributions of sizes of outcomes of complex dynamical systems has been explained based on mechanistic models of natural and built systems, such as models of self-organized criticality. In this article, scale invariance is shown to result also as the maximum Shannon entropy distribution of incident sizes arising as the product of arbitrary functions of cause sizes. Entropy is shown by simulation and derivation to be maximized as a result of dependence, diversity, abundance, and entropy of multiplicative cause sizes. The result represents an information-theoretic explanation of invariance, parallel to those of mechanistic models. For example, distributions of incident size resulting from 30 partially dependent causes are shown to be scale invariant over several orders of magnitude. Empirical validation of power law distributions of incident size is reviewed, and the Pareto (power law) distribution is validated against oil spill, hurricane, and insurance data. The applicability of the Pareto distribution, in particular, for assessment of total losses over a planning period is discussed. Results justify the use of an analytical, predictive Bayesian version of the Pareto distribution, derived previously, to assess incident risk from available data.  相似文献   
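A minimal simulation-plus-fitting sketch in the spirit of the abstract: incident sizes are generated as products of partially dependent cause sizes, and a Pareto (power-law) tail is then fitted above a high threshold. The simulation design (lognormal causes sharing a common shock), the threshold choice, and all parameter values are assumptions for illustration, not the article's model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incident sizes as products of 30 partially dependent multiplicative cause sizes;
# a common shock induces the dependence.  Purely illustrative.
n_incidents, n_causes, rho = 100_000, 30, 0.3
common = rng.standard_normal((n_incidents, 1))
idio = rng.standard_normal((n_incidents, n_causes))
log_causes = rho * common + np.sqrt(1.0 - rho**2) * idio
sizes = np.exp(log_causes.sum(axis=1))           # incident size = product of cause sizes

# Fit a Pareto (power-law) tail above a high threshold with the Hill/MLE estimator:
# for x >= x_min, the survival function is (x / x_min) ** (-alpha).
x_min = np.quantile(sizes, 0.95)                  # ad hoc threshold choice
tail = sizes[sizes >= x_min]
alpha_hat = tail.size / np.sum(np.log(tail / x_min))

# Crude scale-invariance check: the log-log CCDF of the tail should be roughly linear.
tail_sorted = np.sort(tail)
ccdf = 1.0 - np.arange(1, tail_sorted.size + 1) / tail_sorted.size
slope = np.polyfit(np.log(tail_sorted[:-1]), np.log(ccdf[:-1]), 1)[0]

print(f"Pareto tail index (MLE): {alpha_hat:.2f}")
print(f"log-log CCDF slope:      {slope:.2f}  (should be close to -alpha)")
```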

3.
Queueing models can usefully represent production systems experiencing congestion due to irregular flows, but exact analyses of these queueing models can be difficult. Thus it is natural to seek relatively simple approximations that are suitably accurate for engineering purposes. Here approximations for a basic queueing model are developed and evaluated. The model is the GI/G/m queue, which has m identical servers in parallel, unlimited waiting room, and the first-come first-served queue discipline, with service and interarrival times coming from independent sequences of independent and identically distributed random variables with general distributions. The approximations depend on the general interarrival-time and service-time distributions only through their first two moments. The main focus is on the expected waiting time and the probability of having to wait before beginning service, but approximations are also developed for other congestion measures, including the entire distributions of waiting time, queue-length and number in system. These relatively simple approximations are useful supplements to algorithms for computing the exact values that have been developed in recent years. The simple approximations can serve as starting points for developing approximations for more complicated systems for which exact solutions are not yet available. These approximations are especially useful for incorporating GI/G/m models in larger models, such as queueing networks, wherein the approximations can be components of rapid modeling tools.  相似文献   
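The abstract summarizes the approximations rather than stating them; as a generic illustration of a two-moment GI/G/m approximation of this kind, the sketch below uses the textbook Allen-Cunneen form, in which the expected M/M/m wait (via the Erlang-C formula) is scaled by the average of the interarrival and service squared coefficients of variation. This is not the paper's own refinement, and the rates, server count, and variability values in the example are arbitrary.

```python
import math

def erlang_c(m: int, a: float) -> float:
    """Probability of delay in an M/M/m queue with offered load a = lam/mu (requires a < m)."""
    terms = [a**k / math.factorial(k) for k in range(m)]
    last = a**m / (math.factorial(m) * (1.0 - a / m))
    return last / (sum(terms) + last)

def gi_g_m_wait(lam: float, mu: float, m: int, ca2: float, cs2: float) -> tuple[float, float]:
    """Two-moment (Allen-Cunneen style) approximation for the GI/G/m queue.

    lam: arrival rate, mu: service rate per server,
    ca2 / cs2: squared coefficients of variation of interarrival and service times.
    Returns (approximate expected wait in queue, approximate delay probability).
    A generic textbook approximation, not the refined formulas of the article.
    """
    a = lam / mu                        # offered load
    rho = a / m
    if rho >= 1.0:
        raise ValueError("The system must be stable (rho < 1).")
    p_wait = erlang_c(m, a)             # exact for M/M/m, used as an approximation here
    wq_mmm = p_wait / (m * mu - lam)    # M/M/m expected wait
    wq = 0.5 * (ca2 + cs2) * wq_mmm     # two-moment correction
    return wq, p_wait

# Example: 4 servers, fairly variable arrivals, nearly deterministic service.
wq, pw = gi_g_m_wait(lam=3.2, mu=1.0, m=4, ca2=1.5, cs2=0.25)
print(f"approx. E[wait] = {wq:.3f}, approx. P(wait) = {pw:.3f}")
```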

4.
This paper extends two directional distance function models, the Multi-directional Efficiency Analysis (MEA) Model and the Range Directional Model (RDM), in order to account for any type of technical inefficiency, i.e. both directional and non-directional inefficiencies. We first focus on the variable returns to scale (VRS) case, because both VRS-MEA and RDM are translation invariant models, which means that both models are able to deal with negative data. Our main result is the definition of a new comprehensive efficiency measure which is units invariant and translation invariant and covers both models. Secondly, we introduce the RDM model under constant returns to scale (CRS) together with a new comprehensive efficiency measure.

5.
This paper introduces a new methodology ensuring units invariant slack selection in radial DEA models and incorporating the slacks into an overall efficiency score. The CCR and BCC models are units invariant in their radial component, but not in their slack component, thus changing the units of measurement of one or more variables can change the models' solution. The proposed Full Proportional Slack (FPS) methodology improves the slack selections of the CCR and BCC models by producing slack selections that (a) are units invariant, thus producing fully units invariant models, (b) maximize the relative improvements represented by the slacks, and not their values, and (c) measure the full slacks that need to be removed from their corresponding variables. The FPS methodology is a fully oriented methodology first maximizing the improvements in the variables on the side of the orientation of the model. The Proportional Slack Adjusted (PSA) methodology incorporates the FPS slacks into an overall efficiency score, making it easier to interpret and use the results. The FPS and PSA methodologies are illustrated using an input oriented VRS Loan Quality DEA model with data from the retail branch network of one of Canada's largest banks.  相似文献   

6.
We consider a dynamic pricing problem that involves selling a given inventory of a single product over a short, two‐period selling season. There is insufficient time to replenish inventory during this season, hence sales are made entirely from inventory. The demand for the product is a stochastic, nonincreasing function of price. We assume interval uncertainty for demand, that is, knowledge of upper and lower bounds but not a probability distribution, with no correlation between the two periods. We minimize the maximum total regret over the two periods that results from the pricing decisions. We consider a dynamic model where the decision maker chooses the price for each period contingent on the remaining inventory at the beginning of the period, and a static model where the decision maker chooses the prices for both periods at the beginning of the first period. Both models can be solved by a polynomial time algorithm that solves systems of linear inequalities. Our computational study demonstrates that the prices generated by both our models are insensitive to errors in estimating the demand intervals. Our dynamic model outperforms our static model and two classical approaches that do not use demand probability distributions, when evaluated by maximum regret, average relative regret, variability, and risk measures. Further, our dynamic model generates a total expected revenue which closely approximates that of a maximum expected revenue approach which requires demand probability distributions.  相似文献   
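As a rough illustration of the regret criterion (not the article's polynomial-time algorithm based on systems of linear inequalities), the sketch below grid-searches the static two-period problem for a hypothetical linear demand family whose intercept is only known to lie in an interval; the demand parameters, the grids, and the starting inventory are all assumptions.

```python
import itertools
import numpy as np

# Coarse grid-search illustration of the *static* minimax-regret pricing model.
# Linear demand d = a - b*p with interval uncertainty on the intercept a is an
# assumed specification used only for illustration.
C = 100.0                                   # starting inventory
b = 2.0                                     # price sensitivity (assumed known)
a_bounds = [(60.0, 100.0), (40.0, 90.0)]    # demand-intercept intervals for periods 1 and 2
prices = np.linspace(5.0, 40.0, 36)         # candidate prices
scenarios = list(itertools.product(np.linspace(*a_bounds[0], 9),
                                   np.linspace(*a_bounds[1], 9)))

def revenue(p1, p2, a1, a2):
    d1 = max(a1 - b * p1, 0.0)
    s1 = min(d1, C)                          # period-1 sales limited by inventory
    d2 = max(a2 - b * p2, 0.0)
    s2 = min(d2, C - s1)                     # period-2 sales limited by what is left
    return p1 * s1 + p2 * s2

def best_hindsight(a1, a2):
    return max(revenue(p1, p2, a1, a2) for p1 in prices for p2 in prices)

hindsight = {s: best_hindsight(*s) for s in scenarios}

def max_regret(p1, p2):
    return max(hindsight[s] - revenue(p1, p2, *s) for s in scenarios)

best = min(((p1, p2) for p1 in prices for p2 in prices), key=lambda pp: max_regret(*pp))
print(f"static minimax-regret prices: p1 = {best[0]:.1f}, p2 = {best[1]:.1f}, "
      f"max regret = {max_regret(*best):.1f}")
```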

7.
We present a flexible and scalable method for computing global solutions of high‐dimensional stochastic dynamic models. Within a time iteration or value function iteration setup, we interpolate functions using an adaptive sparse grid algorithm. With increasing dimensions, sparse grids grow much more slowly than standard tensor product grids. Moreover, adaptivity adds a second layer of sparsity, as grid points are added only where they are most needed, for instance, in regions with steep gradients or at nondifferentiabilities. To further speed up the solution process, our implementation is fully hybrid parallel, combining distributed and shared memory parallelization paradigms, and thus permits an efficient use of high‐performance computing architectures. To demonstrate the broad applicability of our method, we solve two very different types of dynamic models: first, high‐dimensional international real business cycle models with capital adjustment costs and irreversible investment; second, multiproduct menu‐cost models with temporary sales and economies of scope in price setting.  相似文献   

8.
For simple inventory models with linear costs and stochastic demands, the technique of incremental analysis is applied to the problem of determining both the optimum number of units to stock and the associated expected profit. The cases where there are shortage costs and where reordering is possible are covered. Sensitivity analysis of optimum solutions is shown to be useful and straightforward. Problems involving cost minimization, rather than profit maximization, are discussed. The emphasis is on discrete probability distributions of demand, but the extensions to continuous probability distributions are clearly indicated.  相似文献   
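A minimal sketch of the incremental-analysis logic for the profit-maximization case with a discrete demand distribution: keep stocking the next unit while its expected incremental profit is nonnegative, which is equivalent to the critical-ratio condition P(D >= Q) >= (cost - salvage) / (price - salvage + shortage). The cost parameters and demand pmf below are illustrative, not taken from the article.

```python
import numpy as np

def optimal_stock(pmf: dict, price: float, cost: float,
                  salvage: float = 0.0, shortage: float = 0.0):
    """Incremental analysis for a single-period stocking problem with discrete demand.

    The Q-th unit is stocked as long as its expected incremental profit is nonnegative:
        (price - cost + shortage) * P(D >= Q) - (cost - salvage) * P(D <= Q - 1) >= 0.
    `pmf` maps demand values to probabilities; the parameter names are generic.
    """
    demands = np.array(sorted(pmf))
    probs = np.array([pmf[d] for d in demands])

    def p_ge(q):                         # P(D >= q)
        return float(probs[demands >= q].sum())

    q = 0
    while (price - cost + shortage) * p_ge(q + 1) - (cost - salvage) * (1.0 - p_ge(q + 1)) >= 0.0:
        q += 1

    # Expected profit at the chosen stock level q.
    sales = np.minimum(demands, q)
    leftover = np.maximum(q - demands, 0)
    short = np.maximum(demands - q, 0)
    profit = float(np.dot(probs, price * sales + salvage * leftover - shortage * short) - cost * q)
    return q, profit

pmf = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.25, 4: 0.15, 5: 0.10}
q_star, exp_profit = optimal_stock(pmf, price=10.0, cost=4.0, salvage=1.0, shortage=2.0)
print(f"optimal stock level = {q_star}, expected profit = {exp_profit:.2f}")
```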

9.
The high leverage of futures trading implies that futures markets are inherently high-risk, and energy markets have long attracted attention because of their strategic importance, so measuring risk in energy futures markets is extremely important for both investors and regulators. This paper constructs four continuous price series for Shanghai fuel oil futures that reflect different delivery horizons. Motivated by different stylized facts of financial markets, volatility is modeled with three specifications, GARCH, GJR, and FIGARCH, and dynamic Value-at-Risk (VaR) is measured under the assumption that conditional returns follow the normal, Student's t, or skewed Student's t (skst) distribution. Strict likelihood ratio (LR) tests and dynamic quantile regression (DQR) tests are then applied to backtest the reliability of the risk measures, with the aim of identifying the stylized facts that are most useful in risk management. The findings are: (1) the dynamic risk measures from volatility models based on the skst distribution are clearly more accurate than the same models under the other distributions; (2) the GJR model, which captures the leverage effect, and the FIGARCH model, which captures long memory, do not deliver higher accuracy than the plain GARCH model; and (3) contracts with longer maturities have higher average market returns, and their risk is measured more accurately than that of near-term contracts.
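The abstract contains no code; the sketch below shows the kind of calculation it describes, using the third-party `arch` package for a GARCH(1,1) fit and a hand-coded Kupiec likelihood-ratio (unconditional coverage) backtest. For brevity it uses Student-t errors and an in-sample evaluation on simulated placeholder data; the paper's preferred skewed Student-t (`dist="skewt"` in `arch`), its GJR (`o=1`) and FIGARCH variants, and its dynamic quantile test follow the same pattern.

```python
import numpy as np
from scipy import stats
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2000)                   # placeholder return series

# Fit a GARCH(1,1) with Student-t errors.
res = arch_model(returns, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")
mu, nu = res.params["mu"], res.params["nu"]
sigma_t = res.conditional_volatility                        # in-sample conditional volatility

# Dynamic 1% VaR: mu + sigma_t * (quantile of a unit-variance Student-t).
alpha = 0.01
q_std = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)
var_t = mu + sigma_t * q_std                                # negative numbers (lower tail)

violations = returns < var_t
x, n = int(violations.sum()), len(returns)

# Kupiec likelihood-ratio test of unconditional coverage (H0: violation rate = alpha).
pi_hat = x / n
ll0 = (n - x) * np.log(1 - alpha) + x * np.log(alpha)
ll1 = (n - x) * np.log(1 - pi_hat) + (x * np.log(pi_hat) if x > 0 else 0.0)
lr_uc = -2.0 * (ll0 - ll1)
p_value = 1.0 - stats.chi2.cdf(lr_uc, df=1)
print(f"violations: {x}/{n}, LR_uc = {lr_uc:.2f}, p-value = {p_value:.3f}")
```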

10.
This paper develops a simple approximation method for computing equilibrium portfolios in dynamic general equilibrium open economy macro‐models. The method is widely applicable, simple to implement, and gives analytical solutions for equilibrium portfolio positions in any combination or types of asset. It can be used in models with any number of assets, whether markets are complete or incomplete, and can be applied to stochastic dynamic general equilibrium models of any dimension, so long as the model is amenable to a solution using standard approximation methods. We first illustrate the approach using a simple two‐asset endowment economy model, and then show how the results extend to the case of any number of assets and general economic structure.  相似文献   

11.
Most inventory and production planning models in the academic literature treat lead times either as constants or random variables with known distributions outside of management control. However, a number of recent articles in the popular press have argued that reducing lead times is a dominant issue in manufacturing strategy. The benefits of reducing customer lead times that are frequently cited include increased customer demand, improved quality, reduced unit cost, lower carrying cost, shorter forecast horizon, less safety stock inventory, and better market position. Although the costs of reducing lead times in the long term may be relatively insignificant compared with the benefits, in the short term these costs can have a significant impact on the profitability of a firm. This article develops a conceptual framework within which the costs and benefits of lead time reduction can be compared. Mathematical models for optimal lead time reduction are developed within this framework. The solutions to these models provide methods for calculating optimal lead times, which can be applied in practice. Sensitivity analysis of the optimal solutions provides insight into the structure of these solutions.  相似文献   
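As a concrete, purely hypothetical instance of the trade-off such a framework formalizes, the sketch below minimizes the sum of a safety-stock holding cost that grows with the lead time and an assumed convex cost of sustaining a short lead time; every functional form and number is an assumption used only to show how an optimal lead time could be computed, not the article's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# All functional forms and values below are illustrative assumptions.
h = 2.0                  # annual holding cost per unit of safety stock
z = norm.ppf(0.95)       # safety factor for an assumed 95% cycle service level
sigma_d = 40.0           # standard deviation of weekly demand
kappa = 800.0            # assumed convex crashing cost: sustaining lead time L costs kappa / L per year

def annual_cost(L: float) -> float:
    """Safety-stock holding cost (grows with sqrt(L)) plus the cost of sustaining a short lead time."""
    safety_stock = z * sigma_d * np.sqrt(L)
    return h * safety_stock + kappa / L

opt = minimize_scalar(annual_cost, bounds=(0.5, 8.0), method="bounded")
print(f"optimal lead time ≈ {opt.x:.2f} weeks, annual cost ≈ {opt.fun:.0f}")
```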

12.
Traditional approaches in inventory control first estimate the demand distribution among a predefined family of distributions based on data fitting of historical demand observations, and then optimize the inventory control using the estimated distributions. These approaches often lead to fragile solutions whenever the preselected family of distributions was inadequate. In this article, we propose a minimax robust model that integrates data fitting and inventory optimization for the single‐item multi‐period periodic review stochastic lot‐sizing problem. In contrast with the standard assumption of given distributions, we assume that histograms are part of the input. The robust model generalizes the Bayesian model, and it can be interpreted as minimizing history‐dependent risk measures. We prove that the optimal inventory control policies of the robust model share the same structure as the traditional stochastic dynamic programming counterpart. In particular, we analyze the robust model based on the chi‐square goodness‐of‐fit test. If demand samples are obtained from a known distribution, the robust model converges to the stochastic model with true distribution under generous conditions. Its effectiveness is also validated by numerical experiments.  相似文献   
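A single-period, histogram-based sketch of the minimax idea behind the article's chi-square-based model: an order quantity is evaluated against the worst demand pmf that a chi-square goodness-of-fit test would not reject. The article's model is multi-period and comes with structural and convergence results; the support, histogram, costs, and confidence level below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

support = np.arange(0, 11)                                   # possible demand values 0..10
counts = np.array([1, 2, 4, 8, 12, 15, 14, 10, 6, 5, 3])     # observed histogram (illustrative)
n = counts.sum()
p_hat = counts / n
radius = chi2.ppf(0.95, df=len(support) - 1)                 # chi-square acceptance threshold

h, b = 1.0, 4.0                                              # holding and backorder cost per unit

def expected_cost(q_pmf: np.ndarray, Q: float) -> float:
    over = np.maximum(Q - support, 0.0)
    under = np.maximum(support - Q, 0.0)
    return float(np.dot(q_pmf, h * over + b * under))

def worst_case_cost(Q: float) -> float:
    """Maximize expected cost over pmfs q whose chi-square statistic vs. the histogram is <= radius."""
    cons = [{"type": "eq", "fun": lambda q: q.sum() - 1.0},
            {"type": "ineq",
             "fun": lambda q: radius - n * np.sum((p_hat - q) ** 2 / np.maximum(q, 1e-9))}]
    res = minimize(lambda q: -expected_cost(q, Q), p_hat,
                   bounds=[(0.0, 1.0)] * len(support), constraints=cons, method="SLSQP")
    return -res.fun

costs = {int(Q): worst_case_cost(Q) for Q in support}
Q_star = min(costs, key=costs.get)
print(f"robust order quantity = {Q_star}, worst-case expected cost = {costs[Q_star]:.2f}")
```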

13.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists in constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: If a lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence, the approximation is inaccurate. Our lower‐bound error analysis is complementary to the conventional upper‐error (worst‐case) bound analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess approximation in the first‐ and second‐order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter model under some empirically relevant parameterizations.  相似文献   

14.

We study the optimal flow control for a manufacturing system subject to random failures and repairs. In most previous work, it has been proved that, for constant demand rates and exponential failure and repair time distributions of machines, the hedging point policy is optimal. The aim of this study is to extend the hedging point policy to models with non-exponential failure and repair time distributions and random demand rates. The performance measure is the cost related to the inventory and backorder penalties. We find that the structure of the hedging point policy can be parametrized by a single factor representing the critical stock level or threshold. With the corresponding hedging point policy, simulation experiments are used to construct input-output data from which an estimation of the incurred cost function is obtained through a regression analysis. The best parameter value of the related hedging point policy is derived from a minimum search of the obtained cost function. The extended hedging point policy is validated and shown to be quite effective. We find that the hedging point policy is also applicable to a wide variety of complex problems (i.e. non-exponential failure and repair time distributions and random demand rates), where analytical solutions may not be easily obtained.
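A compact sketch of the procedure the abstract describes (simulate the system under a hedging point, regress the simulated cost on the threshold, and take the minimizer of the fitted curve), written for a single machine with exponential up and down times and constant demand purely for brevity; all rates and costs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

d, u_max = 1.0, 1.5            # demand rate, maximum production rate
mtbf, mttr = 20.0, 4.0         # mean time between failures / mean time to repair
h, b = 1.0, 10.0               # inventory holding and backorder cost rates
horizon, dt = 10_000.0, 0.05   # simulation length and time step

def simulated_cost(z: float) -> float:
    """Average cost per unit time under a hedging point z (discrete-time approximation)."""
    x, up, cost, t = 0.0, True, 0.0, 0.0
    while t < horizon:
        if up:
            rate = u_max if x < z else d          # produce at capacity below the threshold, else track demand
            if rng.random() < dt / mtbf:          # Bernoulli approximation of an exponential failure time
                up = False
        else:
            rate = 0.0
            if rng.random() < dt / mttr:          # Bernoulli approximation of an exponential repair time
                up = True
        x += (rate - d) * dt
        cost += (h * max(x, 0.0) + b * max(-x, 0.0)) * dt
        t += dt
    return cost / horizon

zs = np.arange(0.0, 20.0, 2.0)
costs = np.array([simulated_cost(z) for z in zs])

# Fit a quadratic response surface to the simulated costs and take its minimizer.
a2, a1, a0 = np.polyfit(zs, costs, 2)
z_star = float(np.clip(-a1 / (2 * a2), zs.min(), zs.max()))
print(f"estimated optimal hedging point z* ≈ {z_star:.1f}")
```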

15.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross‐section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high‐dimension nuisance parameter. We adopt a “fixed‐effects” approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least‐squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross‐products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property. Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average—with respect to the prior distribution for (γ, δ, ρ)—of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random‐effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

16.
Achieving minimum staffing costs, maximum employee satisfaction with their assigned schedules, and acceptable levels of service are important but potentially conflicting objectives when scheduling service employees. Existing employee scheduling models, such as tour scheduling or general employee scheduling, address at most two of these criteria. This paper describes a heuristic to improve tour scheduling solutions provided by other procedures, and generate a set of equivalent cost feasible alternatives. These alternatives allow managers to identify solutions with attractive secondary characteristics, such as overall employee satisfaction with their assigned tours or consistent employee workloads and customer response times. Tests with both full-time and mixed work force problems reveal the method improves most nonoptimal initial heuristic solutions. Many of the alternatives generated had more even distributions of surplus staff than the initial solutions, yielding more consistent customer response times and employee workloads. The likelihood of satisfying employee scheduling preferences may also be increased since each alternative provides a different deployment of employees among the available schedules.  相似文献   

17.
ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy-tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.

18.
Building on a study of the classical and modified Levy tempered stable distributions, and taking into account the empirical characteristics of financial asset return distributions, this paper analyzes the advantages of Levy tempered stable distributions in constructing Levy jump models that simulate financial asset price processes. Because the probability density functions of these distributions have no closed form, direct parameter estimation with the traditional MLE approach is difficult. Exploiting the equivalence between the characteristic function and the probability density function, the paper therefore develops a parameter estimation method for Levy tempered stable distributions based on GMM with continuous moment conditions built on the characteristic function (CF), abbreviated CF-CGMM. The distributions and the estimation method are examined empirically with data on the Hang Seng Index, the Shanghai Composite Index, and the S&P 500 Index, and the goodness of fit of the different Levy tempered stable distributions is tested and compared on the basis of the parameter estimates and statistical hypothesis tests. Building on the estimation and testing results, and drawing on the meaning of the individual parameters of the Levy tempered stable models, the paper also offers a realistic interpretation of the price dynamics of the Hang Seng, Shanghai Composite, and S&P 500 indices.
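A heavily simplified stand-in for the estimation idea: match the empirical characteristic function of a return sample to the CGMY (tempered stable) characteristic function on a finite frequency grid by least squares. The article's CF-CGMM uses a continuum of moment conditions with optimal weighting and real index data; here the data are simulated placeholders, the frequency grid and starting values are arbitrary, and the CGMY parameterization shown is one standard form.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

rng = np.random.default_rng(3)
returns = 0.01 * rng.standard_t(df=4, size=3000)       # placeholder daily log returns

def cgmy_cf(u: np.ndarray, C: float, G: float, M: float, Y: float, t: float = 1.0):
    """CGMY characteristic function, one standard parameterization (0 < Y < 2, Y != 1)."""
    psi = C * gamma(-Y) * ((M - 1j * u) ** Y - M ** Y + (G + 1j * u) ** Y - G ** Y)
    return np.exp(t * psi)

u_grid = np.linspace(0.5, 60.0, 60)                     # frequency grid (an assumption)
emp_cf = np.exp(1j * np.outer(u_grid, returns)).mean(axis=1)

def loss(theta):
    C, G, M, Y = theta
    if C <= 0 or G <= 0 or M <= 0 or not (0.0 < Y < 2.0) or abs(Y - 1.0) < 1e-3:
        return 1e6                                      # keep the search inside the valid region
    return float(np.sum(np.abs(emp_cf - cgmy_cf(u_grid, C, G, M, Y)) ** 2))

res = minimize(loss, x0=np.array([0.01, 100.0, 100.0, 0.8]), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-10})
C_hat, G_hat, M_hat, Y_hat = res.x
print(f"C={C_hat:.4f}, G={G_hat:.1f}, M={M_hat:.1f}, Y={Y_hat:.2f}")
```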

19.
This paper develops theoretical foundations for an error analysis of approximate equilibria in dynamic stochastic general equilibrium models with heterogeneous agents and incomplete financial markets. While there are several algorithms that compute prices and allocations for which agents' first‐order conditions are approximately satisfied (“approximate equilibria”), there are few results on how to interpret the errors in these candidate solutions and how to relate the computed allocations and prices to exact equilibrium allocations and prices. We give a simple example to illustrate that approximate equilibria might be very far from exact equilibria. We then interpret approximate equilibria as equilibria for close‐by economies; that is, for economies with close‐by individual endowments and preferences. We present an error analysis for two models that are commonly used in applications, an overlapping generations (OLG) model with stochastic production and an asset pricing model with infinitely lived agents. We provide sufficient conditions that ensure that approximate equilibria are close to exact equilibria of close‐by economies. Numerical examples illustrate the analysis.  相似文献   

20.
Pandu R. Tadikamalla, Omega, 1984, 12(6): 575-581
Several distributions have been used for approximating the lead time demand distribution in inventory systems. We compare five distributions, the normal, the logistic, the lognormal, the gamma, and the Weibull, for obtaining the expected number of back orders, the reorder levels that achieve a given protection level, and the optimal order quantity and reorder level in continuous review models of the (Q, r) type. The normal and the logistic distributions are inadequate to represent situations where the coefficient of variation (the ratio of the standard deviation to the mean) of the lead time demand distribution is large. The lognormal, the gamma, and the Weibull distributions are versatile and adequate; however, the lognormal seems to be a viable candidate because of its computational simplicity.
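A sketch of the comparison the abstract describes: match the first two moments of a lead time demand with a given mean and coefficient of variation to each of the five candidate distributions, then compute the reorder level for a chosen protection level and the expected back orders. The mean, CV, and 95% protection level are illustrative values, not those of the article.

```python
import numpy as np
from scipy import stats, integrate, optimize
from scipy.special import gamma as gamma_fn

mu, cv, protection = 500.0, 0.6, 0.95
sigma = cv * mu

def weibull_shape(cv: float) -> float:
    """Solve Gamma(1+2/k) / Gamma(1+1/k)^2 - 1 = cv^2 for the Weibull shape k."""
    f = lambda k: gamma_fn(1 + 2 / k) / gamma_fn(1 + 1 / k) ** 2 - 1 - cv ** 2
    return optimize.brentq(f, 0.3, 20.0)

s_ln = np.sqrt(np.log(1 + cv ** 2))                 # lognormal log-scale standard deviation
k_w = weibull_shape(cv)

# Each candidate distribution is fitted by matching its mean and variance.
dists = {
    "normal":    stats.norm(loc=mu, scale=sigma),
    "logistic":  stats.logistic(loc=mu, scale=sigma * np.sqrt(3) / np.pi),
    "lognormal": stats.lognorm(s=s_ln, scale=mu * np.exp(-0.5 * s_ln ** 2)),
    "gamma":     stats.gamma(a=1 / cv ** 2, scale=mu * cv ** 2),
    "weibull":   stats.weibull_min(c=k_w, scale=mu / gamma_fn(1 + 1 / k_w)),
}

for name, d in dists.items():
    r = d.ppf(protection)                           # reorder level for the protection level
    ebo, _ = integrate.quad(lambda x: (x - r) * d.pdf(x), r, np.inf)  # expected back orders
    print(f"{name:9s}  reorder level = {r:8.1f}   E[back orders] = {ebo:7.2f}")
```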
