Related Literature
1.
A model of a production process that uses an unscheduled set-up policy and fraction-defective control charts to control current production is developed, taking into consideration all the relevant costs: the cost of sampling, the cost of not detecting a change in the process, the cost of a false indication of change, and the cost of re-adjusting to detected changes. The model is based on the expected time between detections of changes calling for set-ups. It is shown that the combination of unscheduled set-ups and control charts can be used optimally by choosing the sample size, sampling interval, and width of the control limits about the process average that minimize the expected total cost per unit of time. The cost of a production process with unscheduled set-ups controlled by the appropriate optimal control charts is compared to the cost of a process using scheduled set-ups at optimal intervals in conjunction with its appropriate control charts. This comparison yields criteria for choosing scheduled set-ups at optimal intervals over unscheduled set-ups. Suggestions are made for evaluating the optimal process set-up strategy and the accompanying decision parameters, for any specific cost data, by computer enumeration.
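As a rough illustration of the enumeration idea, the sketch below grid-searches the sample size n, sampling interval h, and control-limit width k of a fraction-defective chart under a simplified Duncan-style cycle-cost model (Poisson-timed shifts, a single out-of-control level, an upper limit only). The cost structure and every parameter value are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import binom

# Illustrative (assumed) parameters -- not taken from the paper.
p0, p1 = 0.02, 0.08        # in-control / out-of-control fraction defective
lam = 0.02                 # shifts per hour (exponential time until a shift)
c_sample = 0.5             # cost per inspected unit
c_false = 50.0             # cost per false alarm
c_out = 100.0              # cost per hour of undetected out-of-control running
c_setup = 25.0             # cost of re-adjusting the process after detection

def expected_cost_per_hour(n, h, k):
    """Simplified Duncan-style expected cost per hour for a p-chart design (n, h, k)."""
    sigma0 = np.sqrt(p0 * (1 - p0) / n)
    ucl = p0 + k * sigma0
    # Probability a sample signals, in and out of control (binomial, upper limit only).
    alpha = 1.0 - binom.cdf(np.floor(ucl * n), n, p0)
    power = 1.0 - binom.cdf(np.floor(ucl * n), n, p1)
    if power <= 1e-6:
        return np.inf
    in_control_time = 1.0 / lam            # expected time until a shift occurs
    detect_delay = h / power               # expected time from shift to signal (approx.)
    cycle = in_control_time + detect_delay
    cost = (c_sample * n * cycle / h                    # sampling
            + c_false * alpha * in_control_time / h     # false alarms while in control
            + c_out * detect_delay                      # running out of control
            + c_setup)                                  # one re-adjustment per cycle
    return cost / cycle

grid = [(n, h, k) for n in (25, 50, 100, 200)
                  for h in (0.5, 1, 2, 4, 8)
                  for k in (2.0, 2.5, 3.0, 3.5)]
best = min(grid, key=lambda d: expected_cost_per_hour(*d))
print("best (n, h, k):", best, "cost/hour: %.2f" % expected_cost_per_hour(*best))
```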

2.
Local Linear Generalized Method of Moments Estimation for Nonparametric Simultaneous Equations Models in Econometrics
Simultaneous equations models play an important role in economic policy making, economic structure analysis, and economic forecasting. Under a random design (all variables in the model are random), this paper proposes a local linear generalized method of moments estimator for nonparametric simultaneous equations models in econometrics and, using the law of large numbers and the central limit theorem from probability theory, studies its large-sample properties at interior points, proving its consistency and asymptotic normality. Its rate of convergence at interior points attains the optimal rate for nonparametric function estimation.

3.
In this paper, the optimal management of service requests in general networks is formulated as a control problem for a finite number of multiserver loss queues with Markovian routing. This type of problem arises in a wide range of fields, e.g., manufacturing, storage facilities, computer networks, and communication systems. Using the inductive approach of dynamic programming, the optimal admission control can be shown to be a function of the number of service requests in progress. For a large-scale network, however, computing the optimal control policy may be infeasible because the state space involves all stations in the network. Hence, the idea of bottleneck modeling is borrowed to compute a near-optimal admission control policy: the loss network is reduced in scale, and the difference between the original and reduced models is decreased by compensating the system parameters. A novel method is proposed for computing this compensation. Numerical results show that the near-optimal control policy performs close to the optimal policy.

4.
Jumps are an important feature of the dynamics of short-term interest rates, and jump-diffusion models describe short-rate behavior better than pure diffusions. This paper applies nonparametric threshold estimation to a jump-diffusion model of the short rate in both simulation experiments and an empirical analysis. The simulations show that threshold estimation effectively removes the bias that traditional nonparametric estimation introduces for jump-diffusion models, and the resulting parameter estimates are unbiased. An empirical analysis of the Shanghai Interbank Offered Rate (Shibor) finds that threshold estimation effectively detects jumps in the short-term Shibor, and that these jumps are consistent with macro- and micro-level economic and financial events. Finally, an empirical comparison with a pure diffusion model shows that the threshold-estimated jump-diffusion model fits the skewness and kurtosis of the short-rate distribution better.
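As a minimal sketch of the threshold idea (not the paper's estimator or data), the code below simulates a jump-diffusion short-rate path, flags as jumps the increments whose absolute value exceeds a threshold of the form c·Δ^0.49, and re-estimates the diffusion (volatility) coefficient from the non-jump increments only. The rate model, threshold constant, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Vasicek-type short rate with compound-Poisson jumps (assumed parameters).
T, n = 5.0, 5000
dt = T / n
kappa, theta, sigma = 0.8, 0.03, 0.01     # mean reversion, long-run mean, diffusion vol
jump_rate, jump_scale = 2.0, 0.01         # jumps per year, jump-size std dev

r = np.empty(n + 1)
r[0] = theta
for i in range(n):
    jump = rng.normal(0, jump_scale) if rng.random() < jump_rate * dt else 0.0
    r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + sigma * np.sqrt(dt) * rng.normal() + jump

dr = np.diff(r)

# Threshold: increments larger than c * dt**0.49 are treated as containing a jump.
# (The constant uses the true sigma here purely for illustration.)
threshold = 4 * sigma * dt**0.49
is_jump = np.abs(dr) > threshold

# Realized-volatility style estimates of sigma, with and without the threshold.
sigma_naive = np.sqrt(np.sum(dr**2) / T)                 # biased upward by jumps
sigma_thresh = np.sqrt(np.sum(dr[~is_jump]**2) / T)      # jump-robust

print(f"flagged jumps: {is_jump.sum()}")
print(f"sigma  true={sigma:.4f}  naive={sigma_naive:.4f}  threshold={sigma_thresh:.4f}")
```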

5.
In an effort to improve the small sample properties of generalized method of moments (GMM) estimators, a number of alternative estimators have been suggested. These include empirical likelihood (EL), continuous updating, and exponential tilting estimators. We show that these estimators share a common structure, being members of a class of generalized empirical likelihood (GEL) estimators. We use this structure to compare their higher order asymptotic properties. We find that GEL has no asymptotic bias due to correlation of the moment functions with their Jacobian, eliminating an important source of bias for GMM in models with endogeneity. We also find that EL has no asymptotic bias from estimating the optimal weight matrix, eliminating a further important source of bias for GMM in panel data models. We give bias corrected GMM and GEL estimators. We also show that bias corrected EL inherits the higher order property of maximum likelihood, that it is higher order asymptotically efficient relative to the other bias corrected estimators.

6.
We examine challenges to estimation and inference when the objects of interest are nondifferentiable functionals of the underlying data distribution. This situation arises in a number of applications of bounds analysis and moment inequality models, and in recent work on estimating optimal dynamic treatment regimes. Drawing on earlier work relating differentiability to the existence of unbiased and regular estimators, we show that if the target object is not differentiable in the parameters of the data distribution, there exist no estimator sequences that are locally asymptotically unbiased or α-quantile unbiased. This places strong limits on estimators, bias correction methods, and inference procedures, and provides motivation for considering other criteria for evaluating estimators and inference procedures, such as local asymptotic minimaxity and one-sided quantile unbiasedness.

7.
Non-normality has a significant effect on the performance of control charts for averages. The design considerations for a control chart for averages must include recognition of the degree of non-normality of the underlying data. The performance of a control chart may be judged on its ability to correctly identify the probabilities of assignable causes of variation and chance causes of variation in a process. This paper examines the effects of non-normality, as measured by skewness and kurtosis, on the performance, and hence the design, of control charts for averages and provides an alternative method of designing charts for averages of data with non-normal distributions.
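To make the non-normality effect concrete, the short simulation below (an illustrative sketch, not the paper's design method) compares the in-control false-alarm rate of a conventional 3-sigma chart for averages when the underlying data are normal versus markedly skewed gamma data with the same mean and variance. Subgroup sizes and the choice of a gamma alternative are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_alarm_rate(draw, n=5, n_samples=200_000):
    """Fraction of subgroup means outside mu +/- 3*sigma/sqrt(n) while in control."""
    x = draw((n_samples, n))
    means = x.mean(axis=1)
    mu, sigma = 0.0, 1.0                      # both populations are standardized below
    limit = 3 * sigma / np.sqrt(n)
    return np.mean(np.abs(means - mu) > limit)

# Normal(0, 1) versus a standardized gamma (shape=1 => skewness 2, excess kurtosis 6).
shape = 1.0
normal = lambda size: rng.normal(0.0, 1.0, size)
gamma = lambda size: (rng.gamma(shape, 1.0, size) - shape) / np.sqrt(shape)

for name, draw in [("normal", normal), ("skewed gamma", gamma)]:
    for n in (4, 10, 25):
        print(f"{name:13s} n={n:2d}  false-alarm rate = {false_alarm_rate(draw, n):.4f}")
```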

8.
We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the forward (conventional) 2SLS estimator of the coefficient of the right-hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter obtained when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test checks whether the resulting difference between the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second-order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar-type estimators are not significantly different, we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar-type estimators (to second order). We also demonstrate the high degree of similarity for k-class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
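The toy simulation below (an illustrative sketch; it computes only the two point estimates, not the authors' test statistic) contrasts the forward 2SLS estimate with the reverse 2SLS estimate obtained by swapping the normalization and inverting, in an over-identified design. With strong instruments the two are close, and the gap widens as instruments weaken. The data-generating process and instrument strengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def tsls(y, x, Z):
    """2SLS of y on a single endogenous regressor x with instrument matrix Z."""
    Pz_x = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # first-stage fitted values
    return float(Pz_x @ y / (Pz_x @ x))

n, beta = 500, 0.5
for pi in (1.0, 0.2, 0.05):                  # instrument strength: strong -> weak
    Z = rng.normal(size=(n, 3))
    u = rng.normal(size=n)
    x = Z @ np.full(3, pi) + u + rng.normal(size=n)     # endogenous regressor
    y = beta * x + u                                     # structural equation
    forward = tsls(y, x, Z)
    reverse = 1.0 / tsls(x, y, Z)            # reverse normalization, then invert
    print(f"pi={pi:4.2f}  forward={forward:6.3f}  reverse={reverse:6.3f}  "
          f"gap={abs(forward - reverse):.3f}")
```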

9.
D. Sculli, K. M. Woo. Omega, 1982, 10(6): 679-687
This paper presents the results of a simulation approach to the design of economic attribute control charts. The approach differs from the classical studies reported in the literature in two respects. First, the common assumption that the out-of-control state of a process remains unchanged until detection is relaxed; the average proportion of defectives is allowed to deteriorate further before the out-of-control state is detected. Second, no detailed information is required on the exact behaviour of the process in the out-of-control state: it is assumed that a multitude of out-of-control causes exist and that the proportion defective is equally likely to take any value within a specified range. Relaxing these two assumptions makes the model realistic and gives it wide practical application in the control of a manufacturing process. The simulation programs developed are sufficiently compact to run on low-cost microcomputers.
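A minimal sketch of the two relaxed assumptions (illustrative only; the chart parameters, the uniform range, the deterioration rule, and the performance measure are assumptions, not the paper's model): after the shift, the proportion defective is drawn uniformly from a range and then keeps drifting upward each sampling interval until a sample signals.

```python
import numpy as np

rng = np.random.default_rng(3)

p0, n, k = 0.02, 100, 3.0                        # in-control level, sample size, limit width
ucl = p0 + k * np.sqrt(p0 * (1 - p0) / n)        # upper control limit for the sample fraction
p_range = (0.03, 0.10)                           # out-of-control p ~ Uniform(p_range)
drift = 1.05                                     # p grows 5% per interval until detection

def detection_delay():
    """Number of sampling intervals from the shift until the chart signals."""
    p = rng.uniform(*p_range)
    intervals = 0
    while True:
        intervals += 1
        if rng.binomial(n, p) / n > ucl:
            return intervals
        p = min(drift * p, 1.0)                  # process keeps deteriorating

delays = np.array([detection_delay() for _ in range(20_000)])
print(f"mean delay = {delays.mean():.2f} intervals, "
      f"90th percentile = {np.percentile(delays, 90):.0f}")
```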

10.
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study the use of corresponding nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits' small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty.
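A bare-bones sketch of the isotonic-regression route to a BMD (illustrative; the dose-response data, the 10% extra-risk benchmark, and the simple binomial bootstrap are assumptions, and the paper's exact procedure may differ):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)

# Hypothetical quantal dose-response data: dose, number of subjects, number responding.
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n_sub = np.array([50, 50, 50, 50, 50, 50])
n_resp = np.array([2, 3, 5, 9, 18, 33])
bmr = 0.10                                    # benchmark response: 10% extra risk

def bmd_isotonic(n_resp, n_sub, dose, bmr):
    """Isotonic fit of response probabilities, then invert extra risk at the BMR."""
    p_hat = IsotonicRegression(increasing=True).fit(
        dose, n_resp / n_sub, sample_weight=n_sub).predict(dose)
    extra = (p_hat - p_hat[0]) / (1.0 - p_hat[0])         # extra risk over background
    if extra.max() < bmr:
        return np.nan
    return float(np.interp(bmr, extra, dose))             # linear inversion between doses

bmd = bmd_isotonic(n_resp, n_sub, dose, bmr)

# Binomial bootstrap: resample responses per dose group, refit, take a lower percentile.
boot = [bmd_isotonic(rng.binomial(n_sub, n_resp / n_sub), n_sub, dose, bmr)
        for _ in range(2000)]
bmdl = np.nanpercentile(boot, 5)
print(f"BMD = {bmd:.3f}, bootstrap BMDL (5th percentile) = {bmdl:.3f}")
```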

11.
The problem of estimating steady state absorption probabilities for first order stationary Markov chains having a finite state space is examined. As model parameters, these probabilities are analytic functions of the transition probabilities Q and R, and they can be represented as P = (I − Q)⁻¹R. Estimators may be obtained by replacing the transition probabilities by their maximum likelihood estimators Q̂ and R̂ under multinomial theory. Using large sample multivariate normal theory, one can derive the asymptotic distribution of these estimators and obtain large sample confidence intervals. Finally, an application related to estimating loss reserves for an installment loan portfolio assumed to satisfy a Markov chain is discussed.
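A small sketch of the plug-in estimator with hypothetical loan-portfolio data: estimate Q and R from observed transition counts and compute P = (I − Q)⁻¹R. The state labels and counts are made up, and a simple multinomial bootstrap stands in for the paper's large-sample normal-theory intervals.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical transition counts for a loan-portfolio chain.
# Transient states: current, 30-days late, 60-days late.  Absorbing: paid-off, default.
states_t = ["current", "late30", "late60"]
states_a = ["paid", "default"]
counts = np.array([            # rows: from transient state; cols: [transient..., absorbing...]
    [800,  60,   0, 135,   5],   # from current
    [120,  60,  30,  10,  20],   # from late30
    [ 10,  30,  40,   5,  35],   # from late60
])

def absorption_probs(counts, n_transient):
    """Plug-in estimator P = (I - Q)^(-1) R from a matrix of transition counts."""
    probs = counts / counts.sum(axis=1, keepdims=True)   # multinomial MLE, row by row
    Q = probs[:, :n_transient]
    R = probs[:, n_transient:]
    return np.linalg.solve(np.eye(n_transient) - Q, R)

P_hat = absorption_probs(counts, len(states_t))
print("estimated absorption probabilities (rows: start state, cols: paid/default):")
print(np.round(P_hat, 3))

# Parametric bootstrap (multinomial resampling of each row) for rough confidence intervals.
boot = np.array([absorption_probs(
            np.array([rng.multinomial(row.sum(), row / row.sum()) for row in counts]),
            len(states_t))
        for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% interval for P(default | start=current): "
      f"[{lo[0, 1]:.3f}, {hi[0, 1]:.3f}]")
```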

12.
Weng Kee Wong. Risk Analysis, 2011, 31(12): 1949-1960
Hormesis is a widely observed phenomenon in many branches of the life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or when there is model uncertainty with a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin in many situations.

13.
A model is developed for evaluating alternative control systems for an ongoing managerial process in terms of expected contribution per unit time under conditions of imperfect information. A variety of process failure distributions and economic characteristics are accepted by the model. An example illustrates the versatility of this approach for comparing alternative systems and for performing sensitivity analysis on process and control system parameters.

14.
Bayesian Monte Carlo (BMC) decision analysis adopts a sampling procedure to estimate likelihoods and distributions of outcomes, and then uses that information to calculate the expected performance of alternative strategies, the value of information, and the value of including uncertainty. These decision analysis outputs are therefore subject to sample error. The standard error of each estimate and its bias, if any, can be estimated by the bootstrap procedure. The bootstrap operates by resampling (with replacement) from the original BMC sample, and redoing the decision analysis. Repeating this procedure yields a distribution of decision analysis outputs. The bootstrap approach to estimating the effect of sample error upon BMC analysis is illustrated with a simple value-of-information calculation along with an analysis of a proposed control structure for Lake Erie. The examples show that the outputs of BMC decision analysis can have high levels of sample error and bias.
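A compact sketch of the bootstrap-the-decision-analysis idea (purely illustrative: the two-action decision problem, payoff functions, and prior are assumptions, not the Lake Erie analysis): draw a BMC sample, compute the expected value of perfect information (EVPI), then resample the same draws with replacement and redo the calculation to gauge the sample error of that estimate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical two-action decision problem under uncertainty about theta.
theta = rng.lognormal(mean=0.0, sigma=0.75, size=2000)     # BMC sample of the uncertain input

def payoffs(theta):
    """Payoff of each action as a function of the uncertain parameter (columns = actions)."""
    return np.column_stack([10.0 - 2.0 * theta,             # act: build control structure
                            8.0 - 0.5 * theta])              # act: do nothing

def evpi(theta):
    """Expected value of perfect information from a Monte Carlo sample."""
    u = payoffs(theta)
    return u.max(axis=1).mean() - u.mean(axis=0).max()

point = evpi(theta)

# Bootstrap: resample the BMC draws with replacement and redo the decision analysis.
boot = np.array([evpi(rng.choice(theta, size=theta.size, replace=True))
                 for _ in range(2000)])
print(f"EVPI estimate = {point:.3f}")
print(f"bootstrap standard error = {boot.std(ddof=1):.3f}, bias = {boot.mean() - point:+.3f}")
```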

15.
We adapt the expectation-maximization algorithm to incorporate unobserved heterogeneity into conditional choice probability (CCP) estimators of dynamic discrete choice problems. The unobserved heterogeneity can be time-invariant or follow a Markov chain. By developing a class of problems where the difference in future value terms depends on a few conditional choice probabilities, we extend the class of dynamic optimization problems where CCP estimators provide a computationally cheap alternative to full solution methods. Monte Carlo results confirm that our algorithms perform quite well, both in terms of computational time and in the precision of the parameter estimates.

16.
Ilias S. Kevork. Omega, 2010, 38(3-4): 218-227
The paper considers the classical single-period inventory model, also known as the Newsboy Problem, with demand normally distributed and fully observed in successive inventory cycles. The extent of applicability of such a model to inventory management depends upon demand estimation. Appropriate estimators for the optimal order quantity and the maximum expected profit are developed. The statistical properties of the two estimators are explored for both small and large samples, analytically and through Monte Carlo simulations. For small samples, both estimators are biased. The form of the distribution of the optimal order quantity estimator depends upon the critical fractile, while the distribution of the maximum expected profit estimator is always left-skewed. Small-sample properties of the estimators indicate that, when the critical fractile is set above one half, the optimal order quantity is underestimated and the maximum expected profit is overestimated with probability over 50%, whereas the probability of overestimating both quantities again exceeds 50% when the critical fractile is below one half. For large samples, based on the asymptotic properties of the two estimators, confidence intervals are derived for the corresponding true population values. The validity of these confidence intervals in small samples is tested through appropriate Monte Carlo simulations. In small samples the intervals attain acceptable confidence levels, but when the unit shortage cost is high, significant reductions in precision and stability are observed for the maximum expected profit interval.
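A short simulation sketch of the estimation step. The cost figures and demand parameters are assumed, and the estimator shown, the plug-in order quantity μ̂ + z_R·σ̂ for critical fractile R, follows the standard normal-demand newsvendor formula rather than anything specific to the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

mu, sigma = 100.0, 20.0          # true (unknown) demand distribution
price, cost = 10.0, 6.0          # unit selling price and purchase cost (no salvage/shortage cost)
R = (price - cost) / price       # critical fractile (here 0.4; set cost=3.0 to get R > 0.5)
Q_true = mu + norm.ppf(R) * sigma

def plug_in_Q(sample):
    """Plug-in estimator of the optimal order quantity from observed demands."""
    return sample.mean() + norm.ppf(R) * sample.std(ddof=1)

n, reps = 10, 20_000             # small sample of past demand observations
Q_hat = np.array([plug_in_Q(rng.normal(mu, sigma, n)) for _ in range(reps)])

print(f"critical fractile R = {R:.2f}, true Q* = {Q_true:.2f}")
print(f"mean(Q_hat) = {Q_hat.mean():.2f}  (small-sample bias = {Q_hat.mean() - Q_true:+.2f})")
print(f"P(Q_hat > Q*) = {np.mean(Q_hat > Q_true):.3f}")
```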

17.
Application of Weighted Composite Quantile Regression to Dynamic VaR Risk Measurement
Value at Risk (VaR), being simple and intuitive, has become one of the most widely used risk measures internationally, and computing unconditional risk measures from time-series autoregressive (AR) models is common practice in industry. Based on quantile regression theory, this paper proposes an estimation method for AR models, the weighted composite quantile regression (WCQR) estimator. The method makes full use of information from multiple quantiles to improve the efficiency of parameter estimation and assigns different weights to different quantile regressions, making the estimator more efficient; its asymptotic normality is established. Finite-sample simulations show that when the residuals are non-normal, the statistical properties of the WCQR estimator are close to those of maximum likelihood, yet WCQR does not require knowledge of the residual distribution, which makes the proposed estimator the more competitive choice. The method performs well in forecasting the dynamic VaR of asset returns. We apply the proposed theory to nine closed-end funds in China; the empirical analysis finds that the VaR obtained with WCQR is very close to that obtained with nonparametric methods, and that WCQR can be used to compute dynamic VaR and to forecast the VaR of asset returns.
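A rough sketch of a composite quantile fit for an AR(1) model, not the paper's estimator or weighting scheme: equal weights across five quantile levels, Powell's derivative-free optimizer on the piecewise-linear composite objective, and an empirical-quantile residual adjustment for the 5% VaR forecast are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Simulate an AR(1) return series with heavy-tailed (Student-t) innovations.
T, beta_true = 1000, 0.4
eps = rng.standard_t(df=4, size=T) * 0.01
y = np.zeros(T)
for t in range(1, T):
    y[t] = beta_true * y[t - 1] + eps[t]

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])        # quantile levels in the composite fit
weights = np.full(len(taus), 1.0 / len(taus))     # equal weights for simplicity (the paper
                                                  # chooses data-dependent weights)

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u)."""
    return u * (tau - (u < 0))

def wcqr_objective(params, y):
    """Weighted sum of check losses: shared AR slope, one intercept per quantile."""
    b, slope = params[:-1], params[-1]
    resid = y[1:, None] - b[None, :] - slope * y[:-1, None]
    return np.sum(weights * np.mean(check_loss(resid, taus), axis=0))

start = np.concatenate([np.quantile(y[1:], taus), [0.0]])
fit = minimize(wcqr_objective, start, args=(y,), method="Powell")
b_hat, beta_hat = fit.x[:-1], fit.x[-1]

# One-step-ahead 5% VaR forecast: conditional quantile of y_{T+1} given y_T, using the
# empirical 5% quantile of the residuals from the median equation.
resid = y[1:] - b_hat[2] - beta_hat * y[:-1]
var_5 = -(beta_hat * y[-1] + b_hat[2] + np.quantile(resid, 0.05))
print(f"beta_true = {beta_true}, beta_hat = {beta_hat:.3f}")
print(f"one-step 5% VaR forecast = {var_5:.4f}")
```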

18.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two-step estimators.

19.
We review the history of statistical process control research from its origins at Bell Laboratories with Shewhart in 1924 up to the present and integrate it with the history of the larger total quality management movement that emerged from these same statistical process control origins. The original research was very philosophical and very practical and is still implemented today. Our view is that the majority of the enormous research literature after Duncan's 1956 seminal paper on optimal design of control charts has had little practical relevance. The research formulations became more mechanical, less philosophical and less practical. We explore the reasons for this and make suggestions for new research directions. We also propose changes in the supporting industry-university relationships to facilitate a program of more relevant research in statistical process control.

20.
The availability of high frequency financial data has generated a series of estimators based on intra-day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create "feasible statistics" for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two-scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high frequency betas, and semivariance.

