Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
Ori Davidov & Chang Yu, Statistics, 2013, 47(2): 163-173
We provide a method for estimating the sample mean of a continuous outcome in a stratified population using a double sampling scheme. The stratified sample mean is a weighted average of stratum-specific means. It is assumed that the fallible and true outcome data are related by a simple linear regression model in each stratum. The optimal stratified double sampling plan, i.e., the double sampling plan that minimizes the cost of sampling for fixed variances, or alternatively minimizes the variance for fixed costs, is found and compared to a standard sampling plan. The design parameters are the total sample size and the number of doubly sampled units in each stratum. We show that the optimal double sampling plan is a function of the between-strata and within-strata cost and variance ratios. The efficiency gains relative to standard sampling plans are considerable under a broad set of conditions.

2.
An important problem in process adjustment using feedback is how often to sample the process and when and by how much to apply an adjustment. Minimum cost feedback schemes based on simple, but practically interesting, models for disturbances and dynamics have been discussed in several particular cases. The more general situation in which there may be measurement and adjustment errors, deterministic process drift, and costs of taking an observation, of making an adjustment, and of being off target, is considered in this article. Assuming all these costs to be known, a numerical method to minimize the overall expected cost is presented. This numerical method provides the optimal sampling interval, action limits, and amount of adjustment; and the resulting average adjustment interval, mean squared deviation from target, and minimum overall expected cost. When the costs of taking an observation, of making an adjustment, and of being off target are not known, the method can be used to choose a particular scheme by judging the advantages and disadvantages of alternative options considering the mean squared deviation they produce, the frequency with which they require observations to be made, and the resulting overall length of time between adjustments. Computer codes that perform the required computations are provided in the appendices and applied to find optimal adjustment schemes in three real examples of application.
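To make the cost trade-off concrete, the following Python sketch simulates a simple bounded-adjustment (dead-band) feedback scheme for an IMA(1,1) disturbance and estimates the average cost per period by Monte Carlo. This is a minimal illustration, not the article's numerical method: the dead-band rule, the parameter theta, the action limit L, and the unit costs c_obs, c_adj, and c_off are all assumptions chosen for the example, and the sampling interval is fixed at one period.

import numpy as np

rng = np.random.default_rng(0)

def simulate_scheme(theta=0.5, sigma=1.0, L=2.0,
                    c_obs=0.2, c_adj=1.0, c_off=0.1, T=100_000):
    """Simulate an IMA(1,1) disturbance z_t = z_{t-1} + a_t - theta*a_{t-1}
    under a dead-band rule: fully compensate the observed deviation whenever
    |deviation| > L, otherwise leave the process alone."""
    a_prev, z, adjustment = 0.0, 0.0, 0.0
    off_target_sq, n_adj = 0.0, 0
    for _ in range(T):
        a = rng.normal(0.0, sigma)
        z += a - theta * a_prev          # disturbance evolution
        a_prev = a
        dev = z + adjustment             # observed deviation from target
        off_target_sq += dev ** 2
        if abs(dev) > L:                 # act only outside the dead band
            adjustment -= dev
            n_adj += 1
    # average cost per period: observation + adjustment + off-target components
    return c_obs + c_adj * n_adj / T + c_off * off_target_sq / T

for L in (0.5, 1.0, 2.0, 4.0):
    print(f"L={L}: cost per period = {simulate_scheme(L=L):.3f}")

Sweeping L traces the trade-off the article optimizes: tight limits adjust often (high adjustment cost, small deviations), while loose limits adjust rarely but let the process wander.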

3.
This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.

4.
In a discrete-part manufacturing process, the noise is often described by an IMA(1,1) process, and a pure unit-delay transfer function is used as the feedback controller to adjust it. The optimal controller for this process is the well-known minimum mean square error (MMSE) controller. The starting level of the IMA(1,1) model is conventionally assumed to be on target. Since this assumption is impractical, we incorporate a starting offset. Because the starting offset is not observable, the MMSE controller does not exist. An alternative is the minimum asymptotic mean square error controller, which minimizes the long-run mean square error.

Another concern in this article is the instability of the controller, which may produce high adjustment costs and/or may exceed the physical bounds of the process adjustment. These practical barriers can prevent the controller from adjusting the process properly. To avoid this dilemma, a resetting design is proposed: the process is adjusted according to the controller while it remains within the reset limit, and is reset otherwise.

The total cost for the manufacturing process comprises the off-target cost, the adjustment cost, and the reset cost. Proper values for the reset limit are selected to minimize the average cost per reset interval (ACR) under various process and cost parameters. A time-nonhomogeneous Markov chain approach is used for calculating the ACR. The effect of adopting the starting offset is also studied.

5.
In searching for the optimal inventory control policy, the objective is to minimize the related expected total costs, of which the shortage cost is an important element. Because the indirect cost of the loss of goodwill resulting from a shortage is difficult to calculate, practitioners and researchers often simply assume a fixed penalty cost on the inventory shortage, or instead assign a specific customer service level. An appropriate tool for measuring the shortage cost can help a business control total costs and improve productivity more effectively. This paper proposes probabilistic measurements of the shortage cost based on the mathematical relationship between the cost and the shortage amount. The derived closed-form estimates of the expected shortage cost can then be applied to support the determination of the optimal inventory control policy.
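As one concrete instance of such a closed-form measurement: if demand D is normal with mean mu and standard deviation sigma, the expected shortage at an order-up-to level Q is the standard normal loss function, E[(D - Q)^+] = sigma * (phi(z) - z * (1 - Phi(z))) with z = (Q - mu)/sigma. The sketch below evaluates this quantity; the normal-demand assumption, the per-unit penalty, and all numbers are illustrative rather than taken from the paper.

from scipy.stats import norm

def expected_shortage(mu, sigma, Q):
    """E[(D - Q)^+] for D ~ Normal(mu, sigma): the standard normal loss function."""
    z = (Q - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

mu, sigma = 100.0, 20.0       # demand distribution (illustrative)
unit_penalty = 5.0            # hypothetical per-unit shortage cost
for Q in (100, 110, 120, 130):
    es = expected_shortage(mu, sigma, Q)
    print(f"Q={Q}: E[shortage]={es:.2f}, E[shortage cost]={unit_penalty * es:.2f}")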

6.
The main purposes of this paper are to derive Bayesian acceptance sampling plans regarding the number of defects per unit of product, and to illustrate how to apply the methodology to the paper pulp industry. The sampling plans are obtained following an economic criterion: minimize the expected total cost of quality. It is assumed that the number of defects per unit of product follows a Poisson distribution with process average λ, whose prior information is described either by a gamma or by a non-informative distribution. The expected total cost of quality is composed of three independent components: inspection, acceptance, and rejection. Both quadratic and step-loss functions are used to quantify the cost incurred by accepting a lot containing units with defects. Combining the prior information on λ with the loss functions, four different sampling plans are obtained. When the quadratic loss function is used, an analytical relation between the optimum settings of the sample size and the acceptance number is derived. The robustness analysis indicates that the sampling plans obtained are robust with respect to the prior distribution of the process average as well as to misspecification of its mean and variance.
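A minimal numerical sketch of the economic criterion: with defects per unit Poisson(λ) and a gamma(a, b) prior on λ, the total number of defects in a sample of n units is marginally negative binomial, so the expected cost of a plan (n, c) can be evaluated directly. The acceptance loss below is taken linear in the posterior mean of λ, a deliberate simplification of the paper's quadratic and step losses, and all cost constants are hypothetical.

import numpy as np
from scipy.stats import nbinom

a, b = 2.0, 4.0                      # gamma prior (shape, rate) on lambda - illustrative
c_insp, c_rej, k = 0.5, 10.0, 40.0   # hypothetical inspection, rejection, acceptance-loss costs

def expected_total_cost(n, c):
    """E[cost] of plan (n, c): defects X in n units are marginally NB(a, b/(b+n));
    accept when X <= c at a loss linear in E[lambda | X] (a simplification)."""
    p = b / (b + n)
    xs = np.arange(0, c + 1)
    pmf = nbinom.pmf(xs, a, p)
    post_mean = (a + xs) / (b + n)          # E[lambda | X = x], by conjugacy
    cost_accept = k * float(np.sum(post_mean * pmf))
    return c_insp * n + cost_accept + c_rej * (1.0 - float(pmf.sum()))

best = min(((n, c) for n in range(1, 31) for c in range(0, 11)),
           key=lambda plan: expected_total_cost(*plan))
print("near-optimal (n, c):", best, "expected cost:", round(expected_total_cost(*best), 3))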

7.
In many completely randomized design experiments, levels of subsampling may be performed on each experimental unit. In such cases the expected mean square error, E(MSE), for testing among treatment groups is composed of variance components analogous to those associated with the primary sampling unit in nested sampling. Marcuse (1949) gives a procedure to minimize the cost of obtaining the samples when a desired degree of precision in the E(MSE) is fixed. However, her method gives no consideration to the resulting power of the test for differences among the treatment groups. Our method stipulates that the power, rather than the precision, is fixed at a critical level, and the total cost is minimized subject to this constraint.

8.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through analysis of the cost structure, average test cost is introduced as a new optimality criterion. According to the new criterion, optimal tests are defined over the design parameters, including the boundaries, success discriminant value, stage sample vector, stage size, and maximum sample size. Since the computation time for finding optimal designs by exhaustive search is intolerably long, a group sequential sample-space sorting method and accompanying procedures are developed to find near-optimal designs. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article can reduce average test costs by around 20%.
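To make the average-test-cost criterion concrete, here is a sketch for the simplest truncated case, a two-stage binomial design: the expected cost is the per-item cost times the expected number of items inspected, since stopping at stage 1 saves the second group. The stage sizes, boundaries, and success-probability grid are illustrative, not the article's designs.

from scipy.stats import binom

def expected_cost_two_stage(n1, n2, acc1, rej1, p, c_item=1.0):
    """Expected inspection cost of a two-stage design: stop after stage 1 if
    the success count x1 <= acc1 (accept) or x1 >= rej1 (reject); otherwise
    inspect a second group of n2 items."""
    p_continue = sum(binom.pmf(x, n1, p) for x in range(acc1 + 1, rej1))
    return c_item * (n1 + p_continue * n2)

for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p={p:.2f}: E[cost] = {expected_cost_two_stage(20, 30, 1, 5, p):.2f}")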

9.
In this article, the expected total costs of three kinds of quality cost functions for the one-sided sequential screening procedure based on the individual misclassification error are obtained, where the expected total cost is the sum of the expected cost of inspection, the expected cost of rejection, and the expected cost of quality. The computational formulas for the three kinds of expected total costs are derived when k screening variables are allocated into r stages. The optimal allocation combination is determined based on the criterion of minimum expected total cost. Finally, we give an example to illustrate the selection of the optimal allocation combination for the sequential screening procedure.

10.
Previously, we developed a modelling framework which classifies individuals with respect to their length of stay (LOS) in the transient states of a continuous-time Markov model with a single absorbing state; phase-type models are used for each class of the Markov model. We here add costs and obtain results for moments of total costs in (0, t] for an individual, for a cohort arriving at time zero, and when arrivals are Poisson. Based on stroke patient data from the Belfast City Hospital, we use the overall modelling framework to obtain results for total cost in a given time interval.
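A small sketch of the underlying cost calculation for a continuous-time Markov model with one absorbing state: if T is the generator restricted to the transient states and c a vector of cost rates, the expected total cost to absorption starting in state i is the i-th entry of (-T)^{-1} c. The three-state chain and cost rates below are invented for illustration; they are not the Belfast stroke data.

import numpy as np

# Sub-generator over transient states (absorption rates are the deficit in each row).
# Hypothetical 3-state LOS model: acute -> rehab -> long-stay, each with discharge.
T = np.array([[-1.0,  0.6,  0.0],
              [ 0.0, -0.5,  0.2],
              [ 0.0,  0.0, -0.1]])
cost_rate = np.array([500.0, 200.0, 100.0])    # cost per unit time in each state

expected_cost = np.linalg.solve(-T, cost_rate)  # (-T)^{-1} c, state by state
expected_time = np.linalg.solve(-T, np.ones(3)) # expected LOS from each state
for i, (c, t) in enumerate(zip(expected_cost, expected_time)):
    print(f"start state {i}: E[LOS]={t:.2f}, E[total cost]={c:.2f}")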

11.
Summary.  Road safety has recently become a major concern in most modern societies. The identification of sites that are more dangerous than others (black spots) can help in better scheduling road safety policies. This paper proposes a methodology for ranking sites according to their level of hazard. The model is innovative in at least two respects. Firstly, it makes use of all relevant information per accident location, including the total number of accidents and the number of fatalities, as well as the number of slight and serious injuries. Secondly, the model includes the use of a cost function to rank the sites with respect to their total expected cost to society. Bayesian estimation for the model via a Markov chain Monte Carlo approach is proposed. Accident data from 519 intersections in Leuven (Belgium) are used to illustrate the methodology proposed. Furthermore, different cost functions are used to show the effect of the proposed method on the use of different costs per type of injury.
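As a toy version of the ranking idea (not the paper's full MCMC model), the sketch below fits a conjugate gamma-Poisson model per injury type at each site and ranks sites by posterior expected cost, combining fatal, serious, and slight casualty counts through a hypothetical cost function.

import numpy as np

# Hypothetical data: counts of (fatal, serious, slight) casualties at 6 sites.
counts = np.array([[1, 3, 10], [0, 1, 25], [2, 5, 4],
                   [0, 0, 40], [1, 1, 6],  [3, 2, 2]])
exposure = np.ones(len(counts))                      # years of observation per site
cost = np.array([1_000_000.0, 100_000.0, 5_000.0])   # cost per casualty type (illustrative)
a0, b0 = 0.5, 1.0                                    # gamma prior per casualty type

# Conjugate update: rate | data ~ gamma(a0 + count, b0 + exposure),
# so the posterior mean rate is (a0 + count) / (b0 + exposure).
post_mean_rate = (a0 + counts) / (b0 + exposure[:, None])
expected_cost = post_mean_rate @ cost                # expected societal cost per site-year

ranking = np.argsort(-expected_cost)
for rank, site in enumerate(ranking, 1):
    print(f"rank {rank}: site {site}, E[cost/yr] = {expected_cost[site]:,.0f}")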

12.
The estimation of the incremental cost–effectiveness ratio (ICER) has received increasing attention recently. The ICER is expressed as the ratio of the change in costs of a therapeutic intervention to the change in the effects of the intervention. Despite the intuitive interpretation of the ICER as an additional cost per additional benefit unit, it is a challenge to estimate the distribution of a ratio of two stochastically dependent distributions. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost–effectiveness ratio (CER), a zero intercept is assumed in the bivariate normal regression. For equal sample sizes, the Iman–Conover algorithm is applied to construct the desired variance–covariance matrix of two random bivariate samples, and the estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. The bootstrapping method with the Iman–Conover algorithm is employed for unequal sample sizes. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
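For readers unfamiliar with the quantity being estimated: ICER = (E[cost_1] - E[cost_0]) / (E[effect_1] - E[effect_0]). A plain nonparametric bootstrap of this ratio, the baseline against which regression-type estimators are usually compared, looks like the sketch below; the data are simulated, and the sketch does not implement the paper's Iman–Conover construction.

import numpy as np

rng = np.random.default_rng(2)

# Simulated arm-level (cost, effect) data for control (0) and treatment (1).
n0, n1 = 200, 250
cost0 = rng.normal(10_000, 2_000, n0); eff0 = rng.normal(2.0, 0.5, n0)
cost1 = rng.normal(13_000, 2_500, n1); eff1 = rng.normal(2.6, 0.5, n1)

def icer(c0, e0, c1, e1):
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

boot = np.empty(5000)
for b in range(boot.size):
    i0 = rng.integers(0, n0, n0)   # resample each arm separately
    i1 = rng.integers(0, n1, n1)
    boot[b] = icer(cost0[i0], eff0[i0], cost1[i1], eff1[i1])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICER point estimate: {icer(cost0, eff0, cost1, eff1):,.0f} per effect unit")
print(f"95% percentile bootstrap CI: ({lo:,.0f}, {hi:,.0f})")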

13.
The University of California, Los Angeles (UCLA) and University of Toronto (U of T) describe their respective library-driven programs to reduce the cost of course materials for students—the Affordable Course Materials Initiative (ACMI) at UCLA and the Zero-to-Low-Cost-Course Program (ZTLCC) at U of T. With the same goal of reducing costs by leveraging existing library resources, each library approaches that cost reduction differently. UCLA uses financial incentives to help drive faculty participation in reducing costs, whereas U of T works to reduce overlap in licensing costs while encouraging use of these already-paid-for resources. The following provides an overview of each program along with guidelines for implementing a similar program.

14.
Stochastic Models (《随机性模型》), 2013, 29(1): 93-107
We study the optimal control of a production process subject to a deterministic drift and to random shocks. The process mean is observable at discrete points of time, after producing a batch, and at each such point a decision is made whether to reset the process mean to some initial value or to continue production. The objective is to find the initial setting of the process mean and the resetting time that minimize the expected average cost per unit time. It is shown that the optimal control policy is of a control-limit type. An algorithm for finding the optimal control parameters is presented.
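In the deterministic-drift limit (ignoring the random shocks), the structure of the problem can be sketched with renewal-reward arithmetic: each cycle starts the mean at an offset m0 from target, lets it drift at rate delta for tau time units under a quadratic off-target loss, and then pays a reset cost R. The loss form, drift rate, and costs below are assumptions for illustration, not the paper's model.

import numpy as np
from scipy.optimize import minimize

delta, k, R = 0.1, 1.0, 5.0   # drift rate, quadratic loss weight, reset cost (illustrative)

def avg_cost(params):
    """Average cost per unit time over one reset cycle of length tau when the
    mean starts at offset m0 and drifts linearly: k * integral of (m0 + delta*t)^2
    over (0, tau), plus the reset cost R, divided by tau."""
    m0, tau = params
    if tau <= 0:
        return np.inf
    integral = m0**2 * tau + m0 * delta * tau**2 + delta**2 * tau**3 / 3.0
    return (k * integral + R) / tau

res = minimize(avg_cost, x0=[-0.5, 5.0], method="Nelder-Mead")
m0, tau = res.x
print(f"optimal initial offset m0 = {m0:.3f} (theory: -delta*tau/2 = {-delta * tau / 2:.3f})")
print(f"optimal reset time tau = {tau:.3f}, average cost = {res.fun:.4f}")

The printout checks the sketch against the classical result that the best initial setting undershoots the target by half the drift accumulated over a cycle.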

15.
In this paper, the expected total costs (ETCs) of three kinds of quality cost functions for the two-sided sequential screening procedure (SQSP) based on the individual misclassification error are obtained, where the ETC is the sum of the expected cost of inspection, the expected cost of rejection, and the expected cost of quality. The general formulas for all the desired probabilities and the three ETCs when k screening variables are allocated into r stages are derived. The optimal allocation combination for each ETC is determined based on the criterion of minimum ETC. Finally, we give two examples to illustrate the selection of the optimal allocation combination for the SQSP.

16.
In this paper we suggest cost indices that measure absolute changes in total and marginal production costs between two periods when factor prices change. The class of cost functions that generate equal total and marginal cost indices is characterized. A numerical illustration of the indices is provided using Indian cotton textile industry data.

17.
Identifying cost-effective decisions that take into account both medical costs and health outcomes is an important issue under very limited resources. Analyzing medical costs is challenging owing to the skewness of cost distributions, heterogeneity across samples, and censoring. When censoring is due to administrative reasons, the total cost can be related to the survival time, since longer survivors are more likely to be censored and their total costs censored as well. This paper uses the general linear model for longitudinal data to model repeated medical cost data, and a weighted estimating equation is used to obtain more accurate parameter estimates. Furthermore, the asymptotic properties of the proposed model are discussed. Simulations are used to evaluate the performance of the estimators under various scenarios. Finally, the proposed model is applied to data extracted from the National Health Insurance database for patients with colorectal cancer.

18.
Wang Jinming, Statistical Research (《统计研究》), 2012, 29(4): 44-50
This paper presents an econometric study of how aggregate demand, monetary factors, and production costs affect inflation in China. By measuring the dynamic changes in the Phillips curve, we find that the effect of the output gap on China's inflation has been declining steadily, indicating that the pull effect of aggregate demand on inflation is weakening. We select the monetary factors and producer purchase prices that have an important influence on inflation, compute a composite index for each using the NBER method, and introduce these composite indices, together with an indicator of wage costs, into an extended Phillips curve. The model results show that monetary factors and production costs exert a significant upward push on prices. We therefore conclude that, against the background of China's current tight monetary policy, rising producer purchase prices, and especially rising wage costs, are the decisive cause of persistently high inflation.
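An extended Phillips curve of the kind described can be sketched as a linear regression of inflation on the output gap, a monetary composite index, an input-price composite index, and wage costs. The synthetic data and coefficients below are placeholders only; the paper's composite indices are built with the NBER method, which is not reproduced here.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic quarterly series standing in for the paper's data (illustrative only).
T = 80
gap     = rng.normal(0, 1, T)      # output gap
money   = rng.normal(0, 1, T)      # monetary composite index
input_p = rng.normal(0, 1, T)      # purchase-price composite index
wage    = rng.normal(0, 1, T)      # wage cost indicator
pi = 0.1 * gap + 0.3 * money + 0.4 * input_p + 0.5 * wage + rng.normal(0, 0.5, T)

# Extended Phillips curve: pi_t = b0 + b1*gap + b2*money + b3*input_p + b4*wage + e_t
X = np.column_stack([np.ones(T), gap, money, input_p, wage])
beta, *_ = np.linalg.lstsq(X, pi, rcond=None)
for name, b in zip(["const", "gap", "money", "input price", "wage"], beta):
    print(f"{name:>12}: {b:+.3f}")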

19.
This paper presents an economic life test acceptance sampling plan using item-censored data in a Bayesian situation. It is assumed that failures in a life test are replaced immediately by new ones. A prior distribution is assigned to the mean lifetime θ for the calculation of the expected total cost. Then the optimum plan is chosen to be the one which minimizes the expected total cost. A direct search method and a dual programming method are introduced, with emphasis on the latter. A numerical example is presented to illustrate the procedure. A sensitivity study is included on the effect of a wrong choice of the prior distribution.

20.
Robert Read, Lyn Thomas & Alan Washburn, Statistics and Computing, 2000, 10(3): 245-252
Consider the random sampling of a discrete population. The observations, as they are collected one by one, are enhanced in that the probability mass associated with each observation is also observed. The goal is to estimate the population mean. Without this extra information about probability mass, the best general-purpose estimator is the arithmetic average of the observations, XBAR. The issue is whether or not the extra information can be used to improve on XBAR. This paper examines the issues and offers four new estimators, each with its own strengths and liabilities. Some comparative performances of the four with XBAR are made.

The motivating application is a Monte Carlo simulation that proceeds in two stages. The first stage independently samples n characteristics to obtain a configuration of some kind, together with a configuration probability p obtained, if desired, as a product of n individual probabilities. A relatively expensive calculation then determines an output X as a function of the configuration. A random sample of X could simply be averaged to estimate the mean output, but there are possibly more efficient estimators on account of the known configuration probabilities.
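The sketch below illustrates the setting, not the paper's four estimators: it compares the plain average XBAR with one natural alternative that exploits the observed probability masses, a plug-in estimator restricted to the distinct values seen so far and renormalized by their total mass. Whether this beats XBAR depends on the population, which is exactly why several estimators are worth studying.

import numpy as np

rng = np.random.default_rng(4)

values = np.array([1.0, 2.0, 5.0, 20.0, 100.0])
probs  = np.array([0.4, 0.3, 0.2, 0.08, 0.02])
true_mean = float(values @ probs)

def plug_in(sample_idx):
    """Renormalized plug-in: sum of x * p(x) over the distinct values seen,
    divided by their total observed probability mass."""
    seen = np.unique(sample_idx)
    return float(values[seen] @ probs[seen] / probs[seen].sum())

n, reps = 25, 20_000
err_xbar = err_plug = 0.0
for _ in range(reps):
    idx = rng.choice(len(values), size=n, p=probs)
    err_xbar += (values[idx].mean() - true_mean) ** 2
    err_plug += (plug_in(idx) - true_mean) ** 2

print(f"true mean = {true_mean:.3f}")
print(f"MSE(XBAR)    = {err_xbar / reps:.4f}")
print(f"MSE(plug-in) = {err_plug / reps:.4f}")

With the rare high value at mass 0.02, the plug-in estimator is badly biased whenever that value goes unseen, which shows why naive use of the known masses can lose to XBAR.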
