Similar articles
 20 similar articles found (search time: 766 ms)
1.
In this article, the expected total costs of three kinds of quality cost functions for the one-sided sequential screening procedure based on the individual misclassification error are obtained, where the expected total cost is the sum of the expected costs of inspection, rejection, and quality. Computational formulas for the three kinds of expected total costs are derived when k screening variables are allocated to r stages. The optimal allocation combination is determined by the criterion of minimum expected total cost. Finally, an example illustrates the selection of the optimal allocation combination for the sequential screening procedure.

2.
Confidence intervals for the intraclass correlation coefficient (ρ) are used to determine the optimal allocation of experimental material in one-way random effects models. Designs that produce narrow intervals are preferred since they provide greater precision for estimating ρ. Assuming the total cost and the relative cost of the two stages of sampling are fixed, the authors investigate the number of classes and the number of individuals per class required to minimize the expected length of confidence intervals. They obtain results using asymptotic theory and compare them to those obtained using exact calculations. The best design depends on the unknown value of ρ. Minimizing the maximum expected length of confidence intervals guards against worst-case scenarios. A good overall recommendation based on the asymptotic results is to choose a design having classes of size 2 + √(4 + 3r), where r is the relative cost of sampling at the class level compared to the individual level. If r = 0, the overall cost is the sample size and the recommendation reduces to a design having classes of size 4.
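As a minimal sketch, the asymptotic recommendation quoted above can be computed directly; the function name is ours, and the formula is the one stated in the abstract.

```python
import math

def recommended_class_size(r):
    """Asymptotic class-size recommendation from the abstract:
    2 + sqrt(4 + 3r), where r is the cost of sampling one class
    relative to sampling one individual."""
    return 2 + math.sqrt(4 + 3 * r)

# With r = 0 the rule reduces to classes of size 4, matching the abstract.
```

In practice the result would be rounded to an integer class size.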

3.
Acceptance sampling is a quality-assurance tool that provides a rule for the producer and the consumer to accept or reject a lot. This paper develops a more efficient sampling plan, the variables repetitive group sampling plan, based on the total loss to the producer and consumer. To design this model, two constraints are imposed to satisfy the opposing priorities and requirements of the producer and the consumer, using the acceptable quality level (AQL) and limiting quality level (LQL) points on the operating characteristic (OC) curve. The objective function of the model is constructed from the total expected loss. An example illustrates the application of the proposed model. In addition, the effects of the process parameters on the optimal solution and the total expected loss are studied through a sensitivity analysis. Finally, the efficiency of the proposed model is compared with the variables single sampling plan, the variables double sampling plan, and the repetitive group sampling plan of Balamurali and Jun (2006) in terms of average sample number, total expected loss, and difference from the ideal OC curve.

4.
Breiman, Friedman, Olshen, and Stone (1984) use a linear combination of prediction risk and tree size as a criterion in the search for optimal trees. In this paper we use a linear combination of those two components and the variable-observation cost as a criterion (C1) for the same purpose. The paper explicitly represents the relation among nested, pruned subtrees in terms of C1, and the theories in Breiman et al. (1984) concerning the search for optimal trees are generalized.

5.
Various strategies are investigated for selecting one of two medical treatments when patients may be divided into k ≥ 2 categories on the basis of their expected differences in response to the two treatments. The strategies compared are (1) k independent decisions, (2) a single overall decision made on the basis of simple pooling, and (3) the Bayes strategy. The optimal clinical trial size is supplied for each strategy, and conditions are delineated under which each strategy is to be preferred.

6.
This paper proposes a geometric process warranty model. Assume that a combination policy (W, T) is applied after a product is sold, so that a free warranty is offered in [0, W), followed by a pro-rata warranty in [W, T). Assume further that the successive operating times (repair times) of the product form a decreasing (increasing) geometric process. The average cost rates of the product to the manufacturer and to a consumer are derived respectively. For the exponential distribution case, explicit formulas for the average cost rate are obtained, and a finite algorithm for determining an optimal combination policy is suggested.
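The geometric-process assumption can be sketched numerically: in a geometric process with ratio a, the n-th expected duration is E[X₁]/aⁿ⁻¹, so a > 1 gives decreasing operating times and 0 < a < 1 gives increasing repair times. The function below is an illustrative sketch of this definition, not the paper's algorithm.

```python
def expected_durations(first_mean, ratio, n):
    # n-th expected duration of a geometric process: E[X_1] / ratio**(n-1).
    # ratio > 1  -> decreasing sequence (operating times);
    # 0 < ratio < 1 -> increasing sequence (repair times).
    return [first_mean / ratio ** i for i in range(n)]
```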

7.
Applied statistical decision theory has wide application in decision-making fields such as economics, business management, and industrial management. In this work, following the approach of Pratt et al. [Introduction to statistical decision theory. 3rd ed. Cambridge, MA: The MIT Press; 2001], we provide theoretical and practical formulations for calculating the key decision-making indices, the expected value of perfect information and the expected value of sample information, when the unknown state is the first-order autoregressive (AR) time series parameter with a normal prior distribution. A practical procedure is furnished for calculating these indices. We treat finite and infinite state spaces for linear value functions and quadratic opportunity losses. Interestingly, our investigation of the distribution of the mean of the posterior distribution leads to a general form for the corresponding statistic and its distribution, as discussed by Reeves [The distribution of the maximum likelihood estimator of the parameter in the first-order AR series. Biometrika. 1972;59:387–394], Moschopoulos and Canada [The distribution function of a linear combination of chi-squares. Comput Math Appl. 1984;10:383–386], and Roychowdhury and Bhattacharya [On the performance of estimators of parameter in AR model of order one and optimal prediction under asymmetric loss. Model Assist Stat Appl. 2008;3:225–232].

8.
A minimum-cost CUSUM test for an event-rate increase when inter-event times are exponentially distributed is presented. Optimal values of the test decision parameters, h and k, are developed from a renewal reward model of the event cycle by combining a non-linear optimization technique with an exact method for determining exponential average run lengths. Robustness of the test to errors in the event-cycle parameter estimates and to departures from the assumption of exponentially distributed inter-event times is discussed in the context of an injury-monitoring scenario. Robustness to positively serially correlated observations emanating from EAR(1) and EMA(1) processes is also examined.
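A one-sided CUSUM on exponential inter-event times accumulates evidence whenever gaps fall below the reference value k and signals when the statistic crosses h. The sketch below only illustrates the test statistic; the optimal (h, k) values come from the renewal reward optimization described in the abstract, and the parameter values shown are purely illustrative.

```python
def cusum_signal(gaps, k, h):
    """Return the index of the first signalling event, or None.

    gaps : observed inter-event times
    k, h : reference value and decision limit (the parameters the
           paper optimizes; the test values below are made up).
    """
    s = 0.0
    for i, t in enumerate(gaps, start=1):
        s = max(0.0, s + (k - t))  # short gaps (t < k) push s upward
        if s > h:
            return i
    return None
```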

9.
Taguchi (1984, 1987) derived tolerances for subsystems, subcomponents, parts, and materials. However, he assumed that the relationship between a higher-rank and a lower-rank quality characteristic is deterministic. The basic structure of this tolerance design problem is very similar to that of the screening problem. Tang (1987) proposed three cost models and derived an economic design for the screening problem of a “the-bigger-the-better” quality characteristic, in which the optimal specification limit (or tolerance) for a screening variable (or a lower-rank quality characteristic) was obtained by minimizing the expected total cost function. Tang considered that the quality cost is incurred only when the quality characteristic is out of specification, while Taguchi considered that it is incurred whenever the quality characteristic deviates from its nominal value. In this paper, a probabilistic relationship, namely a bivariate normal distribution between the two quality characteristics as in a screening problem, is considered together with Taguchi's quadratic loss function to develop a closed-form solution of the tolerance design for a subsystem.

10.
The poor performance of the Wald method for constructing confidence intervals (CIs) for a binomial proportion has been demonstrated in a vast literature. The related problem of sample size determination needs to be updated, and comparative studies are essential to understanding the performance of alternative methods. In this paper, the sample size is obtained for the Clopper–Pearson, Bayesian (uniform and Jeffreys priors), Wilson, Agresti–Coull, Anscombe, and Wald methods. Two two-step procedures are used: one based on the expected length (EL) of the CI and another on its first-order approximation. In the first step, all solutions that satisfy the optimality criterion are obtained. In the second step, a single solution is proposed according to a new criterion (e.g., highest coverage probability (CP)). In practice a reduction in sample size is expected, so we explore the behavior of the methods admitting losses of 30% and 50%. For all the methods the ELs are inflated, as expected, but the coverage probabilities remain close to the original target (with few exceptions). It is not easy to suggest a method that is optimal throughout the range (0, 1) for p. Depending on whether the goal is to achieve a CP approximately at or above the nominal level, different recommendations are made.
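The expected-length criterion above can be computed exactly by averaging the interval length over the binomial distribution. The sketch below does this for the Wilson interval; the function names and the 95% z-value are our choices, and the paper's actual search over n would wrap a routine like this.

```python
import math
from math import comb

def wilson_interval(x, n, z=1.959964):
    # Wilson score interval for x successes out of n trials.
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def expected_length(n, p, z=1.959964):
    # EL(n, p) = sum over x of P(X = x) * (upper(x) - lower(x)).
    total = 0.0
    for x in range(n + 1):
        lo, hi = wilson_interval(x, n, z)
        total += comb(n, x) * p ** x * (1 - p) ** (n - x) * (hi - lo)
    return total
```

Larger n shrinks the expected length, which is what the sample-size search exploits.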

11.
In a multivariate stratified sample survey with L strata, suppose p characteristics are defined on each unit of the population. To estimate the p unknown population means of the characteristics, a random sample is drawn from the population. In a multivariate stratified sample survey, the optimum allocation for one characteristic may not be optimum for the others, so the problem arises of finding an allocation that is optimum for all characteristics in some sense; a compromise criterion is therefore needed to work out such an allocation. In this paper, the procedure for estimating the p population means is discussed in the presence of nonresponse when the use of a linear cost function is not advisable. A solution procedure based on a lexicographic goal programming problem is suggested, and numerical illustrations are given to show its practical utility.

12.
In this article, an economic design model of the MSE control chart is proposed. The formulated cost function includes the cost incurred in the production process and the loss borne by customers because of shifts in the mean and drifts in variation. Economic bounds, under which the process is shut down if a search indicates the presence of an assignable cause, are also considered. A program written in Matlab 7.0 is used to determine the optimum parameters, namely the sample size n, the sampling interval h, and the width of the control limits k. Finally, an example illustrates the proposed economic design, and a sensitivity analysis is carried out.
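The optimization described amounts to minimizing an hourly cost over (n, h, k). The grid search below uses a stand-in Duncan-type cost function with made-up cost constants, purely to show the structure of such a search; the paper's MSE-chart cost function is different.

```python
from itertools import product
from math import erf, sqrt

def _Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

def hourly_cost(n, h, k, shift=1.0, c_sample=0.5, c_false=50.0, c_oc=100.0):
    # Stand-in economic-design cost: sampling + false alarms + out-of-control time.
    # The cost constants and the cost form itself are hypothetical placeholders.
    alpha = 2 * (1 - _Phi(k))                       # false-alarm rate per sample
    power = 1 - _Phi(k - shift * sqrt(n)) + _Phi(-k - shift * sqrt(n))
    arl_oc = 1 / max(power, 1e-12)                  # samples needed to detect shift
    return c_sample * n / h + c_false * alpha / h + c_oc * arl_oc * h

def optimize():
    # Coarse grid over sample size n, sampling interval h, and limit width k.
    grid = product(range(2, 11), (0.5, 1.0, 2.0, 4.0), (2.0, 2.5, 3.0, 3.5))
    return min(grid, key=lambda d: hourly_cost(*d))
```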

13.
Choquet expected utility maximizers tend to behave in a more “cautious” way than Bayesian agents, i.e., expected utility maximizers. We illustrate this phenomenon in the particular case of betting behavior. Specifically, consider agents who are Choquet expected utility maximizers. Then, if the economy is large, Pareto optimal allocations provide full insurance if and only if the agents share at least one prior, i.e., if the intersection of the cores of the capacities representing their beliefs is nonempty. In the expected utility case, this is true only if they have a common prior. Received: July 2000; revised version: May 2001

14.
This paper examines the design and performance of sequential experiments in which extensive switching is undesirable. Given an objective function to optimize by sampling between Bernoulli populations, two different models are considered. The constraint model restricts the maximum number of switches possible, while the cost model introduces a charge for each switch. Optimal allocation procedures and a new “hyperopic” procedure are discussed and their behavior examined. For the cost model, if the switching costs are viewed as control variables, then the optimal allocation procedures yield the optimal tradeoff between the expected number of switches and the expected value of the objective function.

15.
The purpose of this study was to predict placement and nonplacement outcomes for mildly handicapped three- to five-year-old children given developmental screening test data. Discrete discriminant analysis (Anderson, 1951; Cochran & Hopkins, 1961; Goldstein & Dillon, 1978) was used to classify children into either a placement or a nonplacement group using developmental information retrieved from longitudinal Child Find records (1982–89). These records were located at the Florida Diagnostic and Learning Resource System (FDLRS) in Sarasota, Florida and provided usable data for 602 children. The developmental variables comprised performance on screening test activities from the Comprehensive Identification Process (Zehrbach, 1975): (a) gross motor skills, (b) expressive language skills, and (c) social-emotional skills. These three dichotomously scored developmental variables generated eight mutually exclusive and exhaustive combinations of screening data. Combined with one of three types of cost-of-misclassification functions, each child in a random cross-validation sample of 100 was classified into one of the two outcome groups by minimizing the expected cost of misclassification based on the remaining 502 children. For each cost function designed by the researchers, the classifications from the discrete discriminant analysis procedure were compared with the actual placement outcomes for the 100 children. A logit analysis and a standard discriminant analysis were likewise conducted using the 502 children and compared with the results of the discrete discriminant analysis for selected cost functions.

16.
This article performs a sensitivity analysis of the synthetic T2 chart using a fractional factorial design, which incorporates interaction effects. We are interested in the effects of the input parameters on the optimal cost, the chart's parameters, and the average run lengths. We also identify the input parameters responsible for the increase in cost and the improvement in statistical performance under statistical constraints, and investigate how the input parameters influence the binding effect of those constraints. The sensitivity analysis of the synthetic T2 chart is compared with that of the Hotelling's T2 chart, and the parameters responsible for the cost advantage of the synthetic T2 chart are identified.

17.
Three sampling designs are considered for estimating the sum of k population means by the sum of the corresponding sample means: (a) the optimal design; (b) equal sample sizes from all populations; and (c) sample sizes that give all sample means equal variances. Designs (b) and (c) are equally inefficient and may yield a variance up to k times as large as that of (a). Similar results hold when the cost of sampling is introduced, and they depend on the populations sampled.
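The claim that designs (b) and (c) are equally inefficient is easy to verify numerically: Var(Σ x̄ᵢ) = Σ σᵢ²/nᵢ, both equal allocation and equal-variance allocation give k·Σσᵢ²/n, and the optimal design takes nᵢ proportional to σᵢ. The sketch below uses continuous (non-integer) sample sizes for simplicity; the function names are ours.

```python
def var_of_sum_of_means(sigmas, ns):
    # Var(sum of sample means) = sum_i sigma_i^2 / n_i (independent samples).
    return sum(s * s / m for s, m in zip(sigmas, ns))

def designs(sigmas, n):
    k = len(sigmas)
    opt = [n * s / sum(sigmas) for s in sigmas]                        # (a) n_i ∝ σ_i
    equal = [n / k] * k                                                # (b) equal sizes
    eqvar = [n * s * s / sum(t * t for t in sigmas) for s in sigmas]   # (c) equal variances
    return opt, equal, eqvar
```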

18.
In this article, we investigate a control policy for the choice of sampling interval and control limit by minimizing the expected quality cost. The study is based on an environment in which (i) the stochastic disturbances are assumed to follow an IMA(1, 1) process, (ii) there are process dynamics between the input series and the output series, (iii) a feedback control scheme is imposed, and (iv) the expected quality cost contains off-target cost, adjustment cost, and inspection cost. Modeling and forecasting for (i), (ii), and (iii) are performed with the transfer function plus noise model, and the expected quality cost in (iv) is minimized by a modified pattern search procedure. An example demonstrates the advantage of the pattern search method over the usual 3-sigma control scheme. The penalty for ignoring the process dynamics and for choosing an incorrect value of θ for the IMA(1, 1) disturbance is discussed. The pattern search method also compares favorably with the modified Taguchi method in quality cost for the cases considered.

19.
In this paper, Duncan's cost model combined with Taguchi's quadratic loss function is applied to develop the economic-statistical design of the sum-of-squares exponentially weighted moving average (SS-EWMA) chart. A genetic algorithm is applied to search for the optimal decision variables of the SS-EWMA chart such that the expected cost is minimized. Sensitivity analysis reveals that the optimal sample size and sampling interval decrease, and the optimal smoothing constant and control limit increase, as the mean and/or variance increases. Moreover, the combination of optimal parameter levels in the orthogonal array experiment provides an important guideline for monitoring the process mean and/or variance.

20.
In this article, a multiple three-decision procedure is proposed to classify p (≥2) treatments as better or worse than the best of q (≥2) control treatments in a one-way layout. Critical constants required for the implementation of the proposed procedure are tabulated for some pre-specified values of the probability of no misclassification. The power function of the proposed procedure is defined, and the common sample size necessary to guarantee various pre-specified power levels is tabulated under two optimal allocation schemes. Finally, the implementation of the proposed methodology is demonstrated through numerical examples based on real-life data.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号