Similar Documents
20 similar documents retrieved.
1.
In stratified sample surveys, the problem of determining the optimum allocation is well known due to articles published in 1923 by Tschuprow and in 1934 by Neyman. These articles give the optimum sample sizes to be selected from each stratum so that the sampling variance of the estimator is minimum for a fixed total cost of the survey, or the cost is minimum for a fixed precision of the estimator. If in a sample survey more than one characteristic is to be measured on each selected unit of the sample, that is, the survey is a multi-response survey, then the problem of determining the optimum sample sizes for the various strata becomes more complex because no single optimality criterion suits all the characteristics. Many authors have discussed compromise criteria that provide a compromise allocation, one that is optimum for all characteristics at least in some sense. Almost all of these authors worked out the compromise allocation by minimizing some function of the sampling variances of the estimators under a single cost constraint. A serious objection to this approach is that the variances are not unit free, so minimizing any function of variances may not be an appropriate objective for obtaining a compromise allocation. This fact suggests the use of coefficients of variation instead of variances. In the present article, the problem of compromise allocation is formulated as a multi-objective non-linear programming problem. By linearizing the non-linear objective functions at their individual optima, the problem is approximated to an integer linear programming problem. The goal programming technique is then used to obtain a solution to the approximated problem.
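As a point of reference, the sketch below (Python; all stratum sizes and standard deviations are assumed, illustrative figures) computes the classical Tschuprow–Neyman allocation n_h ∝ N_h·S_h separately for two characteristics; the two optima disagree stratum by stratum, which is precisely what motivates the compromise allocation discussed above.

```python
import numpy as np

# Illustrative stratum sizes and per-characteristic standard deviations (assumed values).
N = np.array([400, 300, 300])            # stratum sizes N_h
S = np.array([[5.0, 9.0, 2.0],           # S_h for characteristic 1
              [1.0, 4.0, 8.0]])          # S_h for characteristic 2
n_total = 100                            # total sample size fixed by the budget

# Tschuprow-Neyman allocation for each characteristic: n_h proportional to N_h * S_h.
for j, S_j in enumerate(S, start=1):
    weights = N * S_j
    n_h = n_total * weights / weights.sum()
    print(f"characteristic {j}: optimum allocation =", np.round(n_h, 1))
# The two optima differ stratum by stratum, which is what forces a compromise allocation.
```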

2.
In stratified sampling, the cost function is usually taken to be a linear function of the sample sizes n_h. In many practical situations, a linear cost function does not adequately approximate the actual cost incurred. For example, when the cost of travelling between the units selected in the sample within a stratum is significant, a cost function that is quadratic in √n_h gives a closer approximation to the actual cost than a linear one. In this paper, the problem is formulated as a multi-objective nonlinear integer programming problem with quadratic cost under three different situations, i.e. complete, partial or null information about the population. A numerical example is also presented to illustrate the computational details.
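A minimal sketch of the kind of problem described, assuming a quadratic-in-√n_h cost of the form c0 + Σ(c_h·n_h + t_h·√n_h): the sampling variance of the stratified mean is minimized under a budget constraint via a continuous relaxation (scipy's SLSQP) and then rounded to integers. All cost coefficients, stratum figures and the budget are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative (assumed) inputs: stratum weights, standard deviations, and cost coefficients.
W  = np.array([0.4, 0.35, 0.25])   # stratum weights W_h = N_h / N
S  = np.array([6.0, 3.0, 8.0])     # stratum standard deviations S_h
c  = np.array([2.0, 3.0, 1.5])     # per-unit measurement cost c_h
t  = np.array([4.0, 6.0, 5.0])     # travel-cost coefficient on sqrt(n_h)
c0, budget = 50.0, 600.0           # overhead and total budget

def variance(n):                   # sampling variance of the stratified mean (no fpc)
    return np.sum(W**2 * S**2 / n)

def cost(n):                       # cost quadratic in sqrt(n_h): c0 + sum(c_h*n_h + t_h*sqrt(n_h))
    return c0 + np.sum(c * n + t * np.sqrt(n))

res = minimize(variance, x0=np.full(3, 20.0),
               bounds=[(2, None)] * 3,
               constraints=[{"type": "ineq", "fun": lambda n: budget - cost(n)}],
               method="SLSQP")
n_opt = np.round(res.x).astype(int)   # continuous relaxation, rounded to integers
print("allocation:", n_opt, "cost:", round(cost(n_opt), 1), "variance:", round(variance(n_opt), 4))
```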

3.
This article deals with the uncertainties in a multivariate stratified sampling problem. The uncertain parameters of the problem, such as stratum standard deviations, measurement costs, travel costs and the total budget of the survey, are considered as parabolic fuzzy numbers, and the problem is formulated as a fuzzy multi-objective nonlinear programming problem with a quadratic cost function. Using the α-cut, the parabolic fuzzy numbers are defuzzified, and the compromise allocations of the problem are then obtained by fuzzy programming for a prescribed value of α. To demonstrate the utility of the proposed formulation, a numerical example is solved with the help of the LINGO software [LINGO User's Guide, Lindo Systems Inc., 1415 North Dayton Street, Chicago, Illinois 60622, USA, 2013], and the derived compromise optimum allocation is compared with the deterministic and proportional allocations.
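For illustration only, the sketch below defuzzifies a fuzzy cost by an α-cut, assuming a symmetric parabolic membership function μ(x) = max(0, 1 − ((x − m)/s)²); the article's exact definition of a parabolic fuzzy number and its defuzzification may differ.

```python
import numpy as np

def alpha_cut_parabolic(m, s, alpha):
    """Alpha-cut of a fuzzy number with parabolic membership
    mu(x) = max(0, 1 - ((x - m) / s)**2); returns the closed interval [L, U]."""
    half_width = s * np.sqrt(1.0 - alpha)
    return m - half_width, m + half_width

# Illustrative (assumed) fuzzy unit travel cost with centre 12 and spread 3, at alpha = 0.6.
L, U = alpha_cut_parabolic(m=12.0, s=3.0, alpha=0.6)
print(f"alpha-cut interval: [{L:.3f}, {U:.3f}]")
# The interval endpoints would then enter the crisp multi-objective programme
# in place of the fuzzy cost, as done for each uncertain parameter.
```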

4.
In multivariate cases, the minimization of the sampling variances is usually taken as the objective under a cost constraint. Since the variances are not unit free, it is more logical to consider the minimization of the squared coefficients of variation as the objective. In this paper, the problem of optimum compromise allocation in multivariate stratified sampling in the case of non-response is formulated as a multi-objective all-integer nonlinear programming problem. A solution procedure using four different approaches, namely the value function, goal programming, ε-constraint and distance-based methods, is considered to obtain the compromise allocation under non-response. A numerical example is also presented to illustrate the computational details.

5.
A fuzzy multiple-allocation p-hub median problem is proposed in which the transportation costs are defined as fuzzy variables, and the objective is to minimize the total transportation cost at a given credibility level. For trapezoidal and normal transportation costs, the problem is equivalent to a deterministic mixed-integer linear programming problem. In the empirical analysis, panel data from the coal industry of Liaoning Province are used to compute the number and locations of coal transportation hubs at different credibility levels, and the problem is then solved by traditional optimization methods such as branch and bound. The computations show that the model and solution method can be applied to the hub location problem for coal transportation in Liaoning Province.

6.
The case of nonresponse in multivariate stratified sample surveys was first treated by Hansen and Hurwitz in 1946, with the sampling variances and costs considered to be deterministic. In real-life situations, however, sampling variances and costs are often random (stochastic) and have probability distributions. In this article, we formulate multivariate stratified sampling in the presence of nonresponse with random sampling variances and costs as a multiobjective stochastic programming problem. Here, the sampling variances and costs are treated as random, and the problem is converted into a deterministic nonlinear programming problem (NLPP) by using chance constraints and a modified E-model. A solution procedure using three different approaches is adopted, viz. goal programming, fuzzy programming, and the D1 distance method, to obtain the compromise allocation for the formulated problem. An empirical study is also provided to illustrate the computational details.
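The chance-constraint step can be illustrated as follows, assuming the random per-unit costs are independent and normally distributed (an assumption made here for the sketch, not necessarily the article's): P(Σ c_h·n_h ≤ C0) ≥ p is replaced by the deterministic condition E[cost] + z_p·sd(cost) ≤ C0.

```python
import numpy as np
from scipy.stats import norm

def deterministic_cost_constraint(n, mu_c, sigma_c, C0, p=0.95):
    """Deterministic equivalent of the chance constraint P(sum(c_h * n_h) <= C0) >= p,
    assuming independent normal per-unit costs c_h ~ N(mu_c[h], sigma_c[h]**2).
    Returns the slack C0 - (expected cost + z_p * sd of cost); feasible if >= 0."""
    mean_cost = np.dot(mu_c, n)
    sd_cost = np.sqrt(np.dot(sigma_c**2, n**2))
    return C0 - (mean_cost + norm.ppf(p) * sd_cost)

# Illustrative (assumed) figures for three strata.
n = np.array([30, 25, 20])
slack = deterministic_cost_constraint(n, mu_c=np.array([2.0, 3.0, 2.5]),
                                      sigma_c=np.array([0.3, 0.5, 0.4]),
                                      C0=220.0, p=0.95)
print("feasible at 95% level" if slack >= 0 else "infeasible", "| slack =", round(slack, 2))
```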

7.
In this paper, an alternative method for the comparison of two diagnostic systems based on receiver operating characteristic (ROC) curves is presented. ROC curve analysis is often used as a statistical tool for the evaluation of diagnostic systems. In general, however, the comparison of ROC curves is not straightforward, in particular when they cross each other. A similar difficulty is also observed in the multi-objective optimization field, where sets of solutions defining fronts must be compared in a multi-dimensional space. Thus, the proposed methodology is based on a procedure used to compare the performance of distinct multi-objective optimization algorithms. In general, methods based on the area under the ROC curves are not sensitive to the existence of crossing points between the curves. The new approach can deal with this situation and also allows the comparison of partial portions of ROC curves according to particular values of sensitivity and specificity of practical interest. Simulation results are presented. For illustration purposes, considering real data from newborns with very low birthweight, the new method was applied to identify the better index for evaluating the risk of death.
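The abstract contrasts whole-curve area comparisons with comparisons over portions of the ROC curve of practical interest. The sketch below shows only the simpler partial-AUC idea (restricting the comparison to a false-positive-rate range); it is not the paper's front-based procedure, and the data are simulated purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_max=0.2):
    """Area under the ROC curve restricted to false-positive rates <= fpr_max
    (i.e. specificity >= 1 - fpr_max), a region of practical interest."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(0.0, fpr_max, 200)           # fine FPR grid up to the cut-off
    vals = np.interp(grid, fpr, tpr)                # interpolated sensitivity on the grid
    return np.sum(np.diff(grid) * (vals[1:] + vals[:-1]) / 2.0)  # trapezoid rule

# Illustrative simulated data: two candidate indices scored on the same subjects.
rng = np.random.default_rng(0)
y = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]
index_a = np.r_[rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)]
index_b = np.r_[rng.normal(0.0, 1.0, 200), rng.normal(1.2, 2.0, 200)]  # its ROC can cross index_a's
print("partial AUC (FPR <= 0.2):",
      round(partial_auc(y, index_a), 3), "vs", round(partial_auc(y, index_b), 3))
```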

8.
A vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But for many processes the quality variable is nonnormal, and either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for the design of plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG distribution based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three different likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) have been considered in the modeling process. For all of these likely scenarios, mathematical models giving the cost of using a variables acceptance sampling plan are developed. The cost models are optimized in selecting the optimal sampling plan parameters, such as the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used in depicting the consequences of: 1) assuming that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. The information obtained on the parameter sensitivities can be used by model users to allocate resources prudently for the estimation of the input parameters.

9.
In this paper, an optimization model is developed for the economic design of a rectifying inspection sampling plan in the presence of two markets. A product with a normally distributed quality characteristic with unknown mean and variance is produced in the process. The quality characteristic has a lower specification limit. The aim of this paper is to maximize the profit, which incorporates the Taguchi loss function, under the constraints of simultaneously satisfying the producer's and consumer's risks in the two different markets. The giveaway cost per unit of sold excess material is considered in the proposed model. A case study is presented to illustrate the application of the proposed methodology. In addition, sensitivity analysis is performed to study the effect of the model parameters on the expected profit and the optimal solution. The optimal process adjustment problem and the acceptance sampling plan are combined in the economic optimization model. Also, the process mean and standard deviation are assumed to be unknown, and their impact is analyzed. Finally, inspection error is considered, and its impact is investigated and analyzed.

10.
Multivariable optimization in a large-data environment concerns how to reliably obtain a set of optimization results from a mass of data that influence the objective function in a complex way. This is an important issue in statistical computation because the complexity of the relationships between the parameters leads to repeated statistical analyses and a significant amount of wasted data. A statistical multivariable optimization method using an improved orthogonal algorithm for large data is proposed. Considering the multi-parameter optimization problem in a large-data environment, a multi-parameter optimization model for the improved orthogonal algorithm is established. Furthermore, an extensive simulation study of the temperature field distribution of an anti-/de-icing component was conducted to verify the validity of the proposed optimization method. The optimized temperature field distribution meets the anti-/de-icing requirements in the numerical simulation. Simulation results show that the temperature distribution optimized by the proposed method is clearly more accurate than the non-optimized distribution, verifying the effectiveness of the method.

11.
The central topic of this article is the estimation of the parameters of the generalized partially linear single-index model (GPLSIM). Two numerical optimization procedures are presented, and an S-plus program based on these procedures is compared to a program by Wand in a simulation setting. The results from these simulations indicate that the estimates from the new procedures are as good as, if not better than, Wand's. Also, this program is much more flexible than Wand's since it can handle more general models. Other simulations are also conducted. The first compares the effects of using linear interpolation versus spline interpolation in an optimization procedure. The results indicate that by using spline interpolation one gets more stable estimates at the cost of increased computational time. A second simulation was conducted to assess the performance of a method for estimating the variance of alpha. A third set of simulations was carried out to determine the best criterion for testing that one of the elements of alpha is equal to zero. The GPLSIM is applied to a water quality data set, and the results indicate an interesting relationship between gastrointestinal illness and turbidity (cloudiness) of drinking water.

12.
Summary. In high throughput genomic work, a very large number d of hypotheses are tested based on n ≪ d data samples. The large number of tests necessitates an adjustment for false discoveries in which a true null hypothesis was rejected. The expected number of false discoveries is easy to obtain. Dependences between the hypothesis tests greatly affect the variance of the number of false discoveries. Assuming that the tests are independent gives an inadequate variance formula. The paper presents a variance formula that takes account of the correlations between test statistics. That formula involves O(d²) correlations, and so a naïve implementation has cost O(nd²). A method based on sampling pairs of tests allows the variance to be approximated at a cost that is independent of d.
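A rough sketch of the pair-sampling idea on simulated rejection indicators: the diagonal term Σ_i Var(R_i) is computed exactly, while the Σ_{i≠j} Cov(R_i, R_j) term is estimated from a random sample of index pairs and scaled up by d(d−1), so the cost no longer grows with d². The indicator matrix and all sizes are illustrative, not the paper's.

```python
import numpy as np

def var_false_discoveries(R, n_pairs=5000, rng=None):
    """Approximate Var(V), where V = sum_i R_i is the number of rejections and
    R is a (B, d) matrix of rejection indicators over B resampled data sets.
    The sum of the d*(d-1) pairwise covariances is estimated from a random
    sample of index pairs, so the cost does not grow with d**2."""
    rng = rng or np.random.default_rng()
    B, d = R.shape
    diag_term = R.var(axis=0, ddof=1).sum()             # sum_i Var(R_i)
    i = rng.integers(0, d, n_pairs)
    j = rng.integers(0, d, n_pairs)
    keep = i != j                                       # only distinct pairs
    i, j = i[keep], j[keep]
    Rc = R - R.mean(axis=0)                             # centre once
    pair_cov = (Rc[:, i] * Rc[:, j]).sum(axis=0) / (B - 1)
    cross_term = d * (d - 1) * pair_cov.mean()          # scale the sampled mean up
    return diag_term + cross_term

# Illustrative simulated correlated test indicators: d tests, B resamples.
rng = np.random.default_rng(1)
B, d = 200, 1000
common = rng.normal(size=(B, 1))                        # shared factor induces correlation
Z = 0.6 * common + 0.8 * rng.normal(size=(B, d))
R = (np.abs(Z) > 1.96).astype(float)                    # nominal 5% rejections
print("approx Var(V):", round(var_false_discoveries(R, rng=rng), 1))
```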

13.
The statistical problems associated with estimating the mean responding cell density in the limiting dilution assay (LDA) have largely been ignored. We evaluate techniques for analyzing LDA data from multiple biological samples, assumed to follow either a normal or a gamma distribution. Simulated data are used to evaluate the performance of an unweighted mean, a log transform, and a weighted mean procedure described by Taswell (1987). In general, an unweighted mean with jackknife estimates will produce satisfactory results. In some cases, a log transform is more appropriate. Taswell's weighted mean algorithm is unable to estimate an accurate variance. We also show that methods which pool samples, or LDA data, are invalid. In addition, we show that optimization of the variance in multiple-sample LDAs depends on the estimator, the between-organism variance, the replicate well size, and the number of biological samples. However, this optimization is generally achieved by maximizing the number of biological samples at the expense of well replicates.

14.
We discuss in the present paper the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood, since the resulting estimators of the parameters in the regression function are more robust to misspecification of the variance function. Despite its popularity, however, the existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results on estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to look in detail at variance minimization in off-line quality control, yielding techniques of inference for the optimized design parameter. In applications of some existing approaches to off-line quality control, such as the dual-response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An example of off-line quality control is presented to discuss the practical aspects involved in applying the results obtained and to address issues such as data transformation, model building and the optimization of design parameters. The analysis shows very encouraging results and is able to reveal some important information not found in previous analyses.

15.
A nonconvex constrained optimization problem is considered in which the constraints are of the form of generalized polynomials. An invexity kernel is established for this class of problem, and a consequent theorem gives sufficient conditions for the solutions of such problems.

16.
When conducting research with controlled experiments, sample size planning is one of the important decisions that researchers have to make. However, current methods do not adequately address this issue when group variances are heterogeneous and cost constraints apply to the comparison of several treatment means. This paper proposes a sample size allocation ratio for the fixed-effect heterogeneous analysis of variance when group variances are unequal and the sampling and/or variable costs are constrained. The efficient sample size allocation is determined either to minimize the total cost for a designated power or to maximize the power for a given total cost. Finally, the proposed method is verified using the index of relative efficiency, the corresponding total cost, and the total sample size needed. We also apply our method to a pain management trial to decide an efficient sample size. Simulation studies also show that the proposed sample size formulas are efficient in terms of statistical power. SAS and R codes are provided in the appendix for easy application.
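For a concrete feel for cost-aware allocation, the sketch below applies the classical rule n_i ∝ σ_i/√c_i (minimum variance for a fixed sampling budget) and scales the allocation to a total budget; the paper's own power-based formulas may differ, and all variances, unit costs and the budget are assumed figures.

```python
import numpy as np

def cost_optimal_allocation(sigmas, unit_costs, total_budget):
    """Allocate group sample sizes proportionally to sigma_i / sqrt(c_i)
    (the classical rule for minimum variance at a fixed total sampling budget),
    then scale so the total cost sum(c_i * n_i) meets the budget."""
    sigmas, unit_costs = np.asarray(sigmas, float), np.asarray(unit_costs, float)
    ratio = sigmas / np.sqrt(unit_costs)
    k = total_budget / np.dot(unit_costs, ratio)   # solve k * sum(c_i * ratio_i) = budget
    return np.floor(k * ratio).astype(int)

# Illustrative (assumed) three-arm trial: unequal variances and unequal per-subject costs.
n = cost_optimal_allocation(sigmas=[8.0, 12.0, 20.0],
                            unit_costs=[10.0, 10.0, 40.0],
                            total_budget=3000.0)
print("group sample sizes:", n)
```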

17.
We consider consistent tests for stochastic dominance efficiency at any order of a given portfolio with respect to all possible portfolios constructed from a set of assets. We justify block bootstrap approaches to achieve valid inference in a time series setting. The test statistics are computed using linear and mixed integer programming formulations. Monte Carlo results show that the bootstrap procedure performs well in finite samples. The empirical application reveals that the Fama and French market portfolio is first- and second-order stochastic dominance efficient, although it is mean–variance inefficient.

18.
Estimators are often defined as the solutions to data dependent optimization problems. A common form of objective function (function to be optimized) that arises in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to such an optimization problem. This article describes an optimization algorithm designed for unconstrained optimization problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
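A minimal sketch of the type of objective the article considers, a convex function V (here a logistic negative log-likelihood) plus a quadratic ridge penalty, minimized with BFGS as in the article's comparison; the simulated data, penalty weight and use of scipy's generic BFGS are illustrative and do not reproduce the article's own algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Penalized logistic regression: a convex negative log-likelihood V(beta)
# plus a quadratic (ridge) complexity penalty, minimised here with BFGS.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
lam = 1.0                                       # penalty weight (assumed)

def objective(beta):
    eta = X @ beta
    # np.logaddexp(0, eta) = log(1 + exp(eta)), a numerically stable convex V
    V = np.sum(np.logaddexp(0.0, eta) - y * eta)
    return V + 0.5 * lam * beta @ beta          # quadratic complexity penalty

res = minimize(objective, x0=np.zeros(5), method="BFGS")
print("penalized estimate:", np.round(res.x, 3))
```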

19.
We analyse a naive method using the sample mean and sample variance to test the convergence of simulation. We find this method is valid for independently and identically distributed samples, as well as for correlated samples whose correlation disappears over long periods. Our simulation results on the approximation of the bankruptcy probability (BP) show that the naive method compares well with the Half-Width, Geweke and CUSUM methods in terms of accuracy and time cost. There is clear evidence of variance reduction from tail-distribution sampling for all convergence test methods when the true BP is very low.

20.
A well-known procedure for the optimization of a second-degree response function over a spherical region of interest is that of ridge analysis. Khuri and Myers (1979) introduced a modification of this procedure by incorporating a certain constraint on the prediction variance. Both procedures, however, assume that the response variable has a constant variance throughout the experimental region. In the present article, we consider two extensions of Khuri and Myers' modified ridge analysis. The first extension relaxes the constant-variance assumption. In the second extension, generalized linear models are used instead of the traditional linear model; these are commonly used for response variables that are not necessarily continuously distributed, including those with discrete distributions. Two examples are presented to illustrate the implementation of the proposed extensions.
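Standard ridge analysis can be sketched as follows: for a fitted second-degree response ŷ(x) = b0 + b'x + x'Bx, the stationary point on a sphere solves (B − μI)x = −b/2, and tracing μ above the largest eigenvalue of B follows the path of maximizing solutions. The coefficients below are assumed for illustration, and Khuri and Myers' prediction-variance constraint is only indicated in a comment.

```python
import numpy as np

# Ridge analysis of a fitted second-degree response y-hat = b0 + b'x + x'Bx:
# for each Lagrange multiplier mu above the largest eigenvalue of B, the
# stationary point on a sphere satisfies (B - mu*I) x = -b / 2.
b0 = 60.0
b = np.array([2.0, -1.5])                      # first-order coefficients (assumed fit)
B = np.array([[-3.0, 1.0],                     # symmetric second-order coefficient matrix
              [1.0, -2.0]])

mu_max = np.linalg.eigvalsh(B).max()
for mu in mu_max + np.array([0.5, 1.0, 2.0, 4.0]):   # path of maximising solutions
    x = np.linalg.solve(B - mu * np.eye(2), -b / 2.0)
    radius = np.linalg.norm(x)
    y_hat = b0 + b @ x + x @ B @ x
    print(f"mu={mu:5.2f}  radius={radius:5.3f}  x={np.round(x, 3)}  y_hat={y_hat:7.3f}")
# Khuri and Myers' modification would further restrict these solutions so that
# the prediction variance along the path stays below a chosen bound.
```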
