Similar Articles
20 similar articles found.
1.
We apply geometric programming, developed by Duffin, Peterson, and Zener (1967), to the optimal allocation of stratified samples. As an introduction, we show how geometric programming is used to allocate samples according to Neyman (1934), using the data of Cornell (1947) and following the exposition of Cochran (1953).

Then we use geometric programming to allocate an integrated sample, introduced by Schwartz (1978), for more efficient sampling of three U.S. federal welfare quality control systems: Aid to Families with Dependent Children, Food Stamps, and Medicaid.

We develop methods for setting up the allocation problem, interpreting it as a geometric programming primal problem, transforming it to the corresponding dual problem, solving that, and finding the sample sizes required in the allocation problem. We show that the integrated sample saves sampling costs.
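The Neyman (1934) allocation that this geometric-programming treatment reproduces has a simple closed form, n_h proportional to N_h * S_h. A minimal sketch in Python, with hypothetical stratum sizes and standard deviations (not the Cornell data):

```python
def neyman_allocation(n, sizes, sds):
    """Neyman allocation: n_h proportional to N_h * S_h."""
    weights = [N * S for N, S in zip(sizes, sds)]
    total = sum(weights)
    return [n * w / total for w in weights]

# Hypothetical strata: population sizes and standard deviations
sizes = [400, 300, 300]
sds = [10.0, 20.0, 40.0]
alloc = neyman_allocation(100, sizes, sds)
print(alloc)  # most of the sample goes to the high-variability stratum
```

The geometric-programming route reaches the same allocation through the primal-dual machinery; the closed form above is the special case it recovers.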

2.
In stratified sample surveys, the problem of determining the optimum allocation is well known from articles published in 1923 by Tschuprow and in 1934 by Neyman. Those articles give the optimum sample sizes to be selected from each stratum so that the sampling variance of the estimator is minimum for a fixed total cost of the survey, or the cost is minimum for a fixed precision of the estimator. If more than one characteristic is to be measured on each selected unit, that is, if the survey is a multi-response survey, then determining the optimum sample sizes for the various strata becomes more complex because no single optimality criterion suits all the characteristics. Many authors have discussed compromise criteria that provide a compromise allocation, optimum for all characteristics at least in some sense. Almost all of these authors worked out the compromise allocation by minimizing some function of the sampling variances of the estimators under a single cost constraint. A serious objection to this approach is that the variances are not unit free, so minimizing a function of variances may not be an appropriate objective for obtaining a compromise allocation. This fact suggests using coefficients of variation instead of variances. In the present article, the problem of compromise allocation is formulated as a multi-objective nonlinear programming problem. By linearizing the nonlinear objective functions at their individual optima, the problem is approximated by an integer linear programming problem. The goal programming technique is then used to solve the approximated problem.

3.
We present a surprising though obvious result that seems to have gone unnoticed until now. In particular, we demonstrate the equivalence of two well-known problems: the optimal allocation of the fixed overall sample size n among L strata under stratified random sampling, and the optimal allocation of the H = 435 seats among the 50 states for apportionment of the U.S. House of Representatives following each decennial census. In spite of the strong similarity manifest in the statements of the two problems, they have not been linked, and they have well-known but different solutions; one solution is not explicitly exact (Neyman allocation) and the other (equal proportions) is exact. We give explicit exact solutions for both and note that the solutions are equivalent. In fact, we conclude by showing that both problems are special cases of a general problem. The result is significant for stratified random sampling in that it explicitly shows how to minimize sampling error when estimating a total TY while keeping the final overall sample size fixed at n; this is usually not the case in practice with Neyman allocation, where the resulting final overall sample size might be near n + L after rounding. An example reveals that controlled rounding with Neyman allocation does not always lead to the optimum allocation, that is, an allocation that minimizes variance.
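The equal-proportions (Huntington-Hill) method mentioned above awards each state one seat and then assigns the remaining seats by the priority value P/sqrt(n(n+1)). A sketch with made-up populations (the real apportionment uses H = 435 and the 50 state populations):

```python
import heapq

def equal_proportions(populations, seats):
    """Huntington-Hill: each state gets 1 seat, the rest go by
    priority value pop / sqrt(n*(n+1)) where n is current seats."""
    alloc = [1] * len(populations)
    # max-heap via negated priorities
    heap = [(-p / 2 ** 0.5, i) for i, p in enumerate(populations)]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        n = alloc[i]
        heapq.heappush(heap, (-populations[i] / (n * (n + 1)) ** 0.5, i))
    return alloc

pops = [620, 310, 70]  # hypothetical state populations
print(equal_proportions(pops, 10))  # → [6, 3, 1]
```

Because every seat is awarded by a single priority comparison, the overall total is exactly the fixed H, which is the exactness property the abstract contrasts with rounded Neyman allocation.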

4.
Numerous optimization problems arise in survey design. The problem of obtaining an optimal (or near-optimal) sampling design can be formulated and solved as a mathematical programming problem. In multivariate stratified sample surveys it is usually not possible, for one reason or another, to use the individual optimum allocations of sample sizes to the various strata. In such situations some criterion is needed to work out an allocation that is optimum for all characteristics in some sense; such an allocation may be called an optimum compromise allocation. This paper examines the problem of determining an optimum compromise allocation in multivariate stratified random sampling when the population means of several characteristics are to be estimated. Formulating the allocation problem as an all-integer nonlinear programming problem, the paper develops a solution procedure using a dynamic programming technique. The compromise allocation discussed is optimal in the sense that it minimizes a weighted sum of the sampling variances of the estimates of the population means of the various characteristics under study. A numerical example illustrates the solution procedure and shows how it compares with Cochran's average allocation and proportional allocation.
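The paper's dynamic programme is its own; as an illustration of the same separable convex objective (a weighted sum of variances, sum_h c_h/n_h with c_h aggregating the per-characteristic terms a_j * W_h^2 * S_jh^2, fpc ignored), a greedy marginal-allocation scheme is sketched below. For separable convex objectives this greedy rule is known to reach the integer optimum; it is offered as a simple alternative, not the paper's procedure.

```python
import heapq

def greedy_compromise(n_total, coeffs, n_min=2):
    """Minimize sum_h c_h / n_h subject to sum n_h = n_total, n_h integer.
    Each step spends one unit where the marginal variance reduction
    c_h * (1/n_h - 1/(n_h + 1)) is largest."""
    H = len(coeffs)
    alloc = [n_min] * H

    def gain(h):
        n = alloc[h]
        return coeffs[h] * (1.0 / n - 1.0 / (n + 1))

    heap = [(-gain(h), h) for h in range(H)]
    heapq.heapify(heap)
    for _ in range(n_total - n_min * H):
        _, h = heapq.heappop(heap)
        alloc[h] += 1
        heapq.heappush(heap, (-gain(h), h))
    return alloc

# hypothetical aggregated coefficients c_h for 3 strata
print(greedy_compromise(30, [100.0, 400.0, 900.0]))  # → [5, 10, 15]
```

The result matches the continuous optimum n_h proportional to sqrt(c_h) here because it happens to be integral.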

5.
Allocation of samples in stratified and/or multistage sampling is one of the central issues of sampling theory. In a survey of a population, constraints on the precision of estimators of subpopulation parameters often have to be taken care of during allocation of the sample. Such issues are often solved with mathematical programming procedures. In many situations it is desirable to allocate the sample in a way that forces the precision of estimates at the subpopulation level to be both optimal and identical, while constraints on the total (expected) size of the sample (or samples, in two-stage sampling) are imposed. Here our main concern is with two-stage sampling schemes. We show that for a wide class of sampling plans this problem has an elegant mathematical and computational solution. This is achieved through a suitable definition of the optimization problem, which enables it to be solved in a linear-algebra setting involving eigenvalues and eigenvectors of matrices defined in terms of certain population quantities. As a final result, we obtain a very simple and relatively universal method for calculating the subpopulation optimal and equal-precision allocation, based on one of the most standard algorithms of linear algebra (available, e.g., in R). Theoretical solutions are illustrated through a numerical example based on the Labour Force Survey. Finally, we stress that the method we describe automatically accommodates different levels of precision priority for subpopulations.
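The population-based matrix in the paper is specific to its sampling schemes and is not reproduced here; purely to illustrate the linear-algebra core (extracting a dominant eigenvector, as R's `eigen` would), here is a pure-Python power iteration on a small hypothetical matrix:

```python
def power_iteration(D, iters=200):
    """Dominant eigenvector of a square nonnegative matrix D,
    normalized so its largest component is 1."""
    n = len(D)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(D[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

# hypothetical population-based matrix
D = [[2.0, 1.0], [1.0, 3.0]]
v = power_iteration(D)
print(v)  # dominant eigenvector; eigenvalue here is (5 + sqrt(5)) / 2
```

In the paper the allocation formulas are then read off from the components of such an eigenvector; any standard eigensolver would do in place of this hand-rolled iteration.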

6.
The maximum likelihood estimation of the parameters of the Poisson binomial distribution, based on a sample with exact and grouped observations, is considered by applying the EM algorithm (Dempster et al., 1977). The results of Louis (1982) are used to obtain the observed information matrix and to accelerate the convergence of the EM algorithm substantially. Maximum likelihood estimation from samples consisting entirely of complete (Sprott, 1958) or grouped observations is treated as a special case of the estimation problem above. A brief account is given of the implementation of the EM algorithm when the sampling distribution is the Neyman Type A, since the latter is a limiting form of the Poisson binomial. Numerical examples based on real data are included.
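The Poisson binomial E-step itself is involved; to show the mechanics of the Dempster et al. (1977) algorithm on the same kind of data (exact plus grouped observations), here is a simplified sketch for a plain Poisson mean, where each grouped observation is known only to lie in an interval. This is an illustration of the EM idea, not the paper's estimator.

```python
from math import exp, factorial

def pois_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def em_grouped_poisson(exact, grouped, lam=1.0, iters=100):
    """EM for a Poisson mean with exact counts plus observations known
    only to fall in an interval [a, b] (grouped data).
    E-step: replace each grouped observation by its conditional mean.
    M-step: lambda = average of exact values and conditional means."""
    n = len(exact) + len(grouped)
    for _ in range(iters):
        total = sum(exact)
        for a, b in grouped:
            probs = [pois_pmf(k, lam) for k in range(a, b + 1)]
            z = sum(probs)
            total += sum(k * p for k, p in zip(range(a, b + 1), probs)) / z
        lam = total / n
    return lam

exact = [2, 3, 1, 4, 2]
grouped = [(0, 2), (3, 6)]      # observed only as intervals
print(round(em_grouped_poisson(exact, grouped), 3))
```

Each iteration cannot decrease the likelihood, which is the monotonicity property the paper accelerates with Louis's (1982) results.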

7.
Modern sampling designs in survey statistics are, in general, constructed to optimize the accuracy of estimators such as totals, means, and proportions. For stratified random sampling, a variance-minimal solution was introduced by Neyman and Tschuprov. However, practical constraints may limit the domain of sampling fractions that can be considered within the optimization. Special attention must be paid to the complexity of numerical solutions in cases with many strata or when the optimal allocation has to be applied repeatedly, as in iterative solutions of stratification problems. The present article gives an overview of recent numerical algorithms that allow adequate inclusion of box constraints in the numerical optimization process. These box constraints may play an important role in statistical modeling. Furthermore, a new approach based on a fixed-point iteration with a finite termination property is presented.
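As a concrete, simplified instance of allocation under box constraints (upper bounds only, and not the article's fixed-point algorithm), one can repeatedly compute the Neyman shares over the unfrozen strata and freeze any stratum that exceeds its bound; since each pass freezes at least one stratum or stops, the scheme terminates after at most H passes:

```python
def neyman_box(n, weights_sds, upper):
    """Neyman allocation with upper bounds n_h <= upper[h]:
    allocate the remaining budget by Neyman among unfrozen strata,
    freeze any stratum exceeding its bound, and repeat."""
    H = len(weights_sds)
    alloc = [0.0] * H
    active = set(range(H))
    budget = float(n)
    while active:
        total = sum(weights_sds[h] for h in active)
        trial = {h: budget * weights_sds[h] / total for h in active}
        over = [h for h in active if trial[h] > upper[h]]
        if not over:
            for h in active:
                alloc[h] = trial[h]
            break
        for h in over:
            alloc[h] = upper[h]
            budget -= upper[h]
            active.remove(h)
    return alloc

ws = [40.0, 60.0, 300.0]   # hypothetical W_h * S_h products
caps = [50.0, 50.0, 40.0]  # stratum capacities (e.g., N_h)
print(neyman_box(100, ws, caps))  # → [24.0, 36.0, 40.0]
```

The third stratum hits its cap, and the remaining budget is re-split by Neyman over the other two.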

8.
Singh and Sukhatme [4] have considered the problem of optimum stratification on an auxiliary variable x when the units from the different strata are selected with probability proportional to the value of the auxiliary variable and the sample sizes for the different strata are determined by the Neyman allocation method. The present paper considers the same problem for the proportional and equal allocation methods. Rules for finding approximately optimum strata boundaries under these two allocation methods are given. An investigation into the relative efficiency of these allocation methods with respect to the Neyman allocation has also been made. The performance of equal allocation is found to be better than that of proportional allocation and practically equivalent to the Neyman allocation.

9.

We consider a problem of allocation of a sample in two- and three-stage sampling. We seek an allocation which is both multi-domain and population efficient. Choudhry et al. (Survey Methodology 38(1):23–29, 2012) recently considered such a problem for one-stage stratified simple random sampling without replacement in domains. Their approach was through minimization of the sample size under constraints on relative variances in all domains and on the overall relative variance; to attain this goal, they used nonlinear programming. Alternatively, we minimize here the relative variances in all domains (controlling them through given priority weights) as well as the overall relative variance under constraints imposed on total (expected) cost. We consider several two- and three-stage sampling schemes. Our aim is to shed some light on the analytic structure of solutions rather than to derive a purely numerical tool for sample allocation. To this end, we develop the eigenproblem methodology introduced in optimal allocation problems in Niemiro and Wesołowski (Appl Math 28:73–82, 2001) and recently updated in Wesołowski and Wieczorkowski (Commun Stat Theory Methods 46(5):2212–2231, 2017), by taking into account several new sampling schemes and, more importantly, a (single) total expected variable cost constraint. This approach allows for solutions that are a direct generalization of the Neyman-type allocation. The structure of the solution is deciphered from the explicit allocation formulas given in terms of an eigenvector \({\underline{v}}^*\) of a population-based matrix \(\mathbf{D}\). The solution we provide can be viewed as a multi-domain version of the Neyman-type allocation in multistage stratified SRSWOR schemes.


10.
This paper proposes an economic-statistical design of the EWMA chart with time-varying control limits, in which Taguchi's quadratic loss function is incorporated into an economic-statistical design based on Lorenzen and Vance's economic model. A nonlinear program with statistical performance constraints is developed and solved to minimize the expected total quality cost per unit time. The model is divided into three parts, depending on whether production continues while the assignable cause is being searched for and/or repaired. Through a computational procedure, the optimal decision variables, including the sample size, the sampling interval, the control limit width, and the smoothing constant, can be solved for under each model. It is shown that the optimal economic-statistical design can be found within the set of optimal solutions obtained from the statistical design, and that both the optimal sample size and the optimal sampling interval always decrease as the magnitude of the shift increases.
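The time-varying control limits referred to are the standard exact EWMA limits; a sketch computing the chart statistic and its limits, with the design parameters (lambda, L) taken as given rather than economically optimized:

```python
def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, with exact
    time-varying limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t)))."""
    z = mu0
    out = []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        half = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        out.append((z, mu0 - half, mu0 + half, z < mu0 - half or z > mu0 + half))
    return out

obs = [0.1, -0.2, 0.3, 2.5, 2.8, 3.1]  # hypothetical data with a late shift
for z, lcl, ucl, signal in ewma_chart(obs, mu0=0.0, sigma=1.0):
    print(f"z={z:.3f}  [{lcl:.3f}, {ucl:.3f}]  signal={signal}")
```

The limits widen toward their asymptotic value, which is why the time-varying form detects early shifts faster than fixed limits; the economic-statistical design then chooses n, the interval, L, and lambda to minimize expected cost.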

11.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive than the control group. We consider the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for randomized block designs, and we develop approximate sample size formulas for when the distribution of the data is non-normal and/or unknown. This study derives the optimal sample size allocation ratio for a given statistical power under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size. Moreover, for a given total cost, an optimal sample size allocation is recommended that maximizes the statistical power of the test. The proposed formulas are not only innovative but also quick and easy to apply. We also use real data from a clinical trial to illustrate how to choose the sample sizes for a randomized two-block design. For nonparametric methods, no existing commercial software for sample size planning considers the cost factor, so the proposed methods provide important insights into the impact of cost constraints.

12.
In this paper, a new mixed sampling plan based on the process capability index (PCI) Cpk is proposed; the resulting plan is called the mixed variable lot-size chain sampling plan (ChSP). The proposed mixed plan comprises both attribute and variables inspections: the variable lot-size sampling plan is used for inspecting attribute quality characteristics, while the variables ChSP based on the PCI is used for inspecting measurable quality characteristics. Both symmetric and asymmetric fraction-nonconforming cases are considered for the variables ChSP. Tables are developed for determining the optimal parameters of the proposed mixed plan based on the two-points-on-the-operating-characteristic (OC) curve approach. To construct the tables, the problem is formulated as a nonlinear program in which the average sample number is the objective function to be minimized, and the lot acceptance probabilities at the acceptable quality level and the limiting quality level under the OC curve are the constraints. The practical implementation of the proposed mixed sampling plan is explained with an illustrative real example. Advantages of the proposed plan are also discussed through comparison with other existing sampling plans.
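The mixed variable lot-size plan itself is more elaborate; to make the OC-curve constraints concrete, here is the acceptance probability of Dodge's classical attribute ChSP-1 plan (accept on zero nonconforming, or on one nonconforming if the preceding i samples all had zero), with hypothetical n, i, and quality levels:

```python
def chsp1_pa(p, n, i):
    """ChSP-1 OC curve (Dodge): Pa(p) = P(0) + P(1) * P(0)**i,
    where P(0), P(1) are binomial probabilities of 0 and 1
    nonconforming units in a sample of n at fraction p."""
    p0 = (1 - p) ** n
    p1 = n * p * (1 - p) ** (n - 1)
    return p0 + p1 * p0 ** i

# two-point design check: high Pa near the AQL, low Pa near the LQL
# (n, i, and the quality levels are hypothetical)
print(round(chsp1_pa(0.01, n=20, i=2), 4))  # near AQL: should be high
print(round(chsp1_pa(0.10, n=20, i=2), 4))  # near LQL: should be low
```

The paper's two-point approach imposes exactly this kind of pair of constraints (Pa high at the AQL, low at the LQL) while minimizing the average sample number.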

13.
Sampling has evolved into a universally accepted approach for gathering information and data mining, as it is widely accepted that a reasonably modest-sized sample can sufficiently characterize a much larger population. In stratified sampling designs, the whole population is divided into homogeneous strata in order to achieve higher precision in estimation. This paper proposes an efficient method of constructing optimum stratum boundaries (OSB) and determining optimum sample sizes (OSS) for the survey variable. Since the variable of interest is unavailable prior to conducting the survey, the method is based on an auxiliary variable, which is usually readily available from past surveys. To illustrate the application with real data, the auxiliary variable considered follows a Weibull distribution. The stratification problem is formulated as a Mathematical Programming Problem (MPP) that minimizes the variance of the estimated population parameter under Neyman allocation. The solution procedure employs the dynamic programming technique, which yields substantial gains in the precision of the estimates of the population characteristics.

14.
A robust estimator is developed for the location and scale parameters of a location-scale family. The estimator is defined as the minimizer of a minimum distance function that measures the distance between the ranked set sample empirical cumulative distribution function and a possibly misspecified target model. We show that the estimator is asymptotically normal, robust, and highly efficient with respect to its competitors in the literature. It is also shown that the location estimator is consistent within the class of all symmetric distributions, whereas the scale estimator is Fisher consistent at the true target model. The paper also considers an optimal allocation procedure that does not introduce any bias due to judgment error classification. It is shown that this allocation procedure is equivalent to Neyman allocation. A numerical efficiency comparison is provided.

15.
In stratified random sampling, it is generally recognised that nonproportional allocation is worthwhile only if the gain in precision is substantial. This note presents a sharp lower bound for the relative precision of proportional to optimum (Neyman) allocation, in terms of the ratio of the largest to the smallest stratum standard deviations. This provides a quick measure of the efficiency of proportional allocation, and may be used as a formal basis for deriving useful practical rules. In particular, it is formally confirmed that, for estimating a proportion, nonproportional allocation is rarely worthwhile.
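The note's exact bound is its own result; as an assumption for illustration, a Kantorovich/Pólya–Szegő-type bound of the plausible form 4r/(1+r)^2, with r = S_min/S_max, can be checked numerically against the precision ratio V_Neyman/V_proportional = (sum W_h S_h)^2 / sum W_h S_h^2 (fpc ignored, same total n):

```python
import random

def precision_ratio(weights, sds):
    """V_Neyman / V_proportional for stratum weights W_h (summing to 1)
    and stratum standard deviations S_h, ignoring the fpc."""
    num = sum(w * s for w, s in zip(weights, sds)) ** 2
    den = sum(w * s * s for w, s in zip(weights, sds))
    return num / den

random.seed(0)
for _ in range(5):
    H = random.randint(2, 6)
    raw = [random.random() for _ in range(H)]
    W = [r / sum(raw) for r in raw]
    S = [random.uniform(1, 10) for _ in range(H)]
    r = min(S) / max(S)
    bound = 4 * r / (1 + r) ** 2   # assumed Kantorovich-type lower bound
    ratio = precision_ratio(W, S)
    assert bound - 1e-12 <= ratio <= 1.0
    print(f"ratio={ratio:.3f}  bound={bound:.3f}")
```

When all stratum standard deviations are equal the ratio is 1 and proportional allocation loses nothing, matching the note's practical message.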

16.
To reduce the loss of efficiency in the Neyman allocation caused by using estimators in place of the unknown strata standard deviations, we suggest a compromise allocation in which the Neyman allocation, based on an estimator of the pooled standard deviation of the combined strata, is used together with the proportional allocation. It is shown that the compromise allocation makes the estimator more efficient than both the proportional allocation and the Neyman allocation that uses the estimated strata standard deviations. A simulation study is carried out for numerical comparison and the results are reported.

17.
When conducting research with controlled experiments, sample size planning is one of the important decisions researchers have to make. However, current methods do not adequately address this issue when group variances are heterogeneous and there are cost constraints on comparing several treatment means. This paper proposes a sample size allocation ratio for the fixed-effect heterogeneous analysis of variance when group variances are unequal and the sampling and/or variable costs are constrained. The efficient sample size allocation is determined so as to minimize total cost for a designated power, or to maximize power for a given total cost. Finally, the proposed method is verified using the index of relative efficiency together with the corresponding total cost and total sample size needed. We also apply the method in a pain management trial to decide an efficient sample size. Simulation studies show that the proposed sample size formulas are efficient in terms of statistical power. SAS and R code is provided in the appendix for easy application.
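A classical rule underlying such cost-constrained designs allocates n_i proportional to sigma_i/sqrt(c_i); a sketch of the continuous solution for a fixed budget (the paper's exact formulas, with power constraints, are more refined than this):

```python
def cost_optimal_allocation(total_cost, sds, costs):
    """Allocate group sample sizes n_i proportional to sigma_i / sqrt(c_i),
    scaled so that sum_i c_i * n_i equals the budget (continuous solution)."""
    ratios = [s / c ** 0.5 for s, c in zip(sds, costs)]
    spend_per_unit = sum(c * r for c, r in zip(costs, ratios))
    scale = total_cost / spend_per_unit
    return [r * scale for r in ratios]

sds = [4.0, 8.0]      # hypothetical group standard deviations
costs = [1.0, 4.0]    # hypothetical per-observation costs
n = cost_optimal_allocation(120.0, sds, costs)
print(n)  # → [24.0, 24.0]: the pricier group's larger SD offsets its cost
```

In this example the second group is twice as variable but four times as expensive, so the sqrt-cost rule ends up assigning the two groups equal sizes.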

18.
When information on a highly positively correlated auxiliary variable x is used to construct stratified regression (or ratio) estimates of the population mean of the study variable y, the paper considers the problem of determining approximately optimum strata boundaries (AOSB) on x when the sample size in each stratum is equal. The form of the conditional variance function V(y|x) is assumed to be known. A numerical investigation into the relative efficiency of equal allocation with respect to the Neyman and proportional allocations has also been made. The relative efficiency of equal allocation with respect to Neyman allocation is found to be nearly equal to one.

19.
The Hartley-Rao-Cochran sampling design is an unequal probability sampling design which can be used to select samples from finite populations. We propose to adjust the empirical likelihood approach for the Hartley-Rao-Cochran sampling design. The proposed approach intrinsically incorporates sampling weights and auxiliary information, and allows for large sampling fractions. It can be used to construct confidence intervals. In a simulation study, we show that coverage may be better for the empirical likelihood confidence interval than for standard confidence intervals based on variance estimates. The proposed approach is simple to implement and less computer-intensive than the bootstrap. The proposed confidence interval does not rely on re-sampling, linearization, variance estimation, design effects, or joint inclusion probabilities.
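For context, the Hartley-Rao-Cochran selection step itself (random grouping followed by one probability-proportional-to-size draw per group) can be sketched as follows; this is the design, not the paper's empirical-likelihood machinery, and the size measures are hypothetical:

```python
import random

def hrc_sample(sizes, n, rng=random):
    """Hartley-Rao-Cochran design: randomly split the N units into n
    groups, then draw one unit per group with probability proportional
    to its size measure within the group."""
    idx = list(range(len(sizes)))
    rng.shuffle(idx)
    groups = [idx[g::n] for g in range(n)]
    sample = []
    for g in groups:
        tot = sum(sizes[i] for i in g)
        r = rng.uniform(0, tot)
        acc = 0.0
        for i in g:
            acc += sizes[i]
            if r <= acc:
                sample.append(i)
                break
    return sample

rng = random.Random(42)
sizes = [rng.randint(1, 100) for _ in range(20)]
print(hrc_sample(sizes, 4, rng))  # 4 units, one per random group
```

Because the groups are disjoint, the n selected units are automatically distinct, which is part of what makes the design convenient to analyze.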

20.
A new allocation proportion for response-adaptive designs is derived using differential equation methods. The new allocation is compared with the balanced and Neyman allocations and with the optimal allocation proposed by Rosenberger, Stallard, Ivanova, Harper, and Ricks (RSIHR), from an ethical point of view and in terms of statistical power. The new allocation has the ethical advantage of allocating more than 50% of patients to the better treatment. It also allocates a higher proportion of patients to the better treatment than the RSIHR optimal allocation when the success probabilities are larger than 0.5. The statistical power under the proposed allocation is compared with that under the balanced, Neyman, and RSIHR optimal allocations through simulation. The simulation results indicate that the statistical power under the proposed allocation proportion is similar to that under the balanced, Neyman, and RSIHR allocations.
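For binary outcomes, the benchmark allocations referred to have closed forms: balanced (1/2), Neyman (proportional to sqrt(p_h q_h)), and RSIHR (proportional to sqrt(p_h)). A sketch comparing them at hypothetical success probabilities (the paper's new differential-equation allocation is not reproduced here):

```python
from math import sqrt

def allocations(p1, p2):
    """Proportion of patients assigned to treatment 1 under three
    classical rules for binary outcomes (q = 1 - p)."""
    q1, q2 = 1 - p1, 1 - p2
    neyman = sqrt(p1 * q1) / (sqrt(p1 * q1) + sqrt(p2 * q2))
    rsihr = sqrt(p1) / (sqrt(p1) + sqrt(p2))   # Rosenberger et al. optimal
    return {"balanced": 0.5, "neyman": neyman, "rsihr": rsihr}

a = allocations(p1=0.8, p2=0.6)  # treatment 1 is the better one
print(a)
```

At these values RSIHR sends more than half the patients to the better arm while Neyman sends fewer, illustrating the ethical tension the abstract discusses.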


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号