Similar Literature
20 similar records found (search time: 31 ms)
1.
2.
Measures of multivariate skewness and kurtosis are proposed that are based on the skewness and kurtosis of individual components of standardized sample vectors. Asymptotic properties and small sample critical values of tests for nonnormality based on these measures are provided. It is demonstrated that the tests have favorable power properties. Extensions to time series data are pointed out.
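As a concrete illustration of the kind of statistic this abstract describes, the sketch below standardizes the sample vectors with the inverse square root of the sample covariance matrix and then computes the skewness and excess kurtosis of each component. This is a minimal sketch of the general idea only, not the authors' exact test statistics or critical values.

```python
import numpy as np
from scipy import stats

def componentwise_skew_kurt(X):
    """Standardize sample vectors as S^{-1/2}(x - xbar), then return the
    skewness and excess kurtosis of each standardized component."""
    X = np.asarray(X, dtype=float)
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    # symmetric inverse square root of the sample covariance matrix
    vals, vecs = np.linalg.eigh(S)
    S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - xbar) @ S_inv_sqrt
    return stats.skew(Z, axis=0), stats.kurtosis(Z, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # normal data: both should be near 0
b1, b2 = componentwise_skew_kurt(X)
```

Under normality both vectors should be close to zero; a formal test would compare suitably scaled versions of them to the critical values the paper tabulates.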

3.
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to phase II clinical trials with endpoints from exponential dispersion family distributions. The proposed optimal design minimizes the total sample size needed to estimate the population means of both arms and their difference with pre-specified precision. Its application to data from specific distribution families is discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
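For the special case of dichotomous endpoints, the precision constraints reduce to bounds on the standard errors of the two event-rate estimates and of their difference. The sketch below assumes equal per-arm allocation; the anticipated rates and precision targets are illustrative values, not numbers from the paper.

```python
import math

def min_n_per_arm(p1, p2, se_arm, se_diff):
    """Smallest equal per-arm n such that SE(p1_hat), SE(p2_hat) and
    SE(p1_hat - p2_hat) all meet their precision targets, assuming
    binomial endpoints with anticipated rates p1 and p2."""
    n = 1
    while True:
        v1 = p1 * (1 - p1) / n
        v2 = p2 * (1 - p2) / n
        if (math.sqrt(v1) <= se_arm and math.sqrt(v2) <= se_arm
                and math.sqrt(v1 + v2) <= se_diff):
            return n
        n += 1

# anticipated rates 0.2 vs 0.4, per-arm SE <= 0.05, SE of difference <= 0.08
n = min_n_per_arm(0.2, 0.4, se_arm=0.05, se_diff=0.08)
```

The binding constraint here is the per-arm standard error of the larger-variance arm; the paper's contribution is replacing the binomial variance with that of a general exponential dispersion family member.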

4.
The design of double acceptance sampling (AS) plans for attributes based on the operating characteristic (OC) curve paradigm is usually addressed by enumeration algorithms. The resulting AS plans may be non-optimal with respect to the sample size to inspect, because they are obtained without requiring that the constraints at the OC-curve controlled points remain satisfied in minimum Average Sample Number (ASN) scenarios. An approach based on mathematical programming is proposed to systematically design double AS plans for attributes in which the characteristics controlled are modelled by binomial or Poisson distributions. Specifically, Mixed Integer Nonlinear Programming (MINLP) formulations are developed and combined with an enumeration algorithm that finds ASN-minimax optimal plans. A theoretical result is established to ensure that the globally optimal design is reached by iteration, with a suitable solver used to find local optima. To validate the algorithm, we compare our results with tables commonly used in practice, considering different risk rates and setups common in Lot Quality Assurance Sampling (LQAS) for health monitoring programmes. Finally, we compare AS plans determined for processes described by binomial and Poisson distributions.
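The OC curve and ASN of a double attribute plan under the binomial model can be evaluated in closed form; the enumeration and MINLP approaches mentioned above search over such evaluations. A minimal sketch, with plan parameters (n1, c1, r1, n2, c2) chosen arbitrarily for illustration:

```python
from scipy.stats import binom

def double_plan_oc_asn(n1, c1, r1, n2, c2, p):
    """Acceptance probability and ASN of a double sampling plan:
    accept if d1 <= c1, reject if d1 >= r1; otherwise draw n2 more
    items and accept iff d1 + d2 <= c2 (binomial counts)."""
    p_accept = binom.cdf(c1, n1, p)      # accepted on the first sample
    p_second = 0.0                       # probability a second sample is needed
    for d1 in range(c1 + 1, r1):
        pd1 = binom.pmf(d1, n1, p)
        p_second += pd1
        p_accept += pd1 * binom.cdf(c2 - d1, n2, p)
    asn = n1 + n2 * p_second
    return p_accept, asn

pa, asn = double_plan_oc_asn(n1=50, c1=1, r1=4, n2=50, c2=4, p=0.02)
```

A design algorithm would minimize the worst-case ASN over p while keeping the acceptance probability above 1 − α at the producer's quality level and below β at the consumer's level.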

5.
Despite the tremendous effort devoted to designs with cross-sectional data, little research has addressed sample size calculation and power analysis under repeated measures designs. In addition to the time-averaged difference, the change in mean response over time (CIMROT) is a primary interest in repeated measures analysis. We generalized sample size calculation and power analysis equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures, evaluated the performance of the proposed methods through simulation, and compared our approach to a two-stage model formulation. We also created a software procedure implementing the proposed methods.

6.
Sampling cost is a crucial factor in sample size planning, particularly when treatment-group units are more expensive than control-group units. We consider the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for randomized block designs, and develop approximate sample size formulas for cases where the distribution of the data is non-normal and/or unknown. This study derives the optimal sample size allocation ratio for a given statistical power under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size. Moreover, for a given total cost, the optimal sample size allocation is recommended to maximize the statistical power of the test. The proposed formulas are not only novel but also quick and easy to apply. We also use real data from a clinical trial to illustrate how to choose the sample size for a randomized two-block design. No existing commercial software for sample size planning considers the cost factor for nonparametric methods, so the proposed methods provide important insights into the impact of cost constraints.
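The cost-optimal allocation described above has a classical parametric analogue: minimizing total cost c1·n1 + c2·n2 at a fixed variance of the estimated mean difference gives n1/n2 = (σ1/σ2)·√(c2/c1). The sketch below is that textbook result, verified by brute force; it is not the paper's Wilcoxon–Mann–Whitney or van Elteren-specific formula, and all numbers are illustrative.

```python
import math

def optimal_allocation(sigma1, sigma2, c1, c2):
    """Cost-optimal allocation ratio n1/n2 = (sigma1/sigma2) * sqrt(c2/c1)
    for estimating a mean difference at minimum total cost c1*n1 + c2*n2."""
    return (sigma1 / sigma2) * math.sqrt(c2 / c1)

# brute-force check: among all (n1, n2) pairs achieving
# sigma1^2/n1 + sigma2^2/n2 <= v_target, find the cheapest one
sigma1 = sigma2 = 1.0
c1, c2 = 4.0, 1.0              # a treatment unit costs 4x a control unit
v_target = 0.02
best = min(
    ((n1, n2) for n1 in range(1, 400) for n2 in range(1, 400)
     if sigma1 ** 2 / n1 + sigma2 ** 2 / n2 <= v_target + 1e-9),
    key=lambda nn: c1 * nn[0] + c2 * nn[1],
)
ratio = best[0] / best[1]      # should match the theoretical sqrt rule
```

With a 4:1 cost ratio and equal variances the cheapest feasible pair puts half as many units in the expensive arm, matching the √(c2/c1) = 0.5 rule.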

7.
M-quantile models with application to poverty mapping
Over the last decade there has been growing demand for estimates of population characteristics at the small area level. Unfortunately, cost constraints in the design of sample surveys lead to small sample sizes within these areas, and as a result direct estimation, using only the survey data, is inappropriate since it yields estimates with unacceptable levels of precision. Small area models are designed to tackle the small sample size problem. The most popular class of models for small area estimation is random effects models, which include random area effects to account for between-area variation. However, such models depend on strong distributional assumptions, require a formal specification of the random part of the model and do not easily allow for outlier-robust inference. An alternative approach to small area estimation based on M-quantile models was recently proposed by Chambers and Tzavidis (Biometrika 93(2):255–268, 2006) and Tzavidis and Chambers (Robust prediction of small area means and distributions. Working paper, 2007). Unlike traditional random effects models, M-quantile models do not depend on strong distributional assumptions and automatically provide outlier-robust inference. In this paper we illustrate for the first time how M-quantile models can be employed in practice to derive small area estimates of poverty and inequality. The proposed methodology improves on traditional poverty mapping methods in the following ways: (a) it enables estimation of the distribution function of the study variable within the small area of interest under both an M-quantile and a random effects model, (b) it provides analytical, instead of empirical, estimation of the mean squared error of the M-quantile small area mean estimates and (c) it employs an outlier-robust estimation method.
The methodology is applied to data from the 2002 Living Standards Measurement Survey (LSMS) in Albania to estimate (a) district-level incidence of poverty, (b) district-level inequality measures and (c) the distribution function of household per-capita consumption expenditure in each district. The small area estimates of poverty and inequality show that the poorest Albanian districts are in the mountainous regions (north and north-east), while the wealthiest districts, which are also associated with high levels of inequality, are in the coastal (south-west) and southern parts of the country. We discuss the practical advantages of our methodology and note the consistency of our results with those of previous studies. We further demonstrate the usefulness of the M-quantile estimation framework through design-based simulations based on two realistic survey data sets containing small area information, and show that the M-quantile approach may be preferable when the aim is to estimate the small area distribution function.

8.
This paper proposes an economic-statistical design of the EWMA chart with time-varying control limits, in which Taguchi's quadratic loss function is incorporated into an economic-statistical design based on Lorenzen and Vance's economic model. A nonlinear program with statistical performance constraints is developed and solved to minimize the expected total quality cost per unit time. The model is divided into three parts, depending on whether production continues while the assignable cause is being searched for and/or repaired. Through a computational procedure, the optimal decision variables, including the sample size, the sampling interval, the control limit width, and the smoothing constant, can be obtained for each model. It is shown that the optimal economic-statistical design can be found within the set of optimal solutions of the statistical design, and that both the optimal sample size and the sampling interval always decrease as the magnitude of the shift increases.

9.
The problems of optimally selecting the decision function, the design, and the sample size have mainly been worked out in separate theories with different objective functions. In applications, however, a single objective function is given, and the three components of the statistical approach must be chosen simultaneously with respect to it. This paper contains initial proposals for such an approach, demonstrated by the example of parameter estimation for the normal distribution. In addition, a general separability theorem and a duality theorem for two optimization problems of the complete statistical problem are given.

10.
We consider in this article the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications, such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to handle complex problems that lack analytical results. Furthermore, the method yields estimates of the quantiles of a sample statistic for any sample size, although Monte Carlo simulations are needed only for two optimally selected sample sizes. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and the number of simulated samples required for each. Theoretical results are presented for the bias and variance of the proposed numerical method. The results are illustrated via simulation studies for the classical problem of estimating a bivariate linear structural relationship. It is seen that the size of the simulated samples used in the Monte Carlo method does not have to be very large, and the method provides a better approximation to quantiles than those based on asymptotic normal theory for skewed sampling distributions.
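The brute-force baseline that the paper's two-sample-size design improves on can be sketched in a few lines: simulate many samples of size n, evaluate the statistic on each, and take empirical quantiles. The optimal choice of the two sample sizes is not reproduced here; the distribution and statistic below are illustrative.

```python
import numpy as np

def mc_quantiles(statistic, sampler, n, q, n_sim=5000, seed=0):
    """Approximate quantiles of a sample statistic by plain Monte Carlo:
    draw n_sim samples of size n, evaluate the statistic on each, and
    take empirical quantiles of the simulated values."""
    rng = np.random.default_rng(seed)
    vals = np.array([statistic(sampler(rng, n)) for _ in range(n_sim)])
    return np.quantile(vals, q)

# 2.5% and 97.5% quantiles of the sample mean of Exp(1) data, n = 30
lo, hi = mc_quantiles(np.mean, lambda rng, n: rng.exponential(1.0, n),
                      n=30, q=[0.025, 0.975])
```

For a skewed sampling distribution such as this one, the simulated quantiles are visibly asymmetric around 1, which is exactly the situation where the paper reports gains over the asymptotic normal approximation.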

11.
In this article, we propose a double-sampling (DS) np control chart. We assume that the time interval between samples is fixed. The choice of the design parameters of the proposed chart and also comparisons between charts are based on statistical properties, such as the average number of samples until a signal. The optimal design parameters of the proposed control chart are obtained. During the optimization procedure, constraints are imposed on the in-control average sample size and on the in-control average run length. In this way, required statistical properties can be assured. Varying some input parameters, the proposed DS np chart is compared with the single-sampling np chart, variable sample size np chart, CUSUM np and EWMA np charts. The comparisons are carried out considering the optimal design for each chart. For the ranges of parameters considered, the DS scheme is the fastest one for the detection of increases of 100% or more in the fraction non-conforming and, moreover, the DS np chart is easy to operate.
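For reference, the single-sampling np chart against which the DS chart is compared signals when the non-conforming count in a sample exceeds the upper control limit, and with independent samples its average run length (ARL) is the reciprocal of the per-sample signal probability. A minimal sketch with illustrative design values (not taken from the paper):

```python
from scipy.stats import binom

def np_chart_arl(n, ucl, p):
    """ARL of a one-sided np chart that signals when the count of
    non-conforming items in a sample of size n exceeds ucl
    (independent samples, binomial counts at fraction p)."""
    p_signal = 1.0 - binom.cdf(ucl, n, p)
    return 1.0 / p_signal

arl0 = np_chart_arl(n=100, ucl=5, p=0.01)   # in control
arl1 = np_chart_arl(n=100, ucl=5, p=0.02)   # fraction non-conforming doubled
```

A DS scheme adds a second, conditional sample before deciding, which is what buys the faster detection reported in the abstract at comparable in-control ARL.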

12.
Two-stage k-sample designs for the ordered alternative problem
In preclinical studies and clinical dose-ranging trials, the Jonckheere-Terpstra test is widely used in the assessment of dose-response relationships. Hewett and Spurrier (1979) presented a two-stage analog of the test in the context of large sample sizes. In this paper, we propose an exact test based on Simon's minimax and optimal design criteria originally used in one-arm phase II designs based on binary endpoints. The convergence rate of the joint distribution of the first and second stage test statistics to the limiting distribution is studied, and design parameters are provided for a variety of assumed alternatives. The behavior of the test is also examined in the presence of ties, and the proposed designs are illustrated through application in the planning of a hypercholesterolemia clinical trial. The minimax and optimal two-stage procedures are shown to be preferable as compared with the one-stage procedure because of the associated reduction in expected sample size for given error constraints.

13.
Single-arm one- or multi-stage study designs are commonly used in phase II oncology development when the primary outcome of interest is tumor response, a binary variable. Both two- and three-outcome designs are available. Simon's two-stage design is a well-known example of a two-outcome design. The objective of a two-outcome trial is to reject either the null hypothesis that the objective response rate (ORR) is less than or equal to a pre-specified, uninterestingly low rate, or the alternative hypothesis that the ORR is greater than or equal to some target rate. Three-outcome designs proposed by Sargent et al. allow a middle gray decision zone that rejects neither hypothesis, in order to reduce the required sample size. We propose new two- and three-outcome designs with continual monitoring based on Bayesian posterior probability that meet frequentist specifications such as type I and II error rates. Futility and/or efficacy boundaries are based on confidence functions, which can require higher levels of evidence for early versus late stopping and have clear, intuitive interpretations. We search within a class of such procedures for optimal designs that minimize a given loss function, such as the average sample size under the null hypothesis. We present several examples, compare our design with other procedures in the literature, and show that our design has good operating characteristics.
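Simon's two-stage design mentioned above can be found by exhaustive search over (r1, n1, r, n): stop for futility after n1 patients if at most r1 respond, otherwise reject H0 at the end iff total responses exceed r. The sketch below caps the total sample size to keep the search fast, so it returns the optimum within that cap rather than the unrestricted Simon optimum; the design inputs are illustrative.

```python
import numpy as np
from scipy.stats import binom

def simon_optimal(p0, p1, alpha, beta, n_max=20):
    """Exhaustive search for Simon's optimal two-stage design with total
    sample size capped at n_max.  'Optimal' minimizes the expected sample
    size under H0 subject to the type I/II error constraints."""
    best = None                                   # (EN0, r1, n1, r, n)
    for n1 in range(1, n_max):
        pmf0 = binom.pmf(np.arange(n1 + 1), n1, p0)
        pmf1 = binom.pmf(np.arange(n1 + 1), n1, p1)
        for n2 in range(1, n_max - n1 + 1):
            n = n1 + n2
            # P(X2 > k) for k = -1, 0, ..., n2 (index with k + 1)
            sf0 = np.concatenate(([1.0], binom.sf(np.arange(n2 + 1), n2, p0)))
            sf1 = np.concatenate(([1.0], binom.sf(np.arange(n2 + 1), n2, p1)))
            for r1 in range(n1):
                en0 = n1 + (1.0 - pmf0[: r1 + 1].sum()) * n2
                if best is not None and en0 >= best[0]:
                    continue                      # cannot beat current optimum
                x1 = np.arange(r1 + 1, n1 + 1)
                for r in range(r1, n):
                    k = np.clip(r - x1, -1, n2)
                    if pmf0[x1] @ sf0[k + 1] > alpha:
                        continue                  # type I too large; raise r
                    if pmf1[x1] @ sf1[k + 1] >= 1 - beta:
                        best = (en0, r1, n1, r, n)
                    break                         # larger r only cuts power
    return best

best = simon_optimal(p0=0.05, p1=0.25, alpha=0.10, beta=0.10, n_max=20)
```

The paper's Bayesian continual-monitoring designs generalize this: instead of one interim look, every patient triggers a posterior-probability boundary check, with the boundaries calibrated to the same frequentist error rates.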

14.
Variance estimation of changes requires estimates of variances and covariances that would be relatively straightforward to make if the sample remained the same from one wave to the next, but this is rarely the case in practice as successive waves are usually different overlapping samples. The author proposes a design-based estimator for covariance matrices that is adapted to this situation. Under certain conditions, he shows that his approach yields non-negative definite estimates for covariance matrices and therefore positive variance estimates for a large class of measures of change.

15.
Phase II clinical trials often use binary outcomes, so assessing the success rate of the treatment is a primary objective. Reporting confidence intervals is common practice in clinical trials. Due to the group sequential design and the relatively small sample size, many existing confidence intervals for phase II trials are overly conservative. In this paper, we propose a class of confidence intervals for binary outcomes. We also provide a general theory for assessing the coverage of confidence intervals for discrete distributions, and hence make recommendations for choosing the parameter used in calculating the confidence interval. The proposed method is applied to Simon's [14] optimal two-stage design with numerical studies. The proposed method can be viewed as a general alternative approach to confidence intervals for discrete distributions.
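The exact-coverage assessment for discrete distributions that this abstract refers to can be illustrated with the Clopper-Pearson interval (used here only as a familiar example; the paper proposes its own class of intervals): for each possible outcome x, compute the interval, then sum the binomial pmf over the outcomes whose interval covers the true p.

```python
from scipy.stats import beta, binom

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - a, x + 1, n - x) if x < n else 1.0
    return lo, hi

def exact_coverage(n, p, conf=0.95):
    """Exact coverage probability at p: sum the binomial pmf over the
    outcomes whose interval contains p -- the standard way to assess
    interval performance for discrete distributions."""
    cov = 0.0
    for x in range(n + 1):
        lo, hi = clopper_pearson(x, n, conf)
        if lo <= p <= hi:
            cov += binom.pmf(x, n, p)
    return cov

cov = exact_coverage(n=25, p=0.3)
```

Plotting `exact_coverage` over a grid of p values shows the saw-tooth coverage typical of discrete data; Clopper-Pearson never dips below the nominal level, which is precisely the conservatism the paper's intervals aim to reduce.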

16.
Allocation of samples in stratified and/or multistage sampling is one of the central issues of sampling theory. In a survey of a population, constraints on the precision of estimators of subpopulation parameters often have to be taken into account during sample allocation. Such issues are often solved with mathematical programming procedures. In many situations it is desirable to allocate the sample so that the precision of estimates at the subpopulation level is both optimal and identical across subpopulations, while constraints on the total (expected) sample size (or sizes, in two-stage sampling) are imposed. Here our main concern is two-stage sampling schemes. We show that this problem has an elegant mathematical and computational solution for a wide class of sampling plans. This is achieved by a suitable definition of the optimization problem, which can then be solved in a linear algebra setting involving eigenvalues and eigenvectors of matrices defined in terms of certain population quantities. As a final result, we obtain a simple and relatively universal method for calculating the optimal, equal-precision subpopulation allocation, based on one of the most standard algorithms of linear algebra (available, e.g., in R software). The theoretical solutions are illustrated through a numerical example based on the Labour Force Survey. Finally, we stress that the described method automatically accommodates different levels of precision priority for subpopulations.

17.
This paper addresses the optimal design problems for constant-stress accelerated degradation test (CSADT) based on gamma processes with fixed effect and random effect. For three optimization criteria, we prove that optimal CSADT plans with multiple stress levels degenerate to two-stress-level test plans only using the minimum and maximum stress levels under model assumptions. Under each optimization criterion, the optimal sample size allocation proportions for the minimum and maximum stress levels are determined theoretically. The effect of the stress level on the objective functions is also discussed. A numerical example and a simulation study are provided to illustrate the obtained results.

18.
When the X̄ control chart is used to monitor a process, three parameters must be determined: the sample size, the sampling interval between successive samples, and the control limits of the chart. Duncan presented a cost model for determining the three parameters of an X̄ chart. Alexander et al. combined Duncan's cost model with the Taguchi loss function to present a loss model for determining the three parameters. In this paper, the Burr distribution is employed to conduct the economic-statistical design of X̄ charts for non-normal data. Alexander's loss model is used as the objective function, and the cumulative distribution function of the Burr distribution is applied to derive the statistical constraints of the design. An example is presented to illustrate the solution procedure. From the results of the sensitivity analyses, we find that small values of the skewness coefficient have no significant effect on the optimal design; however, a larger skewness coefficient leads to a slightly larger sample size and sampling interval, as well as wider control limits. Meanwhile, an increase in the kurtosis coefficient results in a larger sample size and wider control limits.

19.
A variable-delay process sampling procedure is considered in a Markov chain structure. The paper extends the basic sampling method given in Arnold (1970). Analytic properties of the process are developed for the expected sample size and the distribution of the sample size. A primary concern of the paper is the development of an objective function that enhances the ability to select optimal sampling policies. The objective function involves sampling costs and protection costs for detecting undesirable conditions.

20.
We introduce estimation and test procedures based on divergence minimization for models satisfying linear constraints with an unknown parameter. These procedures extend the empirical likelihood (EL) method and share common features with the generalized empirical likelihood approach. We treat the problems of existence and characterization of the divergence projections of probability distributions on sets of signed finite measures. We give a precise characterization of duality for the proposed class of estimates and test statistics, which is used to derive their limiting distributions (including those of the EL estimate and the EL ratio statistic) both under the null hypotheses and under alternatives or misspecification. An approximation to the power function is deduced, as well as the sample size that ensures a desired power against a given alternative.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), 京ICP备09084417号