Similar Documents (20 results)
1.
An overview of risk-adjusted charts
Summary. The paper provides an overview of risk-adjusted charts, with examples based on two data sets: the first consisting of outcomes following cardiac surgery and patient factors contributing to the Parsonnet score; the second being age–sex-adjusted death-rates per year under a single general practitioner. Charts presented include the cumulative sum (CUSUM), resetting sequential probability ratio test, the sets method and Shewhart chart. Comparisons between the charts are made. Estimation of the process parameter and two-sided charts are also discussed. The CUSUM is found to be the least efficient, under the average run length (ARL) criterion, of the resetting sequential probability ratio test class of charts, but the ARL criterion is thought not to be sensible for comparisons within that class. An empirical comparison of the sets method and CUSUM, for binary data, shows that the sets method is more efficient when the in-control ARL is small, and more efficient over a slightly larger range of in-control ARLs when the change in parameter being tested for is larger. The Shewhart p-chart is found to be less efficient than the CUSUM even when the change in parameter being tested for is large.
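As an illustration of how such a chart can be implemented, the sketch below uses one common risk-adjusted CUSUM formulation, with log-likelihood-ratio increments driven by each patient's predicted risk (e.g. from a Parsonnet-type model). The odds ratio R, the decision threshold, and the simulated data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, threshold=4.5):
    """Risk-adjusted CUSUM for binary surgical outcomes.

    outcomes : 0/1 array (1 = death); risks : predicted death probability per
    patient under in-control performance. The increment is the log-likelihood
    ratio for an odds-ratio shift of size `odds_ratio` (illustrative value).
    Returns the CUSUM path and the index of the first signal (or None).
    """
    s, path, signal_at = 0.0, [], None
    for t, (y, p) in enumerate(zip(outcomes, risks)):
        denom = 1.0 - p + odds_ratio * p                    # normalising constant
        w = np.log(odds_ratio / denom) if y == 1 else np.log(1.0 / denom)
        s = max(0.0, s + w)                                 # reset-at-zero CUSUM
        path.append(s)
        if signal_at is None and s > threshold:
            signal_at = t
    return np.array(path), signal_at

# Illustrative use with simulated data (not the cardiac-surgery data).
rng = np.random.default_rng(0)
risks = rng.uniform(0.02, 0.30, size=500)                   # patient-specific predicted risks
outcomes = rng.binomial(1, np.minimum(1.0, 1.6 * risks))    # deteriorated process
path, signal_at = risk_adjusted_cusum(outcomes, risks)
print("first signal at patient index:", signal_at)
```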

2.
Consider the following problem. There are exactly two defective (unknown) elements in the set X = {x1, x2, …, xn}, all possibilities occurring with equal probability. We want to identify the unknown (defective) elements by testing some subsets A of X, and for each such set A determining whether A contains any of them. The test on an individual subset A tells us either that all elements of the tested set A are good, or that at least one of them is defective (but not which ones or how many). A set containing at least one defective element is said to be defective. Our aim is to minimize the maximal number of tests. For the optimal strategy, let the maximal number of tests be denoted by l2(n). We obtain the value of this function for an infinite sequence of values of n.
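To make the test model concrete, the sketch below simulates the group tests together with a simple (near-optimal, but not optimal) halving strategy, and compares the number of tests used with the information-theoretic lower bound ceil(log2 C(n,2)) on l2(n). The strategy and the example defective positions are illustrative only, not the paper's construction.

```python
import numpy as np

def find_one_defective(candidates, is_defective_set):
    """Halving search for one defective inside a set known to contain >= 1.

    is_defective_set(A) answers the group test: True iff A contains a defective.
    Returns (a defective element, number of tests used).
    """
    tests = 0
    candidates = list(candidates)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests += 1
        candidates = half if is_defective_set(half) else candidates[len(half):]
    return candidates[0], tests

def find_two_defectives(n, defectives):
    """Simple strategy: locate one defective by halving over X, then the other
    by halving over X minus the first (which contains exactly one defective)."""
    test = lambda A: any(a in defectives for a in A)
    d1, t1 = find_one_defective(range(n), test)
    rest = [i for i in range(n) if i != d1]
    d2, t2 = find_one_defective(rest, test)
    return {d1, d2}, t1 + t2

n = 100
lower_bound = int(np.ceil(np.log2(n * (n - 1) / 2)))   # information lower bound on l2(n)
found, tests_used = find_two_defectives(n, defectives={17, 63})
print(f"found {found} with {tests_used} tests; lower bound is {lower_bound}")
```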

3.
In experiments with mixtures involving process variables, orthogonal block designs may be used to allow estimation of the parameters of the mixture components independently of estimation of the parameters of the process variables. In the class of orthogonally blocked designs based on pairs of suitably chosen Latin squares, the optimal designs consist primarily of binary blends of the mixture components, regardless of how many ingredients are available for the mixture. This paper considers ways of modifying these optimal designs so that some or all of the runs used in the experiment include a minimum proportion of each mixture ingredient. The designs considered are nearly optimal in the sense that the experimental points are chosen to follow ridges of maxima in the optimality criteria. Specific designs are discussed for mixtures involving three and four components and distinctions are identified for different designs with the same optimality properties. The ideas presented for these specific designs are readily extended to mixtures with q>4 components.

4.
A simple procedure for specifying a histogram with variable cell sizes is proposed. The procedure chooses a set of cutpoints that maximizes a criterion function based on the sample spacings. Under some conditions, this estimated set of cutpoints is shown to converge in probability to the theoretical set of cutpoints for the histogram estimate that minimizes the Hellinger distance to the underlying density. An algorithm for finding the set of cutpoints that numerically maximizes the criterion function is presented, along with an example. Performance for finite sample sizes is evaluated by simulation.
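The paper's spacings-based criterion is not reproduced here; purely as an illustration of the same kind of combinatorial cutpoint search, the sketch below picks interior cutpoints from a quantile grid to maximize the histogram log-likelihood, a stand-in criterion for variable-cell histograms.

```python
import numpy as np
from itertools import combinations

def best_variable_histogram(data, n_cuts=3, n_candidates=15):
    """Exhaustive search over interior cutpoints chosen from sample quantiles.

    Criterion: histogram log-likelihood sum_j c_j * log(c_j / (n * w_j)),
    used here only as a stand-in for the paper's spacings-based criterion.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    cand = np.quantile(x, np.linspace(0.05, 0.95, n_candidates))
    best_score, best_edges = -np.inf, None
    for cuts in combinations(cand, n_cuts):
        edges = np.concatenate(([x[0]], cuts, [x[-1]]))
        counts, _ = np.histogram(x, bins=edges)
        widths = np.diff(edges)
        mask = counts > 0                        # empty cells contribute nothing
        score = np.sum(counts[mask] * np.log(counts[mask] / (n * widths[mask])))
        if score > best_score:
            best_score, best_edges = score, edges
    return best_edges, best_score

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 1.5, 600)])
edges, score = best_variable_histogram(data)
print("selected cell boundaries:", np.round(edges, 2))
```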

5.
A new allocation proportion is derived by using differential equation methods for response-adaptive designs. This new allocation is compared, from an ethical point of view and in terms of statistical power, with the balanced and Neyman allocations and with the optimal allocation proposed by Rosenberger, Stallard, Ivanova, Harper and Ricks (RSIHR). The new allocation has the ethical advantage of allocating more than 50% of patients to the better treatment. It also allocates a higher proportion of patients to the better treatment than the RSIHR optimal allocation when the success probabilities are larger than 0.5. The statistical power under the proposed allocation is compared with that under the balanced, Neyman and RSIHR optimal allocations through simulation. The simulation results indicate that the statistical power under the proposed allocation proportion is similar to that under the balanced, Neyman and RSIHR allocations.
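For reference, the sketch below evaluates the three benchmark allocations mentioned above for binary responses, using the standard formulas: balanced (1/2), Neyman (proportional to sqrt(p(1-p))) and RSIHR (proportional to sqrt(p)). The proposed differential-equation-based allocation itself is not reproduced, and the success probabilities are illustrative.

```python
import numpy as np

def neyman_allocation(pa, pb):
    """Neyman allocation to treatment A: proportional to sqrt(p(1-p))."""
    sa, sb = np.sqrt(pa * (1 - pa)), np.sqrt(pb * (1 - pb))
    return sa / (sa + sb)

def rsihr_allocation(pa, pb):
    """RSIHR optimal allocation to treatment A: proportional to sqrt(p)."""
    return np.sqrt(pa) / (np.sqrt(pa) + np.sqrt(pb))

for pa, pb in [(0.7, 0.5), (0.8, 0.6), (0.9, 0.7)]:       # illustrative success probabilities
    print(f"pA={pa}, pB={pb}: balanced=0.500, "
          f"Neyman={neyman_allocation(pa, pb):.3f}, RSIHR={rsihr_allocation(pa, pb):.3f}")
```

Note that the RSIHR proportion exceeds 1/2 whenever pA > pB, which is the ethical property the new allocation is designed to strengthen.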

6.
In this paper, given an arbitrary fixed target sample size, we describe a sequential allocation scheme for comparing two competing treatments in clinical trials. The proposed scheme is a compromise between ethical and optimal allocations. Using some specific probability models, we show that, for estimating the risk difference (RD) between the two treatment effects, the scheme provides a smaller variance than the corresponding fixed-sample-size equal-allocation scheme.

7.
A unit ω is to be classified into one of two correlated homoskedastic normal populations by the linear discriminant function known as the W classification statistic [T.W. Anderson, An asymptotic expansion of the distribution of the studentized classification statistic, Ann. Statist. 1 (1973), pp. 964–972; T.W. Anderson, An Introduction to Multivariate Statistical Analysis, 2nd edn, Wiley, New York, 1984; G.J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, John Wiley and Sons, New York, 1992]. The two populations studied here are two different states of the same population, such as two different states of a disease, where the population is the population of diseased patients. When a sample unit is observed in both states (populations), the observations made on it (which form a pair) become correlated. A training sample is unbalanced when not all sample units are observed in both states. Paired and unbalanced samples are natural in studies of correlated populations. S. Bandyopadhyay and S. Bandyopadhyay [Choosing better training sample for classifying an individual into one of two correlated normal populations, Calcutta Statist. Assoc. Bull. 54(215–216) (2003), pp. 167–180] studied the effect of an unbalanced training-sample structure on the performance of the W statistic in the univariate correlated normal set-up, with the aim of finding an optimal sampling strategy for a better classification rate. In this study, the results are extended to the multivariate case, with a discussion of applications in real scenarios.
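The classical W statistic for two homoskedastic normal populations is the plug-in linear discriminant score sketched below; the correlated-pair and unbalanced-training-sample refinements studied in the paper are not shown, and the training data are simulated for illustration.

```python
import numpy as np

def w_statistic(x, xbar1, xbar2, S_pooled):
    """Anderson's W statistic; classify x to population 1 when W(x) >= 0."""
    return (x - 0.5 * (xbar1 + xbar2)) @ np.linalg.solve(S_pooled, xbar1 - xbar2)

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.4], [0.4, 1.0]])
X1 = rng.multivariate_normal([0.0, 0.0], cov, size=100)   # training sample, population 1
X2 = rng.multivariate_normal([1.5, 1.0], cov, size=100)   # training sample, population 2
xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
S = ((len(X1) - 1) * np.cov(X1, rowvar=False) +
     (len(X2) - 1) * np.cov(X2, rowvar=False)) / (len(X1) + len(X2) - 2)  # pooled covariance
x_new = np.array([0.2, 0.3])
w = w_statistic(x_new, xbar1, xbar2, S)
print(f"W(x) = {w:.3f} ->", "population 1" if w >= 0 else "population 2")
```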

8.
The predictor that minimizes mean-squared prediction error is used to derive a goodness-of-fit measure that offers an asymptotically valid model selection criterion for a wide variety of regression models. In particular, a new goodness-of-fit criterion (cr2) is proposed for censored or otherwise limited dependent variables. The new goodness-of-fit measure is then applied to the analysis of duration.

9.
The predictor that minimizes mean-squared prediction error is used to derive a goodness-of-fit measure that offers an asymptotically valid model selection criterion for a wide variety of regression models. In particular, a new goodness-of-fit criterion (cr2) is proposed for censored or otherwise limited dependent variables. The new goodness-of-fit measure is then applied to the analysis of duration.

10.
Mangat and Singh (1990) suggested a two-stage randomized response technique to estimate the proportion of a population possessing a sensitive attribute. The procedure was shown to be more efficient than the procedure due to Warner (1965). Recently, Tracy and Osahan (1993) suggested a modification to the Mangat and Singh (1990) procedure which results in a more efficient strategy in practice. In this paper we propose a modification to the Tracy and Osahan (1993) procedure. The modified procedure is a generalization of Tracy and Osahan (1993) and is always more efficient than their strategy. An empirical study has also been undertaken to assess the extent of the gain in relative efficiency.
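The sketch below reflects the usual statement of the Mangat–Singh two-stage device (with probability T the respondent answers truthfully; otherwise Warner's device with probability P is used) and checks the corresponding estimator by simulation. The parameter values are illustrative, and the Tracy–Osahan modification and the generalization proposed in the paper are not reproduced.

```python
import numpy as np

def mangat_singh_estimate(yes_prop, T, P):
    """Two-stage randomized-response estimator of the sensitive proportion.

    Stage 1: with probability T the respondent answers truthfully about the
    sensitive attribute A; otherwise Warner's device is used, i.e. with
    probability P the question is "do you have A?" and with 1-P "do you not
    have A?". Then P(yes) = T*pi + (1-T)*(P*pi + (1-P)*(1-pi)), which is
    inverted below.
    """
    return (yes_prop - (1 - T) * (1 - P)) / (T + (1 - T) * (2 * P - 1))

rng = np.random.default_rng(2)
pi_true, T, P, n = 0.30, 0.5, 0.7, 5000
has_attr = rng.random(n) < pi_true
truthful = rng.random(n) < T
warner_card = rng.random(n) < P               # card asking directly about A
yes = np.where(truthful, has_attr, np.where(warner_card, has_attr, ~has_attr))
print("estimate:", round(mangat_singh_estimate(yes.mean(), T, P), 3), "true:", pi_true)
```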

11.
12.
Two variations of a simple monotonic algorithm for computing optimal designs on a finite design space are presented. Various properties are listed. Comparisons with other algorithms are made.
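One well-known monotonic algorithm of this type (not necessarily either of the paper's two variations) is the multiplicative algorithm for D-optimal designs on a finite candidate set, sketched below for a quadratic regression model; the candidate grid and iteration count are illustrative.

```python
import numpy as np

def d_optimal_multiplicative(F, n_iter=2000):
    """Multiplicative (monotonic) algorithm for D-optimal design weights.

    F : (N, m) model matrix of the N candidate design points.
    Update w_i <- w_i * d_i / m, where d_i = f_i' M(w)^{-1} f_i is the
    standardized prediction variance; since sum_i w_i d_i = m, the weights
    stay normalized, and the update increases det M(w) monotonically.
    """
    N, m = F.shape
    w = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)
        w *= d / m
    return w

x = np.linspace(-1, 1, 21)                                # finite design space
F = np.column_stack([np.ones_like(x), x, x ** 2])         # quadratic regression model
w = d_optimal_multiplicative(F)
for xi, wi in zip(x, w):
    if wi > 0.01:
        print(f"x = {xi:+.2f}  weight = {wi:.3f}")
# The mass concentrates near x = -1, 0 and +1 (roughly 1/3 each), the known
# D-optimal design for quadratic regression on [-1, 1].
```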

13.
A new optimization algorithm is presented to solve the stratification problem. Assuming the number L of strata and the total sample size n are fixed, we obtain strata boundaries by using an objective function associated with the variance. In this problem, strata boundaries must be determined so that the elements in each stratum are as homogeneous as possible. To produce more homogeneous strata, this paper proposes a new algorithm that uses the Greedy Randomized Adaptive Search Procedure (GRASP) methodology. Computational results are presented for a set of problems, comparing the new algorithm with some algorithms from the literature.
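A compact sketch of the GRASP pattern (randomized greedy construction followed by local search) applied to choosing strata boundaries is shown below. The objective used here, the sum over strata of N_h * S_h (which drives the variance under Neyman allocation), and the neighbourhood moves are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def stratum_cost(y_sorted, cuts):
    """Sum over strata of N_h * S_h for boundaries given as sorted cut indices.
    Strata with fewer than two units contribute nothing here."""
    edges = [0, *cuts, len(y_sorted)]
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        seg = y_sorted[a:b]
        if len(seg) > 1:
            total += len(seg) * seg.std(ddof=1)
    return total

def grasp_stratify(y, L=4, n_starts=10, alpha=0.3, seed=0):
    """GRASP: randomized greedy construction of L-1 cuts, then local search."""
    rng = np.random.default_rng(seed)
    y = np.sort(np.asarray(y, dtype=float))
    candidates = np.unique(np.linspace(1, len(y) - 1, 99).astype(int))  # percentile grid of cut positions
    best_cuts, best_cost = None, np.inf
    for _ in range(n_starts):
        cuts = []
        for _ in range(L - 1):                                 # greedy randomized construction
            free = [c for c in candidates if c not in cuts]
            costs = [stratum_cost(y, sorted(cuts + [c])) for c in free]
            order = np.argsort(costs)
            rcl = order[: max(1, int(alpha * len(free)))]      # restricted candidate list
            cuts.append(free[rng.choice(rcl)])
        cuts = sorted(cuts)
        improved = True                                        # local search: shift cuts on the grid
        while improved:
            improved = False
            for i in range(L - 1):
                pos = int(np.searchsorted(candidates, cuts[i]))
                for j in (pos - 1, pos + 1):
                    if 0 <= j < len(candidates):
                        trial = sorted(cuts[:i] + [candidates[j]] + cuts[i + 1:])
                        if len(set(trial)) == L - 1 and stratum_cost(y, trial) < stratum_cost(y, cuts):
                            cuts, improved = trial, True
        cost = stratum_cost(y, cuts)
        if cost < best_cost:
            best_cuts, best_cost = cuts, cost
    return y[best_cuts], best_cost

rng = np.random.default_rng(3)
population = rng.lognormal(mean=3.0, sigma=1.0, size=2000)     # skewed illustrative population
boundaries, cost = grasp_stratify(population, L=4)
print("stratum boundaries:", np.round(boundaries, 1), " objective:", round(cost, 1))
```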

14.
The problem of selecting the bandwidth for optimal kernel density estimation at a point is considered. A class of local bandwidth selectors which minimize smoothed bootstrap estimates of mean-squared error in density estimation is introduced. It is proved that the bandwidth selectors in the class achieve optimal relative rates of convergence, dependent upon the local smoothness of the target density. Practical implementation of the bandwidth selection methodology is discussed. The use of Gaussian-based kernels to facilitate computation of the smoothed bootstrap estimate of mean-squared error is proposed. The performance of the bandwidth selectors is investigated empirically.
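A brute-force Monte-Carlo version of the smoothed bootstrap MSE estimate with Gaussian kernels is sketched below; the pilot bandwidth g, the bandwidth grid and the evaluation point are illustrative choices, and the closed-form computations and rate results of the paper are not reproduced.

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate at the scalar point x_eval."""
    u = (x_eval - data) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))

def smoothed_bootstrap_bandwidth(data, x, h_grid, g, n_boot=200, seed=0):
    """Pick h minimizing a smoothed-bootstrap estimate of MSE of f_hat(x).

    Resamples are drawn from the pilot estimate f_g (resample data and add
    Gaussian noise with sd g); the target value is the pilot density f_g(x).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    target = kde(x, data, g)
    mse = np.zeros(len(h_grid))
    for _ in range(n_boot):
        boot = rng.choice(data, size=n, replace=True) + rng.normal(0.0, g, size=n)
        for j, h in enumerate(h_grid):
            mse[j] += (kde(x, boot, h) - target) ** 2
    return h_grid[np.argmin(mse / n_boot)]

rng = np.random.default_rng(4)
data = rng.normal(0.0, 1.0, size=300)
h_grid = np.linspace(0.05, 1.0, 20)
g = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)      # rule-of-thumb pilot bandwidth
print("selected local bandwidth at x = 0:", smoothed_bootstrap_bandwidth(data, 0.0, h_grid, g))
```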

15.
This paper describes an efficient algorithm for the construction of optimal or near-optimal resolvable incomplete block designs (IBDs) for any number of treatments v < 100. The performance of this algorithm is evaluated against known lattice designs and the 414 α-designs of Patterson & Williams [36]. For the designs under study, our algorithm appears to be about as effective as the simulated annealing algorithm of Venables & Eccleston [42]. An example of the use of our algorithm to construct the row (or column) components of resolvable row–column designs is given.

16.
In this paper the authors present an upper bound for the distribution function of quadratic forms in a normal vector with mean zero and a positive definite covariance matrix. They also show that the new upper bound is more precise than the bounds introduced by Okamoto [4] and by Siddiqui [5]. Theoretical error bounds for both the new and the Okamoto upper bounds are derived. For a larger number of terms in any given positive definite quadratic form, a rougher but easier-to-compute upper bound is suggested.

17.
Optimal experimental design for estimation of the hemodynamic response function (HRF) is investigated using a nonlinear model with a quadratic mean squared error design criterion. This criterion is used, along with a genetic algorithm, to select locally optimal designs that are shown to be, in most cases, more efficient than designs selected with the more commonly used linear expansion criterion. These designs are also shown to result in lower overall asymptotic estimator variance and bias. The investigation focuses on a single stimulus type, but the criterion can also be used with multiple stimulus types.

18.
The Shewhart p-chart or np-chart is commonly used for monitoring the counts of non-conforming items which are usually well modelled by a binomial distribution with parameters n and p where n is the number of items inspected each time and p is the process fraction of non-conforming items produced. It is well known that the Shewhart chart is not sensitive to small shifts in p. The cumulative sum (CUSUM) chart is a far more powerful charting procedure for detecting small shifts in p and only marginally less powerful in detecting large shifts in p. The choice of chart parameters of a Shewhart chart is well documented in the quality control literature. On the other hand, very little has been done for the more powerful CUSUM chart, possibly due to the fact that the run length distribution of a CUSUM chart is much harder to compute. An optimal design strategy is given here which allows the chart parameters of an optimal CUSUM chart to be determined easily. Optimal choice of n and the relationship between the CUSUM chart and the sequential probability ratio test are also investigated.
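The sketch below shows the standard SPRT-based increment for a binomial CUSUM and estimates its in-control and out-of-control ARLs by simulation; the chart parameters (n, p0, p1 and the decision interval h) are illustrative values, not the optimal choices derived in the paper.

```python
import numpy as np

def binomial_cusum_run_length(n, p0, p1, p_true, h, rng, max_len=100_000):
    """Run length of an upward binomial CUSUM with SPRT (log-likelihood-ratio) increments."""
    la, lb = np.log(p1 / p0), np.log((1 - p1) / (1 - p0))
    s = 0.0
    for t in range(1, max_len + 1):
        x = rng.binomial(n, p_true)                 # non-conforming count in sample t
        s = max(0.0, s + x * la + (n - x) * lb)     # reset-at-zero CUSUM
        if s > h:
            return t
    return max_len

def arl(n, p0, p1, p_true, h, reps=500, seed=0):
    """Monte-Carlo estimate of the average run length."""
    rng = np.random.default_rng(seed)
    return np.mean([binomial_cusum_run_length(n, p0, p1, p_true, h, rng) for _ in range(reps)])

n, p0, p1, h = 50, 0.02, 0.04, 4.0                  # illustrative chart parameters
print("in-control ARL (p = p0)    :", round(arl(n, p0, p1, p_true=p0, h=h), 1))
print("out-of-control ARL (p = p1):", round(arl(n, p0, p1, p_true=p1, h=h), 1))
```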

19.
The long computational time required to construct optimal designs for computer experiments has limited their use in practice. In this paper, a new algorithm for constructing optimal experimental designs is developed. There are two major developments involved in this work. One is an efficient global optimal search algorithm, named the enhanced stochastic evolutionary (ESE) algorithm. The other is a set of efficient methods for evaluating optimality criteria. The proposed algorithm is compared to existing techniques and found to be much more efficient in terms of computation time, the number of exchanges needed to generate new designs, and the achieved optimality criteria. The algorithm is also flexible enough to construct various classes of optimal designs that retain certain desired structural properties.
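The ESE algorithm itself is not reproduced here; as a baseline illustrating the element-exchange moves that such algorithms organize, the sketch below improves the maximin distance of a Latin hypercube design by random within-column swaps with a simple greedy acceptance rule.

```python
import numpy as np

def min_pairwise_distance(D):
    """Smallest Euclidean distance between any two rows of D."""
    diff = D[:, None, :] - D[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist[np.triu_indices(len(D), k=1)].min()

def swap_search_lhd(n, k, n_iter=5000, seed=0):
    """Random-swap search for a maximin Latin hypercube design (n runs, k factors).

    Each move swaps two levels within one column (which preserves the Latin
    hypercube structure) and is kept only if the minimum inter-point distance
    does not decrease; ESE-type algorithms replace this greedy rule with an
    adaptive acceptance threshold and cheaper incremental criterion updates.
    """
    rng = np.random.default_rng(seed)
    D = np.column_stack([rng.permutation(n) for _ in range(k)]).astype(float)
    best = min_pairwise_distance(D)
    for _ in range(n_iter):
        col = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        D[[i, j], col] = D[[j, i], col]             # propose a within-column swap
        score = min_pairwise_distance(D)
        if score >= best:
            best = score                            # keep the improved (or equal) design
        else:
            D[[i, j], col] = D[[j, i], col]         # revert the swap
    return D, best

design, min_dist = swap_search_lhd(n=12, k=3)
print("minimum pairwise distance of the final design:", round(min_dist, 3))
```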

20.
Allocation of samples in stratified and/or multistage sampling is one of the central issues of sampling theory. In a survey of a population, constraints on the precision of estimators of subpopulation parameters often have to be taken care of during the allocation of the sample. Such issues are often solved with mathematical programming procedures. In many situations it is desirable to allocate the sample in a way that forces the precision of estimates at the subpopulation level to be both optimal and identical, while constraints on the total (expected) size of the sample (or samples, in two-stage sampling) are imposed. Here our main concern is with two-stage sampling schemes. We show that this problem, for a wide class of sampling plans, has an elegant mathematical and computational solution. This is achieved through a suitable definition of the optimization problem, which enables it to be solved in a linear algebra setting involving eigenvalues and eigenvectors of matrices defined in terms of some population quantities. As a final result, we obtain a very simple and relatively universal method for calculating the subpopulation optimal and equal-precision allocation, based on one of the most standard algorithms of linear algebra (available, e.g., in R software). The theoretical solutions are illustrated through a numerical example based on the Labour Force Survey. Finally, we stress that the method we describe allows one to accommodate, quite automatically, different levels of precision priority for the subpopulations.
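The eigenvalue-based equal-precision allocation is not reproduced here; as the classical single-stage building block it generalizes, the sketch below computes the standard Neyman (optimal) allocation n_h proportional to N_h * S_h for a fixed total sample size, with illustrative stratum sizes and standard deviations.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Classical optimal (Neyman) allocation: n_h proportional to N_h * S_h."""
    N_h, S_h = np.asarray(N_h, dtype=float), np.asarray(S_h, dtype=float)
    weights = N_h * S_h
    n_h = n_total * weights / weights.sum()
    return np.rint(n_h).astype(int)               # simple rounding; totals may need adjustment

N_h = [5000, 12000, 3000, 800]                    # stratum sizes (illustrative)
S_h = [2.1, 5.4, 9.8, 22.0]                       # stratum standard deviations (illustrative)
print("allocated sample sizes:", neyman_allocation(N_h, S_h, n_total=1200))
```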
