Similar Articles
20 similar articles found (search time: 93 ms)
1.
Assuming stratified simple random sampling, a confidence interval for a finite population quantile may be desired. Using a confidence interval with endpoints given by order statistics from the combined stratified sample, several procedures to obtain lower bounds (and approximations for the lower bounds) for the confidence coefficients are presented. The procedures differ with respect to the amount of prior information assumed about the variate values in the finite population, and the extent to which sample data are used to estimate the lower bounds.

2.
Methods for comparing designs for a random (or mixed) linear model have focused primarily on criteria based on single-valued functions. In general, these functions are difficult to use, because of their complex forms, in addition to their dependence on the model's unknown variance components. In this paper, a graphical approach is presented for comparing designs for random models. The one-way model is used for illustration. The proposed approach is based on using quantiles of an estimator of a function of the variance components. The dependence of these quantiles on the true values of the variance components is depicted by plotting the so-called quantile dispersion graphs (QDGs), which provide a comprehensive picture of the quality of estimation obtained with a given design. The QDGs can therefore be used to compare several candidate designs. Two methods of estimation of variance components are considered, namely analysis of variance and maximum-likelihood estimation.

3.
A number of authors have presented tabulations for the stable distributions based on infinite expansions for the density functions. In this paper we derive exact bounds for the truncation errors in these expansions and use the results to comment on some of the problems that have arisen in tabulating the stable distribution functions. The derivation of the truncation bounds relies on a little-known result for complex argument Taylor series due to Darboux (1876), which is of much wider applicability than the present context.

4.
The idea of measuring the departure of data by a plot of observed observations against their expectations has been exploited in this paper to develop tests for exponentiality. The tests are for the two-parameter exponential distribution and the one-parameter exponential distribution with complete samples. Large-sample distributions of the test statistics are obtained, critical points have been computed for different levels of significance, and applications of these tests are discussed for three data sets.

5.
"We provide and illustrate methods for evaluating across-the-board ratio estimation and synthetic estimation, two techniques that might be used for improving population estimates for small areas. The methods emphasize determination of a break-even accuracy of knowledge concerning externally obtained population totals, which marks the point at which improvement occurs." The techniques are illustrated using 1980 U.S. census data.

6.
Six Sigma is the name given to one of the most widely used methodologies for improving efficiency and profitability in manufacturing and production processes. Its users are trained in the basics of statistics, problem solving and project management. Shirley Coleman tells us that Six Sigma is an effective vehicle for introducing the statistics that have been available for years but have not been implemented.

7.
In monitoring clinical trials, the question of futility, or whether the data thus far suggest that the results at the final analysis are unlikely to be statistically successful, is regularly of interest over the course of a study. However, the opposite viewpoint of whether the study is sufficiently demonstrating proof of concept (POC) and should continue is a valuable consideration and ultimately should be addressed with high POC power so that a promising study is not prematurely terminated. Conditional power is often used to assess futility, and this article interconnects the ideas of assessing POC for the purpose of study continuation with conditional power, while highlighting the importance of the POC type I error and the POC type II error for study continuation or not at the interim analysis. Methods for analyzing subgroups motivate the interim analyses to maintain high POC power via an adjusted interim POC significance level criterion for study continuation or testing against an inferiority margin. Furthermore, two versions of conditional power based on the assumed effect size or the observed interim effect size are considered. Graphical displays illustrate the relationship of the POC type II error for premature study termination to the POC type I error for study continuation and the associated conditional power criteria.

8.
Research concerning hospital readmissions has mostly focused on statistical and machine learning models that attempt to predict this unfortunate outcome for individual patients. These models are useful in certain settings, but their performance in many cases is insufficient for implementation in practice, and the dynamics of how readmission risk changes over time is often ignored. Our objective is to develop a model for aggregated readmission risk over time, using a continuous-time Markov chain, beginning at the point of discharge. We derive point and interval estimators for readmission risk, and find the asymptotic distributions for these probabilities. Finally, we validate our derived estimators using simulation, and apply our methods to estimate readmission risk over time using discharge and readmission data for surgical patients.
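The paper's state space and rates are not given in the abstract; as a minimal illustration of the continuous-time Markov chain idea, consider a hypothetical two-state chain (home vs. readmitted) with assumed rates lam (readmission) and mu (return home), for which the time-t readmission probability has a closed form:

```python
from math import exp

def readmission_prob(t, lam, mu):
    """P(patient is in the 'readmitted' state at time t, starting at home)
    for a two-state CTMC with generator Q = [[-lam, lam], [mu, -mu]]:
        P_01(t) = lam / (lam + mu) * (1 - exp(-(lam + mu) * t)).
    As t grows this rises to the stationary probability lam / (lam + mu)."""
    return lam / (lam + mu) * (1 - exp(-(lam + mu) * t))

# Hypothetical rates: readmission at 0.1/day, recovery at 0.5/day
print(round(readmission_prob(7.0, 0.1, 0.5), 4))  # rises toward 1/6
```

Larger state spaces (e.g. multiple readmissions, death as absorbing) require the matrix exponential of the generator rather than this two-state closed form.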

9.
This paper presents the simplest procedure that uses modular arithmetic for constructing confounded designs for mixed factorial experiments. The present procedure and the classical one for confounding in symmetrical factorial experiments are both at the same mathematical level. The present procedure is written for practitioners and is illustrated with several examples.

10.
The classical problem of change point is considered when the data are assumed to be correlated. The nuisance parameters in the model are the initial level μ and the common variance σ². Four cases, based on whether none, one, or both of these parameters are known, are considered. Likelihood ratio tests are obtained for testing hypotheses regarding the change in level, δ, in each case. Following Henderson (1986), a Bayesian test is obtained for the two-sided alternative. Under the Bayesian setup, a locally most powerful unbiased test is derived for the case μ = 0 and σ² = 1. The exact null distribution function of the Bayesian test statistic is given an integral representation. Methods to obtain exact and approximate critical values are indicated.
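As a simplified illustration of the likelihood ratio idea (assuming, unlike the paper, independent observations with known μ = 0 and σ² = 1), the statistic for a level shift after an unknown index k is a scan over candidate change points; the form below is a hypothetical sketch, not the paper's correlated-data statistic:

```python
def change_point_lrt(x):
    """Scan statistic for a shift from level 0 to some delta after index k,
    assuming x_i ~ N(0,1) before the change and N(delta,1) after it.
    Profiling out delta, the log-likelihood ratio at candidate k is
    (sum_{i>k} x_i)^2 / (2*(n-k)); the estimated change point maximizes it."""
    n = len(x)
    best_k, best_stat = None, float("-inf")
    for k in range(1, n):            # change occurs after observation k
        tail = sum(x[k:])
        stat = tail * tail / (2 * (n - k))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Idealized noiseless data: level 0 for 20 points, then level 3
x = [0.0] * 20 + [3.0] * 20
k_hat, stat = change_point_lrt(x)
print(k_hat)  # -> 20
```

With correlated data, as in the paper, the tail sum must be whitened by the covariance structure before squaring, which is what makes the exact distribution theory hard.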

11.
Fueled by a recent groundswell of support, scholarly publishing organizations are formalizing their focus on equity and inclusion, yet there is still a lack of effective programs and solutions actually in place. A cross-organizational working group is developing antiracism toolkits (for organizations, for allies, and for Black, Indigenous, and People of Color) to transform scholarly publishing workplaces and organizational cultures. The resources, which will be hosted at c4disc.org, provide a common framework for analysis, a shared vocabulary, best practices, and training materials to guide individuals and organizations as they address systemic inequities specific to the scholarly publishing community. Originally planned as a presentation at the NC Serials Conference in March 2020, this article introduces the toolkits and explains why an antiracist framework is essential to transforming scholarly publishing.

12.
Laplace approximations for the Pitman estimators of location or scale parameters, including terms of order O(n⁻¹), are obtained. The resulting expressions involve the maximum-likelihood estimate and the derivatives of the log-likelihood function up to order 3. The results can be used to refine the approximations for the optimal compromise estimators for location parameters considered by Easton (1991). Some applications and Monte Carlo simulations are discussed.

13.
Summary. The paper is written to inform public discussion on whether or not statistical legislation for the UK is needed and, if so, on its nature and content. A brief account of the background to the current position is given. The Government's stated intention is to create an 'independent statistical service', and a discussion of the meaning of independence in the context of official statistics and governance arrangements is provided. Recent international experience is described, and the Statistics Acts of some other countries are used to distil the key features of such legislation. The arguments for and against possible legislation are described. Whether or not a Statistics Act is desirable for the UK depends strongly on the legislation being well framed. There are several key issues on which Parliament would need to develop an informed view, and these are set out towards the end of the paper.

14.
In this article, we describe a new approach to comparing the power of different tests for normality. This approach provides researchers with a practical tool for evaluating which test at their disposal is the most appropriate for their sampling problem. Using the Johnson system of distributions, we estimate the power of a test for normality for any mean, variance, skewness, and kurtosis. Using this characterization and an innovative graphical representation, we validate our method by comparing three well-known tests for normality: the Pearson χ² test, the Kolmogorov–Smirnov test, and the D'Agostino–Pearson K² test. We obtain such comparisons for a broad range of skewness, kurtosis, and sample sizes. We demonstrate that the D'Agostino–Pearson test gives greater power than the others against most of the alternative distributions and at most sample sizes. We also find that the Pearson χ² test gives greater power than the Kolmogorov–Smirnov test against most of the alternative distributions for sample sizes between 18 and 330.
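The Monte Carlo power estimation underlying such comparisons is straightforward to sketch. The stdlib-only example below uses the Jarque–Bera statistic, a moment-based relative of the D'Agostino–Pearson K² test, and an exponential alternative rather than the Johnson system; all function names here are illustrative, not the authors':

```python
import random

def jarque_bera(x):
    """Jarque-Bera statistic n/6 * (S^2 + (K-3)^2 / 4), where S and K are
    the sample skewness and kurtosis; approximately chi-square(2)
    distributed under normality."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    S = m3 / m2 ** 1.5
    K = m4 / m2 ** 2
    return n / 6.0 * (S * S + (K - 3.0) ** 2 / 4.0)

def power(sampler, n, reps=2000, crit=5.99):
    """Fraction of simulated samples rejected; 5.99 is the 0.95 quantile
    of chi-square(2)."""
    rng = random.Random(0)
    hits = sum(jarque_bera([sampler(rng) for _ in range(n)]) > crit
               for _ in range(reps))
    return hits / reps

# Estimated power against an exponential alternative at n = 50
print(power(lambda rng: rng.expovariate(1.0), 50))
```

Repeating this over a grid of skewness/kurtosis pairs (which the Johnson system parameterizes directly) yields the power surfaces the article plots.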

15.
A Bayes-type estimator is proposed for the worth parameter π_i and for the treatment effect parameter ln π_i in the Bradley–Terry model for paired comparisons. In contrast to current Bayes estimators, which require iterative numerical calculations, this estimator has a closed-form expression. This estimation technique is also extended to obtain estimators for the Luce multiple comparison model. An application of this technique to a 2³ factorial experiment with paired comparisons is presented.
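The paper's closed-form Bayes-type estimator is not reproduced in the abstract; for contrast, here is the iterative calculation it avoids, the classical Zermelo/MM maximum-likelihood iteration for the worths π_i:

```python
def bradley_terry_ml(wins, iters=200):
    """Zermelo/MM iteration for Bradley-Terry worths.  wins[i][j] is the
    number of times item i beat item j.  Each pass applies
        pi_i <- W_i / sum_{j != i} n_ij / (pi_i + pi_j),
    where W_i is i's total wins and n_ij the number of i-vs-j comparisons,
    then renormalizes the worths to sum to 1."""
    t = len(wins)
    pi = [1.0 / t] * t
    for _ in range(iters):
        new = []
        for i in range(t):
            W = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (pi[i] + pi[j])
                        for j in range(t) if j != i)
            new.append(W / denom if denom > 0 else pi[i])
        s = sum(new)
        pi = [v / s for v in new]
    return pi

# Two items, item 0 wins 3 of their 4 comparisons
print([round(v, 4) for v in bradley_terry_ml([[0, 3], [1, 0]])])  # -> [0.75, 0.25]
```

A closed-form estimator, as in the paper, replaces this loop with a single expression in the win counts.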

16.
This article studies the computational problem of estimating the parameters of a linear mixed model for massive data. Our algorithms combine the factored spectrally transformed linear mixed model method with a sequential singular value decomposition algorithm. This combination overcomes the operational limitation of the method and makes the algorithm feasible for big datasets, especially when the data have a tall and thin design matrix. Our simulation studies show that our algorithms make fitting a linear mixed model feasible for massive data on an ordinary desktop, with the same estimation accuracy as the method based on the whole data.

17.
The exact distribution of a nonparametric test statistic for ordered alternatives, the rank 2 statistic, is computed for small sample sizes. The exact distribution is compared to an approximation.
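The abstract does not define the "rank 2 statistic", but the enumeration idea behind any small-sample exact rank distribution is the same; here it is illustrated with the familiar Wilcoxon rank-sum statistic:

```python
from itertools import combinations
from math import comb

def exact_rank_sum_dist(n, m):
    """Exact null distribution of the rank sum W of the first sample.
    Under H0 every choice of n ranks out of n+m is equally likely, so
    P(W = w) = #{size-n subsets of {1,...,n+m} summing to w} / C(n+m, n)."""
    total = comb(n + m, n)
    counts = {}
    for subset in combinations(range(1, n + m + 1), n):
        w = sum(subset)
        counts[w] = counts.get(w, 0) + 1
    return {w: c / total for w, c in sorted(counts.items())}

dist = exact_rank_sum_dist(3, 3)
print(dist[6])  # P(W = 6) = 1/20 -> 0.05
```

For sample sizes where C(n+m, n) is small this enumeration is cheap, which is exactly the regime where normal approximations are least trustworthy and exact tables are worth tabulating.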

18.
A practical method is suggested for solving complicated D-optimal design problems analytically. Using this method the author has solved the problem for a quadratic log contrast model for experiments with mixtures introduced by J. Aitchison and J. Bacon-Shone. It is found that for a symmetric subspace of the finite dimensional simplex, the vertices and the centroid of this subspace are the only possible support points for a D-optimal design. The weights that must be assigned to these support points contain irrational numbers and are constrained by a system of three simultaneous linear equations, except for the special cases of 1- and 2-dimensional simplexes where the situation is much simpler. Numerical values for the solution are given up to the 19-dimensional simplex.

19.
An expanded class of multiplicative-interaction (M-I) models is proposed for two-way contingency tables. These models, a generalization of Goodman's association models, fill the gap between the independence and the saturated models. Diagnostic rules based on a transformation of the data are proposed for the detection of such models. These rules, utilizing the singular value decomposition of the transformed data, are very easy to use. Maximum likelihood estimation is considered and the computational algorithms are discussed. A data set from Goodman (1981) and another from Gabriel and Zamir (1979) are used to demonstrate the diagnostic rules.

20.
We studied the way in which experienced statisticians handle the output of a correspondence analysis program, in order to investigate the possibility of computerized support for the interpretation of complex output from statistical software. Four experts participated in this study, all with experience in consultation and with a high level of theoretical knowledge on the subject. In 'thinking-aloud sessions' they tried to reach conclusions from two forms of output (first without and then with indication of the meaning of variables). We present results on rule-based reasoning by the experts. Suggestions are given for the application of these results in the construction of an intelligent user interface for correspondence analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号