Found 20 similar documents (search time: 15 ms)
1.
《Journal of Statistical Computation and Simulation》2012,82(3):181-194
Variance components in factorial designs with balanced data are commonly estimated by equating mean squares to expected mean squares. For unbalanced data, the usual extensions of this approach are the Henderson methods, which require formulas that are rather involved. Alternatively, maximum likelihood estimation based on normality has been proposed. Although the algorithm for maximum likelihood is computationally complex, programs exist in some statistical packages. This article introduces a simpler method: creating a balanced data set by resampling from the original one. Revised formulas for expected mean squares are presented for the two-way case; they are easily generalized to larger factorial designs. The results of a number of simulation studies indicate that, in certain types of designs, the proposed method has performance advantages over both Henderson's Method I and the maximum likelihood estimator.
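A minimal sketch of the idea, assuming a two-way random-effects layout: resample each cell with replacement up to a common size, then equate the standard balanced-data mean squares to their expectations. The paper's revised expected-mean-square formulas, which adjust for the resampling step, are not reproduced here; all function and variable names are illustrative.

```python
import numpy as np

def balance_by_resampling(cells, m, rng):
    """Draw m observations with replacement from every cell of an
    unbalanced two-way layout, yielding a balanced a x b x m array."""
    a, b = len(cells), len(cells[0])
    y = np.empty((a, b, m))
    for i in range(a):
        for j in range(b):
            y[i, j] = rng.choice(cells[i][j], size=m, replace=True)
    return y

def variance_components(y):
    """Equate balanced-data mean squares to their expectations for the
    two-way random-effects model y_ijk = mu + a_i + b_j + (ab)_ij + e_ijk.
    (Standard balanced EMS identities, applied to the resampled data.)"""
    a, b, n = y.shape
    cell = y.mean(axis=2)                       # cell means
    row, col, grand = y.mean(axis=(1, 2)), y.mean(axis=(0, 2)), y.mean()
    msa = b * n * np.sum((row - grand) ** 2) / (a - 1)
    msb = a * n * np.sum((col - grand) ** 2) / (b - 1)
    msab = n * np.sum((cell - row[:, None] - col[None, :] + grand) ** 2) \
        / ((a - 1) * (b - 1))
    mse = np.sum((y - cell[:, :, None]) ** 2) / (a * b * (n - 1))
    # E[MSE] = s2e; E[MSAB] = s2e + n*s2ab;
    # E[MSA] = E[MSAB] + b*n*s2a; E[MSB] = E[MSAB] + a*n*s2b
    return {"s2a": (msa - msab) / (b * n), "s2b": (msb - msab) / (a * n),
            "s2ab": (msab - mse) / n, "s2e": mse}

rng = np.random.default_rng(0)
# Unbalanced 3 x 4 layout with 2-7 observations per cell:
cells = [[rng.normal(size=rng.integers(2, 8)) for _ in range(4)]
         for _ in range(3)]
print(variance_components(balance_by_resampling(cells, m=5, rng=rng)))
```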
2.
3.
This paper presents tables and a computer program for determining single sampling plans for a given AQL, producer's risk, and AOQL, for the cases of nonconforming units and nonconformities. A comparison with Soundararajan's (1981) procedures for the selection of single sampling plans for given (AQL, AOQL) is also given.
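As an illustration of the kind of search such a program performs, here is a brute-force sketch for the nonconforming-units case under a binomial model. The constraint forms (acceptance probability at the AQL at least 1 - alpha, and AOQ(p) approximated by p * Pa(p) for a large lot) are standard textbook approximations, not the paper's exact procedure; function names are illustrative.

```python
import numpy as np
from scipy.stats import binom

def aoql(n, c, grid=np.linspace(1e-4, 0.5, 2000)):
    """AOQL ~= max_p p * Pa(p), with AOQ(p) ~= p * Pa(p) for a large lot."""
    return np.max(grid * binom.cdf(c, n, grid))

def single_sampling_plan(aql, alpha, aoql_target, max_c=20):
    """Smallest acceptance number c (and matching sample size n) with
    Pa(AQL) >= 1 - alpha and AOQL <= aoql_target, by brute-force search."""
    for c in range(max_c + 1):
        n = c + 1
        # grow n while the producer's risk constraint still holds
        while binom.cdf(c, n, aql) >= 1 - alpha:
            if aoql(n, c) <= aoql_target:
                return n, c
            n += 1
    return None

print(single_sampling_plan(aql=0.01, alpha=0.05, aoql_target=0.02))
```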
4.
A random effects model for analyzing mixed longitudinal count and ordinal data is presented in which the count response is inflated at two points (k and l) and a (k, l)-inflated power series distribution is used as its distribution. A full likelihood-based approach is used to obtain maximum likelihood estimates of the model parameters. For data with non-ignorable missing values, a probit model is used for the missingness mechanism. The dependence between the longitudinal sequences of responses and the inflation parameters is investigated using a random effects approach. Also, to capture the correlation between the mixed ordinal and count responses of each individual at each time, a shared random effect is used. To assess the performance of the model, a simulation study is performed for the case in which the count response has a (k, l)-inflated binomial distribution. Performance comparisons of the count-ordinal random effects model, the zero-inflated ordinal random effects model, and the (k, l)-inflated ordinal random effects model are also given. The model is applied to a real social data set from the first two waves of the National Longitudinal Study of Adolescent to Adult Health (the Add Health study). In this data set, the joint responses are the number of days in a month that each individual smoked (the count response) and the general health condition of each individual (the ordinal response). For the count response there is an excess of the values 0 and 30.
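To fix notation, a small sketch of the (k, l)-inflated binomial mass function used in the simulation study: a two-point mixture laid over an ordinary binomial. The parameter names (phi_k, phi_l) are illustrative; the paper's full model ties these quantities to covariates and shared random effects.

```python
import numpy as np
from scipy.stats import binom

def kl_inflated_binom_pmf(y, n, p, k, l, phi_k, phi_l):
    """pmf of a (k, l)-inflated Binomial(n, p): mass phi_k at k, phi_l at l,
    and the remaining 1 - phi_k - phi_l spread as an ordinary binomial."""
    return (phi_k * (y == k) + phi_l * (y == l)
            + (1 - phi_k - phi_l) * binom.pmf(y, n, p))

# Smoking-days style example: counts in 0..30 with excess mass at 0 and 30.
y = np.arange(31)
pmf = kl_inflated_binom_pmf(y, n=30, p=0.3, k=0, l=30, phi_k=0.4, phi_l=0.1)
assert np.isclose(pmf.sum(), 1.0)
```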
5.
6.
We show that the asymptotic variance of a generalized L-statistic is a function of the difference between the conditional and unconditional cumulative distribution functions of the kernel used to form the statistic.
7.
《Journal of Statistical Computation and Simulation》2012,82(6):473-494
Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other commonly used distributions, e.g., the normal and logistic. In particular, it enables separate modeling of location and dispersion effects, which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data, assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two ways of treating the data are considered: in raw discrete form and in smoothed "continuousized" form. A large-scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations. Comparisons are made based on the ratios of the mean square errors (called the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in the case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple-treatments setup is considered since, in an actual application, cutpoints are common to all treatments and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints. The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.
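A minimal sketch of the raw-discrete-data maximum likelihood fit with known cutpoints: the category probabilities are differences of the beta CDF at consecutive cutpoints, and the resulting multinomial log-likelihood is maximized numerically. This illustrates the setup only, not the paper's code; the smoothed "continuousized" variant is not shown, and all names and the example counts are illustrative.

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def fit_beta_from_ordinal(counts, cutpoints):
    """ML fit of a latent Beta(a, b) from ordinal category counts with known
    cutpoints on (0, 1); categories are the intervals the cutpoints induce."""
    edges = np.concatenate(([0.0], cutpoints, [1.0]))

    def negloglik(theta):
        a, b = np.exp(theta)                     # keep a, b positive
        cell = np.diff(beta.cdf(edges, a, b))    # category probabilities
        return -np.sum(counts * np.log(np.clip(cell, 1e-300, None)))

    res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)                         # (a_hat, b_hat)

# Five categories with equispaced cutpoints, as in the single-treatment case:
counts = np.array([3, 12, 25, 40, 20])
print(fit_beta_from_ordinal(counts, cutpoints=[0.2, 0.4, 0.6, 0.8]))
```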
8.
9.
W. W. Cooper, Subhash C. Ray 《Journal of the Royal Statistical Society: Series A (Statistics in Society)》2008,171(2):433-448
Summary. This is a response to Stone's criticisms of the Spottiswoode report to the UK Treasury, which responded to the Treasury's request for improved methods to evaluate the efficiency and productivity of the 43 police districts in England and Wales. The Spottiswoode report recommended uses of data envelopment analysis (DEA) and stochastic frontier analysis (SFA), which Stone critiqued en route to proposing an alternative approach. Here we note some of the most serious errors in his criticism and his inaccurate portrayals of DEA and SFA. Most of our attention is devoted to DEA and to Stone's recommended alternative approach, with little attention to SFA, partly because of his abbreviated discussion of the latter. In our response we attempt to be constructive as well as critical by showing how Stone's proposed approach can be joined to DEA to expand his proposal beyond the limitations of his formulations.
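For readers unfamiliar with DEA, here is a generic input-oriented CCR envelopment model solved as a linear program. It illustrates the technique under discussion, not the Spottiswoode report's specification; the data and names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA score of unit o.
    X is (inputs x units), Y is (outputs x units). Solves
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],
    lam >= 0, over the decision vector z = [theta, lam_1, ..., lam_n]."""
    (m, n), s = X.shape, Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub = np.vstack([np.hstack([-X[:, [o]], X]),          # X lam - theta x_o <= 0
                      np.hstack([np.zeros((s, 1)), -Y])])  # -Y lam <= -y_o
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun            # theta in (0, 1]; theta = 1 means efficient

# Three units, two inputs, one common output:
X = np.array([[2.0, 4.0, 3.0], [3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```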
10.
A study is made of Neyman's C(α) test for testing independence in nonnormal situations. It is shown that it performs very well, both in terms of the level of significance and the power, even for small values of the sample size. Also, in the case of the bivariate Poisson distribution, it is shown that Fisher's z and Student's t transforms of the sample correlation coefficient are good competitors for Neyman's procedure.
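As a reference point, a sketch of the Fisher's z competitor mentioned above: under bivariate normality, z = artanh(r) is approximately normal with variance 1/(n - 3) when the variables are independent; the study examines how such tests behave for nonnormal data such as the bivariate Poisson. Function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fisher_z_test(x, y):
    """Two-sided test of independence (rho = 0) via Fisher's z transform:
    z = artanh(r) is approximately N(0, 1/(n - 3)) under bivariate normality."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r) * np.sqrt(n - 3)
    return r, 2 * norm.sf(abs(z))     # correlation and two-sided p-value

rng = np.random.default_rng(1)
x, y = rng.poisson(3, size=(2, 30))   # independent Poisson samples
print(fisher_z_test(x, y))
```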
11.
12.
13.
Chrysoula Dimitriou-Fakalou 《Journal of Nonparametric Statistics》2019,31(1):31-63
A strictly stationary time series is modelled directly once the variables' realizations fit into a table: no knowledge of a distribution is required other than the prior discretization. A multiplicative model with combined random 'auto-regressive' and 'moving-average' parts is considered for the serial dependence. Based on a multi-sequence of unobserved series that serve as differences, and differences of differences, from the main building block, a causal version is obtained; a condition that secures an exponential rate of convergence for its expected random coefficients is presented. For the remainder, writing the conditional probability as a function of past conditional probabilities is within reach: subject to the presence of the moving-average segment in the original equation, what could be a long process of elimination via mathematical arguments concludes with a new derivation that does not support a simplistic linear dependence on the lagged probability values.
14.
15.
Bechhofer and Tamhane (1981) proposed a new class of incomplete block designs, called BTIB designs, for comparing p ≥ 2 test treatments with a control treatment in blocks of equal size k < p + 1. All BTIB designs for given (p,k) can be constructed by forming unions of replications of a set of elementary BTIB designs called generator designs for that (p,k). In general, there are many generator designs for given (p,k), but only a small subset (called the minimal complete set) of these suffices to obtain all admissible BTIB designs (except possibly any equivalent ones). Determination of the minimal complete set of generator designs for given (p,k) was stated as an open problem in Bechhofer and Tamhane (1981). In this paper we solve this problem for k = 3. More specifically, we give the minimal complete sets of generator designs for k = 3, p = 3(1)10; the relevant proofs are given only for the cases p = 3(1)6. Some additional combinatorial results concerning BTIB designs are also given.
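A small sketch of the balance conditions, assuming the standard BTIB definition (the control-test concurrence λ0 is constant over test treatments, and the test-test concurrence λ1 is constant over pairs); the checker and the example design below are illustrative, not taken from the paper's tables.

```python
from itertools import combinations

def is_btib(blocks, p):
    """Check the BTIB balance conditions for a design given as a list of
    blocks over treatments 0 (control) and 1..p (tests): lambda_{0i} must be
    constant over i, and lambda_{ij} constant over all test pairs i < j
    (concurrences counted with multiplicity)."""
    def concurrence(a, b):
        return sum(blk.count(a) * blk.count(b) for blk in blocks)

    lam0 = {concurrence(0, i) for i in range(1, p + 1)}
    lam1 = {concurrence(i, j) for i, j in combinations(range(1, p + 1), 2)}
    return len(lam0) == 1 and len(lam1) <= 1

# An elementary design for p = 3, k = 3: every block pairs the control
# with two of the three test treatments.
design = [[0, 1, 2], [0, 1, 3], [0, 2, 3]]
print(is_btib(design, p=3))   # True: lambda_0 = 2, lambda_1 = 1
```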
16.
《Serials Review》1988,14(3):83
17.
Jeanie M. Welch 《Serials Review》2013,39(4):283-286
Abstract: For the acquisition of periodicals and indexes, the selector role of subject specialists and reference librarians has been transformed by electronic access. In the past these librarians made independent recommendations for new periodicals, indexes, and abstracts by using traditional selection criteria (e.g., relevance, quality, and cost). With electronic resources, considerations such as licensing negotiations, consortial agreements, and technical issues have complicated the decision-making process and have sometimes removed it from individual librarians or even individual libraries. The author discusses the opportunities for individual librarians, particularly in public service roles, to participate in serials collection management decisions and provides a case study of business periodicals collection management in an academic library. Serials Review 2002; 28:283–286.
18.
19.
Stanley Pogrow 《The American Statistician》2019,73(1):223-234
Abstract: Relying on effect size as a measure of practical significance is turning out to be just as misleading as using p-values to determine the effectiveness of interventions for improving clinical practice in complex organizations such as schools. This article explains how effect sizes have misdirected practice in education and other disciplines. Even when effect size is incorporated into RCT research, the recommendations as to whether interventions are effective are misleading and generally useless to practitioners. As a result, a new criterion of practical benefit is recommended for evaluating research findings about the effectiveness of interventions in complex organizations where benchmarks of existing performance exist. Practical benefit exists when the unadjusted performance of an experimental group provides a noticeable advantage over an existing benchmark. Some basic principles for determining practical benefit are provided. Practical benefit is more intuitive and is expected to enable leaders to make more accurate assessments of whether published research findings are likely to produce noticeable improvements in their organizations. In addition, practical benefit is used routinely as the research criterion in the alternative scientific methodology of improvement science, which has an established track record of being a more efficient way than RCT research to develop new interventions that dramatically improve practice. Finally, the problems with practical significance suggest that the research community should seek different inferential methods for research designed to improve clinical performance in complex organizations, as compared with methods for testing theories and medicines.
20.
Diane M. Lewis 《Serials Review》2013,39(1):149-150