121.
In mathematical statistics, the Fisher z transformation is a widely employed explicit elementary function for approximating the cumulative distribution function of the standard normal distribution, and it is used to estimate confidence intervals for the Pearson product-moment correlation coefficient. A new sigmoid-like function is proposed to replace the Fisher z transformation; the new explicit elementary function is no more complicated than the Fisher z transformation, yet can be 4.677 times more accurate.
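The abstract does not reproduce the proposed function, but the underlying idea, approximating the standard normal CDF with an elementary sigmoid, can be sketched with the classical logistic approximation Φ(x) ≈ 1/(1 + exp(-1.702x)). The constant 1.702 and the code below are illustrative, not the authors' construction:

```python
import math

def phi(x):
    """Standard normal CDF, computed exactly via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_approx(x, k=1.702):
    """Classical logistic (sigmoid) approximation to the normal CDF."""
    return 1.0 / (1.0 + math.exp(-k * x))

# Maximum absolute error of the approximation over a grid on [-5, 5];
# for k = 1.702 it is known to stay below 0.01.
grid = [i / 100.0 for i in range(-500, 501)]
max_err = max(abs(phi(x) - logistic_approx(x)) for x in grid)
```

Any replacement candidate, like the sigmoid-like function of the paper, can be compared against Φ in exactly this way.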
122.
Contingency tables are frequently generated under multinomial sampling. The multinomial probabilities are then organized in a table assigning a probability to each cell. Such a probability table can be viewed as an element of the simplex. The Aitchison geometry of the simplex identifies the independent probability tables as a linear subspace. An important consequence is that, given a probability table, the nearest independent table is obtained by orthogonal projection onto that subspace. The nearest independent table turns out to be the product of the geometric marginals, which coincide with the standard marginals only in the independent case. The original probability table is thus decomposed into orthogonal parts, the independent table and the interaction table. The underlying model is log-linear, and a procedure for testing independence of a contingency table, based on multinomial simulation, is developed. Its performance is studied on an illustrative example.
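The projection onto the independent subspace described above can be sketched as the closed product of row and column geometric means. This is a minimal illustration assuming a strictly positive table; the function names are the author's own for this sketch:

```python
import math

def closure(t):
    """Normalize a table of positive entries to sum to 1 (simplex closure)."""
    s = sum(sum(row) for row in t)
    return [[x / s for x in row] for row in t]

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def nearest_independent(p):
    """Nearest independent table in the Aitchison geometry: the closure of
    the outer product of the row and column geometric means."""
    rows = [geometric_mean(row) for row in p]
    cols = [geometric_mean(col) for col in zip(*p)]
    return closure([[r * c for c in cols] for r in rows])

p = closure([[1.0, 2.0], [3.0, 4.0]])
q = nearest_independent(p)  # rank-1 table: rows are proportional
```

By construction q factorizes cell-wise, so its 2x2 cross-product (the log odds ratio) vanishes, which is the defining property of an independent table.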
123.
This paper develops a skewed extension of the type III generalized logistic distribution and presents the analytical equations for the computation of its moments, cumulative probabilities and quantile values. It is demonstrated through an example that the distribution provides an excellent fit to data characterized by skewness and excess kurtosis.
124.
The exponentiated Weibull family, a Weibull extension obtained by adding a second shape parameter, consists of regular distributions with bathtub-shaped, unimodal, and a broad variety of monotone hazard rates. It can be used for modeling lifetime data from reliability, survival and population studies, for various extreme value data, and for constructing isotones of tests of the composite hypothesis of exponentiality. The structural analysis of the family in this paper includes its skewness and kurtosis properties, density shapes and tail character, and the associated extreme value and extreme spacings distributions. Its usefulness in modeling extreme value data is illustrated using the floods of the Floyd River at James, Iowa.
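The exponentiated Weibull distribution has the standard closed-form CDF F(x) = (1 - exp(-(x/σ)^α))^θ, a Weibull CDF raised to the second shape parameter θ. A small sketch, with illustrative parameter names:

```python
import math

def ew_cdf(x, alpha, theta, sigma=1.0):
    """Exponentiated Weibull CDF: a Weibull CDF raised to the power theta."""
    return (1.0 - math.exp(-((x / sigma) ** alpha))) ** theta

def ew_pdf(x, alpha, theta, sigma=1.0):
    """Density obtained by differentiating the CDF."""
    z = (x / sigma) ** alpha
    base = 1.0 - math.exp(-z)
    return theta * base ** (theta - 1.0) * math.exp(-z) * alpha * z / x

def ew_hazard(x, alpha, theta, sigma=1.0):
    """Hazard rate f / (1 - F); its shape (monotone, unimodal, bathtub)
    depends on the two shape parameters alpha and theta."""
    return ew_pdf(x, alpha, theta, sigma) / (1.0 - ew_cdf(x, alpha, theta, sigma))
```

Setting theta = 1 recovers the ordinary Weibull distribution, which makes the two-shape-parameter extension easy to sanity-check.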
125.
This paper establishes a sampling theory for an inverted linear combination of two dependent F-variates. The random variable is found to be approximately expressible as a mixture of weighted beta distributions. Operational results, including rth-order raw moments and critical values of the density, are then obtained using the Pearson Type I approximation technique. As a contribution to probability theory, our findings extend Lee & Hu's (1996) investigation of the distribution of a linear compound of two independent F-variates. On the applied side, our results refine Dickinson's (1973) inquiry into the distribution of the optimal combining-weight estimates when combining two independent rival forecasts, and advance the general case of combining three independent competing forecasts. Accordingly, our conclusions offer a new way of constructing confidence intervals for the optimal combining-weight estimates studied in the literature on the linear combination of forecasts.
126.
The authors describe a new method for constructing confidence intervals. Their idea consists in specifying the cutoff points in terms of a function of the target parameter rather than as constants. When it is suitably chosen, this so-called tail function yields shorter confidence intervals in the presence of prior information. It can also be used to improve the coverage properties of approximate confidence intervals. The authors illustrate their technique by application to interval estimation of the mean of Bernoulli and normal populations. They further suggest guidelines for choosing the optimal tail function and discuss the relationship with Bayesian inference.
127.
A class of permutation techniques is presented for the randomized block design. This class is specifically devised for analyses involving multivariate data, and a numerical example illustrates such an application. Many well-known techniques are special cases of this class, among them (i) the permutation version of the classical univariate technique for randomized blocks associated with the analysis of variance, (ii) the Friedman randomized block test, (iii) one-sample matched-pair tests, (iv) the Pearson correlation measure, and (v) the Spearman rank correlation and footrule measures. Furthermore, variations and multivariate versions within this class suggest a variety of new techniques that have not previously received attention.
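The common structure of such tests can be sketched as a within-block permutation test: treatment labels are shuffled independently inside each block, and the observed statistic is compared to its permutation distribution. The particular statistic and data below are illustrative choices, not the paper's:

```python
import random

def block_permutation_test(data, n_perm=2000, seed=0):
    """Permutation test for a randomized block design.

    `data` is a list of blocks; each block is a list of responses, one per
    treatment (position = treatment label). Labels are permuted within each
    block, and the statistic is the spread of the treatment totals."""
    rng = random.Random(seed)
    k = len(data[0])

    def statistic(blocks):
        totals = [sum(block[j] for block in blocks) for j in range(k)]
        mean = sum(totals) / k
        return sum((t - mean) ** 2 for t in totals)

    observed = statistic(data)
    count = 0
    for _ in range(n_perm):
        shuffled = []
        for block in data:
            b = block[:]
            rng.shuffle(b)  # permute treatment labels within this block
            shuffled.append(b)
        if statistic(shuffled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one Monte Carlo p-value

# Three blocks, three treatments, with a clear treatment effect
p_value = block_permutation_test([[1.0, 2.0, 5.0],
                                  [0.5, 1.5, 4.0],
                                  [1.2, 2.2, 5.5]])
```

Replacing the statistic (for example with rank-based or multivariate distance statistics) yields the special cases the abstract enumerates, while the permutation scheme stays the same.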
128.
A closed-form expression is presented for the probability integral of the Pearson Type IV distribution, and a corresponding method of evaluation is given. This analysis addresses a long-standing gap in the theory of the Pearson system of distributions. In addition, a simple derivation is given of an expression for the normalizing constant in the Type IV integral.
129.
In this paper we focus on the well-known goodness-of-fit test that compares observed counts to the values expected under some null hypothesis. When the null is rejected, we propose a simple method for detecting which subset(s) of category counts provoke(s) the rejection. The approach builds intervals iteratively and draws the appropriate conclusions from them. We discuss this method in relation to other classical approaches and illustrate it with examples.
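The per-category contributions to the Pearson chi-square statistic, the squared Pearson residuals, are the usual starting point for locating the offending categories; this generic sketch is not the paper's interval-building procedure:

```python
import math

def chi_square_contributions(observed, expected):
    """Pearson chi-square statistic together with its per-category
    contributions (squared Pearson residuals); large contributions flag
    the categories that drive a rejection."""
    residuals = [(o - e) / math.sqrt(e) for o, e in zip(observed, expected)]
    contributions = [r * r for r in residuals]
    return sum(contributions), contributions

# Four categories with a uniform null of 25 expected counts each
observed = [18, 22, 39, 21]
expected = [25.0, 25.0, 25.0, 25.0]
stat, contrib = chi_square_contributions(observed, expected)
worst = max(range(len(contrib)), key=contrib.__getitem__)  # index 2 dominates
```

Here the third category contributes 7.84 of the total statistic of 10.8, so any follow-up analysis would focus there first.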
130.
Correspondence analysis is a versatile statistical technique that allows the user to graphically identify the association that may exist between the variables of a contingency table. For two categorical variables, the classical approach applies a singular value decomposition to the Pearson residuals of the table. These residuals permit a simple test for determining which cells deviate from what is expected under independence. However, the assumptions concerning these residuals are not always satisfied, so such results can lead to questionable conclusions. One may instead consider an adjustment of the Pearson residual that is known to behave like a standard normal variate. This paper explores the application of these adjusted residuals to correspondence analysis and determines how they affect the configuration of points in the graphical display.
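The adjustment in question divides each Pearson residual by sqrt((1 - row share)(1 - column share)), which standardizes it toward a standard normal variate under independence. A generic sketch with illustrative data:

```python
import math

def adjusted_residuals(table):
    """Adjusted (standardized) Pearson residuals of a two-way contingency
    table; under independence they are approximately standard normal."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    out = []
    for i, row in enumerate(table):
        out.append([])
        for j, nij in enumerate(row):
            e = rows[i] * cols[j] / n  # expected count under independence
            denom = math.sqrt(e * (1 - rows[i] / n) * (1 - cols[j] / n))
            out[i].append((nij - e) / denom)
    return out

# A perfectly independent table: every adjusted residual is zero.
res = adjusted_residuals([[10, 20], [30, 60]])
```

Running the same function on a table with association produces residuals that can be read directly against standard normal critical values, which is the property the paper carries over to the correspondence-analysis display.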
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号