991.
A notion of data depth is used to measure the centrality or outlyingness of a data point in a given data cloud. In the context of data depth, the point (or points) having maximum depth is called the deepest point (or points). In the present work, we propose three multi-sample tests for testing equality of the location parameters of multivariate populations using the deepest point (or points). These tests can be considered extensions of two-sample tests based on the deepest point (or points). The proposed tests are implemented through the idea of Fisher's permutation test. The performance of the proposed tests is studied by simulation, and an illustration with two real datasets is also provided.
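As a rough sketch of the permutation idea described above (not the authors' procedure), the following Python code tests equality of locations of several multivariate samples by permuting group labels. It uses Mahalanobis depth, one simple choice of data depth, to pick the deepest observed point of each group, and the spread of these deepest points as the test statistic; all function names are illustrative.

import numpy as np

def mahalanobis_depth(points, cloud):
    # Mahalanobis depth of each row of `points` with respect to `cloud`
    center = cloud.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cloud, rowvar=False))
    diff = points - center
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

def deepest_point(cloud):
    # sample point with maximum Mahalanobis depth
    return cloud[np.argmax(mahalanobis_depth(cloud, cloud))]

def permutation_location_test(samples, n_perm=2000, seed=0):
    # permutation p-value for equality of locations of k multivariate samples,
    # using the spread of the per-sample deepest points as the statistic
    rng = np.random.default_rng(seed)
    pooled = np.vstack(samples)
    sizes = [len(s) for s in samples]

    def statistic(groups):
        deepest = np.array([deepest_point(g) for g in groups])
        return np.sum((deepest - deepest.mean(axis=0)) ** 2)

    observed = statistic(samples)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)              # shuffle rows
        groups = np.split(perm, np.cumsum(sizes)[:-1])
        count += statistic(groups) >= observed
    return (count + 1) / (n_perm + 1)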
992.
Fisher's method of combining independent tests is used to construct tests for the means of multivariate normal populations when the covariance matrix has an intraclass correlation structure. Monte Carlo studies are reported which show that the tests are more powerful than Hotelling's T²-test in both the one-sample and two-sample situations.
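Fisher's combining method itself is standard: given k independent tests with p-values p_1, ..., p_k, the statistic -2 * sum(log p_i) has a chi-square distribution with 2k degrees of freedom under the joint null. A minimal Python sketch of just that combination step (the paper's application to intraclass-correlated multivariate means is of course more involved):

import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    # Fisher's combining statistic and its chi-square reference p-value
    pvalues = np.asarray(pvalues)
    stat = -2.0 * np.sum(np.log(pvalues))
    return stat, stats.chi2.sf(stat, df=2 * len(pvalues))

# SciPy offers the same combination directly:
stat, p = stats.combine_pvalues([0.08, 0.02, 0.15], method='fisher')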
993.
Outlier tests are developed for multivariate data where there is structure in the covariance or correlation matrix. The particular structures considered are the block-diagonal structure, where there is reason to assume that one set of variables is independent of another, and the equicorrelation structure, where it may be assumed that all pairs of variables have the same correlation. Likelihood ratio tests for an outlier are derived for these situations, and critical values under the null hypothesis of no outliers are determined for selected sample sizes and dimensions using Bonferroni bounds or simulation. The powers of the tests are compared with those of Wilks' statistic for a variety of situations. It is shown that test procedures which incorporate knowledge of the correlation structure have considerably greater power than the usual tests, particularly in relatively small samples with several dimensions.
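The Wilks statistic used as the comparator above is, for a single multivariate outlier, the smallest ratio of determinants of the corrected cross-product matrix with and without each observation; small values point to an outlier. A minimal Python sketch of that comparator only (the structured likelihood ratio tests of the paper are not reproduced, and critical values would still have to come from Bonferroni bounds or simulation):

import numpy as np

def wilks_outlier_statistic(X):
    # smallest ratio |A_(i)| / |A|, where A is the corrected sums-of-squares
    # and cross-products matrix and A_(i) omits observation i
    n = len(X)
    C = X - X.mean(axis=0)
    det_A = np.linalg.det(C.T @ C)
    ratios = []
    for i in range(n):
        Xi = np.delete(X, i, axis=0)
        Ci = Xi - Xi.mean(axis=0)
        ratios.append(np.linalg.det(Ci.T @ Ci) / det_A)
    return min(ratios), int(np.argmin(ratios))  # statistic and suspect index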
994.
A parametric model for interval data is proposed, assuming a multivariate Normal or Skew-Normal distribution for the midpoints and log-ranges of the interval variables. The intrinsic nature of the interval variables leads to special structures of the variance–covariance matrix, represented by five different possible configurations. Maximum likelihood estimation for both models under all considered configurations is studied. The proposed modelling is then considered in the context of analysis of variance and multivariate analysis of variance testing. To assess the behaviour of the proposed methodology, a simulation study is performed. The results show that, for medium or large samples, the tests have good power and their true significance level approaches the nominal level when the constraints assumed for the model are respected; for small samples, however, sizes close to the nominal levels cannot be guaranteed. Applications to Chinese meteorological data from three different regions and to credit card usage variables for different card designations illustrate the proposed methodology.
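A small Python sketch of the interval representation used above, midpoints and log-ranges, followed by a per-variable one-way ANOVA as a simple stand-in for the (M)ANOVA-type tests studied in the paper; the data below are invented purely for illustration:

import numpy as np
from scipy import stats

def interval_features(lower, upper):
    # represent an interval variable by its midpoint and log-range
    lower, upper = np.asarray(lower), np.asarray(upper)
    return (lower + upper) / 2.0, np.log(upper - lower)

# one-way ANOVA on the midpoints of three illustrative groups
g1_mid, _ = interval_features([1.0, 2.0, 1.5], [3.0, 4.5, 2.5])
g2_mid, _ = interval_features([2.0, 2.5, 3.0], [5.0, 6.0, 4.0])
g3_mid, _ = interval_features([0.5, 1.0, 0.8], [2.0, 2.2, 1.9])
F, p = stats.f_oneway(g1_mid, g2_mid, g3_mid)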
995.
For those who have not recognized the disparate natures of tests of statistical hypotheses and tests of scientific hypotheses, one-tailed statistical tests of null hypotheses such as θ ≤ 0 or θ ≥ 0 have often seemed a reasonable procedure. We earlier reviewed the many grounds for not regarding them as such. To have at least some power for detecting effects in the unpredicted direction, several authors have independently proposed the use of lopsided (also termed split-tailed, directed or one-and-a-half-tailed) tests: two-tailed tests with α partitioned unequally between the two tails of the test-statistic distribution. We review the history of these proposals and conclude that lopsided tests are never justified. They are based on the same misunderstandings that have led to massive misuse of one-tailed tests, as well as to much needless worry, for more than half a century, over the various so-called 'multiplicity problems'. We discuss from a neo-Fisherian point of view the undesirable properties of multiple comparison procedures based on either (i) maximum potential set-wise (or family-wise) type I error rates (SWERs), or (ii) the increasingly fashionable maximum potential false discovery rates (FDRs). Neither the classical nor the newer multiple comparison procedures based on fixed maximum potential set-wise error rates are helpful to the cogent analysis and interpretation of scientific data.
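For concreteness, the lopsided construction criticized above simply partitions α unequally between the two tails; with a normal test statistic the critical values are just unequal quantiles. A minimal, purely illustrative Python sketch:

from scipy import stats

def lopsided_critical_values(alpha_lower, alpha_upper):
    # critical z-values of a 'lopsided' (split-tailed) test putting
    # alpha_lower in the left tail and alpha_upper in the right tail
    return stats.norm.ppf(alpha_lower), stats.norm.isf(alpha_upper)

# e.g. alpha = 0.05 split as 0.01 (unpredicted direction) + 0.04 (predicted)
lo, hi = lopsided_critical_values(0.01, 0.04)   # about -2.33 and +1.75
# reject H0 if the observed z-statistic falls below `lo` or above `hi`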
996.
Two-sample comparisons, which belong to the basic class of statistical inference problems, are extensively applied in practice. There is a rich statistical literature on parametric methods for these problems, and most of the powerful techniques assume normally distributed populations. In practice, however, the underlying distributions of the compared samples are commonly unknown. In this case one can propose a combined test based on the following decision rules: (a) the likelihood-ratio test (LRT) for equality of two normal populations, and (b) the Shapiro–Wilk (S–W) test for normality. Rules (a) and (b) can be merged, e.g., by using the Bonferroni correction technique, to offer a correct comparison of the sample distributions. Alternatively, we propose the exact density-based empirical likelihood (DBEL) ratio test. We develop the tsc package, the first R package available to perform two-sample comparisons using the exact test procedures: the LRT; the LRT combined with the S–W test; and the newly developed DBEL ratio test. We present Monte Carlo (MC) results and a real data example to show the efficiency and excellent applicability of the developed procedures.
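A hedged Python sketch of the Bonferroni-combined rule (a)+(b): an asymptotic LRT for equality of two normal populations together with Shapiro–Wilk checks of normality, each component tested at an equal share of α. The exact way the tsc package merges the rules, and its exact (rather than asymptotic) LRT, may differ; function names here are illustrative.

import numpy as np
from scipy import stats

def lrt_two_normal(x, y):
    # asymptotic LRT p-value for a common normal model (same mean, variance)
    # against fully separate normal models; -2 log Lambda ~ chi-square(2)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)
    pooled = np.concatenate([x, y])
    s0 = pooled.var()            # MLE variance under H0 (ddof = 0)
    s1, s2 = x.var(), y.var()    # MLE variances under H1
    lrt = (n1 + n2) * np.log(s0) - n1 * np.log(s1) - n2 * np.log(s2)
    return stats.chi2.sf(lrt, df=2)

def combined_test(x, y, alpha=0.05):
    # Bonferroni combination: Shapiro-Wilk on each sample plus the LRT,
    # rejecting the combined null if any p-value falls below alpha / 3
    pvals = [stats.shapiro(x)[1], stats.shapiro(y)[1], lrt_two_normal(x, y)]
    return min(pvals) < alpha / len(pvals)   # True means reject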
997.
Several statistical hypothesis tests are available for assessing the normality assumption, which is an a priori requirement for most parametric statistical procedures. The usual method for comparing the performance of normality tests is to use Monte Carlo simulations to obtain point estimates of the corresponding powers. The aim of this work is to improve the assessment of nine normality tests. For this purpose, random samples were drawn from several symmetric and asymmetric non-normal distributions, and Monte Carlo simulations were carried out to compute confidence intervals for the power achieved, for each distribution, by two of the most commonly used normality tests, Kolmogorov–Smirnov with the Lilliefors correction and Shapiro–Wilk. In addition, the specificity of each test was computed, again by Monte Carlo simulation, taking samples from standard normal distributions. The analysis was then extended to the Anderson–Darling, Cramér–von Mises, Pearson chi-square, Shapiro–Francia, Jarque–Bera, D'Agostino and uncorrected Kolmogorov–Smirnov tests by determining confidence intervals for the areas under the receiver operating characteristic curves. Simulations were performed to this end, in which for each sample from a non-normal distribution an equal-sized sample was taken from a normal distribution. The Shapiro–Wilk test had the best global performance overall, though in some circumstances the Shapiro–Francia or D'Agostino tests offered better results. The differences between the tests were less clear for smaller sample sizes. Notably, the SW and KS tests performed quite poorly in distinguishing samples drawn from normal distributions from samples drawn from Student's t distributions.
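The Monte Carlo machinery behind such power comparisons is simple to sketch in Python: draw repeated samples from a chosen alternative, count rejections, and attach a binomial confidence interval to the estimated power. The Wald-type interval below is one simple choice and not necessarily the construction used in the paper:

import numpy as np
from scipy import stats

def mc_power(test, sampler, n, n_rep=5000, alpha=0.05, seed=0):
    # Monte Carlo estimate of a test's rejection rate against samples of
    # size n drawn by `sampler`, with a Wald-type 95% confidence interval
    rng = np.random.default_rng(seed)
    rejections = sum(test(sampler(rng, n)) < alpha for _ in range(n_rep))
    p_hat = rejections / n_rep
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n_rep)
    return p_hat, (p_hat - half, p_hat + half)

# power of Shapiro-Wilk against a t(5) alternative with n = 30
power, ci = mc_power(lambda x: stats.shapiro(x)[1],
                     lambda rng, n: rng.standard_t(df=5, size=n), n=30)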
998.
In this paper we compare the power properties of some location tests. The most widely used such test is Student's t. Recently, bootstrap-based tests have received much attention in the literature, and a bootstrap version of the t-test is included in our comparison. Finally, nonparametric tests based on the idea of permuting signs are also represented; again, we initially concentrate on a version of that test based on the mean. The permutation tests predate the bootstrap by about forty years. Theoretical results of Pitman (1937) and Bickel & Freedman (1981) show that these three methods are asymptotically equivalent if the underlying distribution is symmetric and has a finite second moment. In the modern literature, the use of the nonparametric techniques is advocated on the grounds that the size of the test is either exact or more nearly exact. We report on a simulation study that compares the power curves and show that it is not necessary to use resampling tests with a statistic based on the sample mean.
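A minimal Python sketch of the sign-permutation test referred to above, for the null hypothesis that a symmetric distribution is centred at 0, using the sample mean as the statistic; a Monte Carlo sample of sign flips stands in for full enumeration:

import numpy as np

def sign_permutation_test(x, n_perm=5000, seed=0):
    # two-sided permutation test of H0: location = 0 under symmetry,
    # obtained by randomly flipping the signs of the observations
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    observed = abs(x.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, len(x)))
    perm_means = np.abs((signs * x).mean(axis=1))
    return (np.sum(perm_means >= observed) + 1) / (n_perm + 1)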
999.
In this paper, non-parametric tests for homogeneity of several populations against location-type alternatives are proposed. All possible subsamples of a fixed size are drawn from each sample and their maxima and minima are computed. One class of tests is obtained using these subsample minima, whereas the other class uses the subsample maxima. Tests belonging to these two classes have been compared with many of the presently available tests in terms of their Pitman asymptotic relative efficiency. Some members of the proposed classes of tests prove to be robust in terms of efficiency.
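The building blocks of these tests, the minima and maxima over all size-k subsamples of a sample, are easy to compute; the Python sketch below stops there, since the actual test statistics of the paper are not reproduced:

import numpy as np
from itertools import combinations

def subsample_extremes(sample, k):
    # minima and maxima of all size-k subsamples of a univariate sample
    mins = [min(c) for c in combinations(sample, k)]
    maxs = [max(c) for c in combinations(sample, k)]
    return np.array(mins), np.array(maxs)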
1000.
We are interested in comparing logistic regressions for several test treatments or populations with a logistic regression for a standard treatment or population. The research was motivated by some real-life problems, which are discussed as data examples. We propose a step-down likelihood ratio method for declaring differences between the test treatments or populations and the standard treatment or population. Competitors based on the sequentially rejective Bonferroni Wald statistic, the sequentially rejective exact Wald statistic and Reiersøl's statistic are also discussed. It is shown that the proposed method asymptotically controls the probability of type I error. A Monte Carlo simulation shows that the proposed method performs well for relatively small sample sizes, outperforming its competitors.
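A single building block of such a procedure can be sketched in Python with statsmodels: a likelihood-ratio comparison of a pooled logistic regression (the test treatment shares the standard treatment's intercept and slope) against separate fits. The step-down aspect, which applies such comparisons sequentially across several test treatments, is not shown here, and the degrees of freedom assume a single covariate:

import numpy as np
import statsmodels.api as sm
from scipy import stats

def lrt_logistic_vs_standard(x_std, y_std, x_test, y_test):
    # LRT p-value for H0: the test treatment shares the standard treatment's
    # logistic regression (same intercept and slope, one covariate)
    def llf(x, y):
        X = sm.add_constant(np.asarray(x, dtype=float))
        return sm.Logit(np.asarray(y), X).fit(disp=0).llf
    pooled = llf(np.concatenate([x_std, x_test]),
                 np.concatenate([y_std, y_test]))
    separate = llf(x_std, y_std) + llf(x_test, y_test)
    return stats.chi2.sf(2.0 * (separate - pooled), df=2)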