61.
There is a considerable amount of literature dealing with inference about the parameters in a heteroscedastic one-way random-effects ANOVA model. In this paper, we primarily address the problem of improved quadratic estimation of the random-effect variance component. It turns out that estimators with smaller mean squared error than some standard unbiased quadratic estimators exist under quite general conditions. Improved estimators of the error variance components are also established.
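As a point of reference for the kind of comparison described above, the sketch below (not the paper's improved estimator) sets up a Monte Carlo mean-squared-error comparison between the standard unbiased ANOVA estimator of the random-effect variance component and a simple truncation-at-zero modification, in a balanced, homoscedastic one-way model; all sample sizes and variance values are illustrative.

```python
# Sketch (not the paper's estimator): the standard unbiased ANOVA estimator of the
# between-group variance component in a balanced one-way random-effects model,
# plus a simple truncation-at-zero modification, compared by Monte Carlo MSE.
import numpy as np

rng = np.random.default_rng(1)
k, n = 10, 5                        # groups, observations per group (hypothetical sizes)
sigma_a2, sigma_e2 = 1.0, 4.0       # true variance components (assumed values)

def anova_estimate(y):
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    msa = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)
    mse = np.sum((y - group_means[:, None]) ** 2) / (k * (n - 1))
    return (msa - mse) / n          # unbiased, but can be negative

est_u, est_t = [], []
for _ in range(5000):
    a = rng.normal(0.0, np.sqrt(sigma_a2), size=k)        # random effects
    e = rng.normal(0.0, np.sqrt(sigma_e2), size=(k, n))   # errors (homoscedastic here)
    y = a[:, None] + e
    s = anova_estimate(y)
    est_u.append(s)
    est_t.append(max(s, 0.0))       # truncated estimator: biased, smaller MSE

mc_mse = lambda v: np.mean((np.array(v) - sigma_a2) ** 2)
print("MSE unbiased:", mc_mse(est_u), " MSE truncated:", mc_mse(est_t))
```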
62.
One method of controlling the quality of incoming lots is through attribute sampling. To simultaneously control several (possibly dependent) attributes, properly chosen single attribute sampling plans can be merged into a multiple attribute sampling plan. The general form of such a plan is given and various alternatives are discussed. The multinomial distribution is used to develop formulae necessary for an analysis of a multiple attribute plan. Due to the lengthy nature of the calculations involved, a computer algorithm is outlined.
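A minimal sketch of the multinomial calculation behind such a plan, assuming two mutually exclusive defect types, a single sample of size n, and acceptance numbers c1 and c2; all numerical values below are hypothetical, not taken from the paper.

```python
# Acceptance probability of a (hypothetical) two-attribute sampling plan computed
# with the multinomial distribution, assuming mutually exclusive defect types.
import numpy as np
from scipy.stats import multinomial

n, c1, c2 = 50, 2, 1            # sample size and acceptance numbers (illustrative)
p1, p2 = 0.02, 0.01             # lot fractions defective for attributes 1 and 2

def accept_prob(n, c1, c2, p1, p2):
    p_good = 1.0 - p1 - p2
    prob = 0.0
    for d1 in range(c1 + 1):
        for d2 in range(c2 + 1):
            # Sum the multinomial probabilities of all accepted outcomes
            prob += multinomial.pmf([d1, d2, n - d1 - d2], n, [p1, p2, p_good])
    return prob

print("P(accept lot) =", accept_prob(n, c1, c2, p1, p2))
```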
63.
The Cornish-Fisher expansion of the Pearson type VI distribution is known to be reasonably accurate when both degrees of freedom are relatively large (say, greater than or equal to 5). However, when either or both degrees of freedom are less than 5, the accuracy of the computed percentage point begins to suffer, in some cases severely. To correct for this, the error surface in the degrees-of-freedom plane is modeled by least squares curve fitting for selected levels of tail probability (.025, .05, and .10), which can be used to adjust the percentage point obtained from the usual Cornish-Fisher expansion. This adjustment procedure produces a computing algorithm that computes percentage points of the Pearson type VI distribution at the above probability levels, accurate to at least ±1 in the third digit, in approximately 11 milliseconds per subroutine call on an IBM 370/145. The adjusted routine is valid for both degrees of freedom greater than or equal to 1.
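For illustration, the sketch below applies a generic fourth-order Cornish-Fisher expansion (built from the mean, variance, skewness, and excess kurtosis) to an F, i.e. Pearson type VI, distribution and compares it with the exact quantile from scipy; the paper's least-squares error-surface adjustment is not reproduced, and the degrees of freedom are illustrative.

```python
# Generic fourth-order Cornish-Fisher approximation to a percentage point,
# checked against the exact F quantile; upper-tail probabilities 0.10, 0.05,
# 0.025 correspond to p = 0.90, 0.95, 0.975 below.
import numpy as np
from scipy.stats import norm, f

def cornish_fisher_ppf(p, mean, var, skew, exkurt):
    z = norm.ppf(p)
    w = (z
         + (z**2 - 1) * skew / 6
         + (z**3 - 3*z) * exkurt / 24
         - (2*z**3 - 5*z) * skew**2 / 36)
    return mean + np.sqrt(var) * w

dfn, dfd = 10, 20                      # both degrees of freedom fairly large
m, v, s, k = [float(t) for t in f.stats(dfn, dfd, moments="mvsk")]
for p in (0.90, 0.95, 0.975):
    approx = cornish_fisher_ppf(p, m, v, s, k)
    exact = f.ppf(p, dfn, dfd)
    print(f"p={p}: Cornish-Fisher={approx:.4f}, exact={exact:.4f}")
```

With smaller degrees of freedom the approximation degrades, which is the regime the paper's adjustment is designed to repair.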
64.
Modeling data that are non-normally distributed with random effects is the major challenge in analyzing binomial data in split-plot designs. Seven methods for analyzing such data using mixed, generalized linear, or generalized linear mixed models are compared for the size and power of the tests. This study shows that analyzing random effects properly is more important than adjusting the analysis for non-normality. Methods based on mixed and generalized linear mixed models hold Type I error rates better than generalized linear models. Mixed model methods tend to have higher power than generalized linear mixed models when the sample size is small.
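A small sketch of the kind of data-generating process at issue, with a whole-plot random effect entering on the logit scale so that the binomial responses are overdispersed relative to a plain binomial GLM; all settings are illustrative and none of the seven methods is implemented here.

```python
# Binomial split-plot data: whole-plot random effects on the logit scale produce
# extra-binomial variation that a model must account for.
import numpy as np

rng = np.random.default_rng(7)
n_whole, n_sub, m = 12, 4, 20      # whole plots, subplots per whole plot, trials
beta0, beta_trt = -0.5, 0.8        # fixed effects (hypothetical)
sigma_wp = 1.0                     # whole-plot random-effect SD

u = rng.normal(0.0, sigma_wp, size=n_whole)               # whole-plot effects
trt = np.tile(np.arange(n_sub) % 2, n_whole)              # subplot treatment 0/1
eta = beta0 + beta_trt * trt + np.repeat(u, n_sub)        # linear predictor
p = 1.0 / (1.0 + np.exp(-eta))
y = rng.binomial(m, p)                                    # binomial counts

phat = y / m
print("Empirical variance of p-hat:        ", phat.var())
print("Average binomial variance p(1-p)/m: ", np.mean(phat * (1 - phat) / m))
# The excess of the first quantity over the second reflects the whole-plot
# random effect.
```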
65.
A unified approach is developed for testing hypotheses in the general linear model based on the ranks of the residuals. It complements the nonparametric estimation procedures recently reported in the literature. The testing and estimation procedures together provide a robust alternative to least squares. The methods are similar in spirit to least squares so that results are simple to interpret. Hypotheses concerning a subset of specified parameters can be tested, while the remaining parameters are treated as nuisance parameters. Asymptotically, the test statistic is shown to have a chi-square distribution under the null hypothesis. This result is then extended to cover a sequence of contiguous alternatives from which the Pitman efficacy is derived. The general application of the test requires the consistent estimation of a functional of the underlying distribution and one such estimate is furnished.
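The sketch below illustrates the flavour of such a rank-based test with a simplified aligned-rank statistic: Wilcoxon scores of the residuals from a reduced fit are combined into a quadratic form that is referred to a chi-square distribution. It uses least-squares alignment for brevity and is not the paper's exact procedure.

```python
# Simplified aligned-rank test of H0: beta2 = 0 in y = X1*beta1 + X2*beta2 + e.
import numpy as np
from scipy.stats import rankdata, chi2

rng = np.random.default_rng(3)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])        # nuisance design
X2 = rng.normal(size=(n, 2))                                  # parameters under test
y = X1 @ np.array([1.0, 0.5]) + rng.standard_t(df=3, size=n)  # H0 true here

# Align: residuals from the reduced fit (least squares used here for simplicity)
H1 = X1 @ np.linalg.solve(X1.T @ X1, X1.T)
e = y - H1 @ y

# Wilcoxon scores of the residual ranks
a = rankdata(e) / (n + 1) - 0.5
sigma_a2 = np.sum((a - a.mean()) ** 2) / (n - 1)

# Residualize X2 on X1 and form the chi-square quadratic form
X2c = X2 - H1 @ X2
S = X2c.T @ a
V = sigma_a2 * (X2c.T @ X2c)
T = S @ np.linalg.solve(V, S)
print("T =", T, " p-value =", chi2.sf(T, df=X2.shape[1]))
```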
66.
Mild to moderate skew in errors can substantially impact regression mixture model results; one approach for overcoming this includes transforming the outcome into an ordered categorical variable and using a polytomous regression mixture model. This is effective for retaining differential effects in the population; however, bias in parameter estimates and model fit warrant further examination of this approach at higher levels of skew. The current study used Monte Carlo simulations; 3000 observations were drawn from each of two subpopulations differing in the effect of X on Y. Five hundred simulations were performed in each of the 10 scenarios varying in levels of skew in one or both classes. Model comparison criteria supported the accurate two-class model, preserving the differential effects, while parameter estimates were notably biased. The appropriate number of effects can be captured with this approach but we suggest caution when interpreting the magnitude of the effects.
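For concreteness, the following sketch generates data of the type described, two subpopulations of 3,000 observations each with different effects of X on Y and right-skewed errors, and performs the transformation of the outcome into an ordered categorical variable; the slopes, error distribution, and number of categories are illustrative, and no mixture model is fitted here.

```python
# Two-class regression data with skewed errors, then ordinal coding of the outcome.
import numpy as np

rng = np.random.default_rng(11)
n_per_class = 3000                       # 3000 observations per subpopulation
x = rng.normal(size=2 * n_per_class)

# Two subpopulations with different effects of X on Y (slopes are illustrative)
slope = np.repeat([0.2, 0.8], n_per_class)
skewed_err = rng.chisquare(df=3, size=2 * n_per_class)   # right-skewed errors
skewed_err = (skewed_err - 3) / np.sqrt(6)               # centred, unit variance
y = slope * x + skewed_err

# Transform the outcome into an ordered categorical variable (quintiles here),
# the step that precedes fitting a polytomous regression mixture model
cutpoints = np.quantile(y, [0.2, 0.4, 0.6, 0.8])
y_ord = np.digitize(y, cutpoints)
print(np.bincount(y_ord))
```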
67.
Recursive partitioning algorithms separate a feature space into a set of disjoint rectangles. Then, usually, a constant is fitted in every partition. While this is a simple and intuitive approach, it may still lack interpretability as to how a specific relationship between dependent and independent variables may look. Or a particular model may be assumed or of interest, and there are a number of candidate variables that may non-linearly give rise to different model parameter values. We present an approach that combines generalized linear models (GLM) with recursive partitioning that offers enhanced interpretability of classical trees as well as providing an explorative way to assess a candidate variable's influence on a parametric model. This method conducts recursive partitioning of a GLM by (1) fitting the model to the data set, (2) testing for parameter instability over a set of partitioning variables, (3) splitting the data set with respect to the variable associated with the highest instability. The outcome is a tree where each terminal node is associated with a GLM. We show the method's versatility and suitability for gaining additional insight into the relationship of dependent and independent variables with two examples, modelling voting behaviour and a failure model for debt amortization, and compare it to alternative approaches.
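A simplified sketch of the three-step scheme, replacing the parameter-instability tests with a crude median-split likelihood-ratio check and using a logistic GLM; it is not the authors' implementation, and all tuning values (minimum node size, depth, alpha) are illustrative.

```python
# GLM-based recursive partitioning, heavily simplified: fit a GLM in each node,
# check each partitioning variable for a change in coefficients across a median
# split, and split on the variable with the strongest evidence of change.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def split_pvalue(y, X, z, family):
    """Likelihood-ratio-style check of coefficient stability across a median split of z."""
    pooled = sm.GLM(y, X, family=family).fit()
    left = z <= np.median(z)
    if left.sum() < 20 or (~left).sum() < 20:
        return 1.0, left
    fit_l = sm.GLM(y[left], X[left], family=family).fit()
    fit_r = sm.GLM(y[~left], X[~left], family=family).fit()
    lr = 2 * (fit_l.llf + fit_r.llf - pooled.llf)
    return chi2.sf(lr, df=X.shape[1]), left

def glm_tree(y, X, Z, family, alpha=0.05, depth=0, max_depth=2):
    """Recursively partition, refitting the GLM in each node."""
    best_p, best_j, best_left = 1.0, None, None
    for j in range(Z.shape[1]):
        p, left = split_pvalue(y, X, Z[:, j], family)
        if p < best_p:
            best_p, best_j, best_left = p, j, left
    if best_j is None or best_p >= alpha or depth >= max_depth:
        fit = sm.GLM(y, X, family=family).fit()
        print("  " * depth + f"leaf (n={len(y)}): coef = {np.round(fit.params, 3)}")
        return
    print("  " * depth + f"split on Z[:, {best_j}] (p = {best_p:.4f})")
    glm_tree(y[best_left], X[best_left], Z[best_left], family, alpha, depth + 1, max_depth)
    glm_tree(y[~best_left], X[~best_left], Z[~best_left], family, alpha, depth + 1, max_depth)

# Illustration: the X -> y slope changes sign with the first partitioning variable
rng = np.random.default_rng(5)
n = 1000
Z = rng.normal(size=(n, 3))
x = rng.normal(size=n)
eta = 0.2 + np.where(Z[:, 0] > 0, 1.5, -1.5) * x
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
glm_tree(y, sm.add_constant(x), Z, sm.families.Binomial())
```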
68.
The easily computed, one-sided confidence interval for the binomial parameter provides the basis for an interesting classroom example of scientific thinking and its relationship to confidence intervals. The upper limit can be represented as the sample proportion from a number of "successes" in a future experiment of the same sample size. The upper limit reported by most people corresponds closely to that producing a 95 percent classical confidence interval and has a Bayesian interpretation.
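One common way to compute such a one-sided upper limit is the exact beta-quantile (Clopper-Pearson-type) form sketched below; the abstract does not prescribe this particular formula, and the numbers are only an example.

```python
# Exact one-sided upper confidence limit for a binomial proportion.
from scipy.stats import beta

def upper_limit(x, n, conf=0.95):
    """One-sided upper confidence limit for p given x successes in n trials."""
    if x >= n:
        return 1.0
    return beta.ppf(conf, x + 1, n - x)

# Example: 3 successes in 20 trials
print(round(upper_limit(3, 20), 3))
# The limit can be read as the largest success proportion one would not be
# surprised to see, in the sense of the interval's coverage, in a future
# experiment of the same size.
```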
69.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position p_i = (i − 0.375)/(n + 0.25) yields a competitive regression test of fit for normality.
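A minimal sketch of the correlation-coefficient test with this plotting position, using empirical sampling to build the null distribution as the abstract describes; the sample size, number of replications, and alternative distribution are illustrative.

```python
# Correlation-coefficient test of normality with plotting position
# p_i = (i - 0.375)/(n + 0.25); small r indicates lack of fit.
import numpy as np
from scipy.stats import norm

def corr_test_stat(x):
    """Correlation between the ordered sample and normal quantiles at p_i."""
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    return np.corrcoef(np.sort(x), norm.ppf(p))[0, 1]

def critical_value(n, alpha=0.05, nrep=5000, seed=0):
    """Empirical lower critical value of the statistic under normality."""
    rng = np.random.default_rng(seed)
    stats = [corr_test_stat(rng.normal(size=n)) for _ in range(nrep)]
    return np.quantile(stats, alpha)

rng = np.random.default_rng(1)
n = 30
cv = critical_value(n)
x = rng.exponential(size=n)          # a skewed, non-normal sample
r = corr_test_stat(x)
print("r =", round(r, 4), " reject normality:", r < cv)
```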
70.
We compare the performance of seven robust estimators for the parameter of an exponential distribution. These include the debiased median and two optimally weighted one-sided trimmed means. We also introduce four new estimators: the Transform, Bayes, Scaled and Bicube estimators. We make the Monte Carlo comparisons for three sample sizes and six situations. We evaluate the comparisons in terms of a new performance measure, Mean Absolute Differential Error (MADE), and a premium/protection interpretation of MADE. We organize the comparisons to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE. The singly trimmed mean and Transform method define the efficient frontier of premium/protection.
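The exact definition of MADE and the four new estimators are not reproduced in the abstract, so the sketch below only illustrates the general setup: a Monte Carlo comparison of the maximum-likelihood estimator (the sample mean) and a median-based estimator of the exponential mean under clean and contaminated sampling, using plain mean absolute error as a stand-in performance measure.

```python
# Monte Carlo comparison of two estimators of the exponential mean theta:
# the sample mean (MLE) and median / ln(2), under clean and contaminated data.
import numpy as np

rng = np.random.default_rng(2)
theta, n, nrep = 1.0, 20, 5000          # true mean, sample size, replications

def simulate(contaminate):
    err_mean, err_med = [], []
    for _ in range(nrep):
        x = rng.exponential(theta, size=n)
        if contaminate:                  # roughly 10% of observations inflated tenfold
            idx = rng.random(n) < 0.10
            x[idx] *= 10
        err_mean.append(abs(x.mean() - theta))
        err_med.append(abs(np.median(x) / np.log(2) - theta))
    return np.mean(err_mean), np.mean(err_med)

for label, c in [("clean", False), ("contaminated", True)]:
    m, md = simulate(c)
    print(f"{label:>13}: mean abs error  MLE={m:.3f}  median/ln2={md:.3f}")
```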