21.
Suppose that the length of time in years for which a business operates until failure has a Pareto distribution. Let x(1) ≤ x(2) ≤ … ≤ x(k) denote the survival lifetimes of the first k of a random sample of n businesses. Bayesian predictions are to be made on the ordered failure times of the remaining (n − k) businesses, using the conditional probability density function. Examples are given to illustrate our results.
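The censored-sampling setup of this abstract can be illustrated with a short Python sketch. This only simulates Pareto lifetimes and extracts the first k order statistics; it is not the paper's Bayesian predictive procedure, and the function names and unit scale parameter are assumptions.

```python
import random

def pareto_sample(n, alpha, scale=1.0, seed=0):
    """Draw n Pareto(alpha) lifetimes via inverse-CDF sampling.
    CDF: F(x) = 1 - (scale/x)**alpha for x >= scale."""
    rng = random.Random(seed)
    return [scale * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def censored_order_stats(sample, k):
    """Return the k smallest lifetimes x(1) <= ... <= x(k): the failures
    observed so far in the abstract's Type-II censoring setup."""
    return sorted(sample)[:k]

# Illustrative run: observe the first k = 5 of n = 10 business lifetimes;
# the remaining 5 ordered failure times are the prediction targets.
lifetimes = pareto_sample(10, alpha=2.0)
observed = censored_order_stats(lifetimes, 5)
```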
22.
The treatment sum of squares in the one-way analysis of variance can be expressed in two different ways: as a sum of comparisons between each treatment and the remaining treatments combined, or as a sum of comparisons between the treatments two at a time. When comparisons between treatments are made with the Wilcoxon rank sum statistic, these two expressions lead to two different tests; namely, that of Kruskal and Wallis and one which is essentially the same as that proposed by Crouse (1961, 1966). The latter statistic is known to be asymptotically distributed as a chi-squared variable when the numbers of replicates are large. Here it is shown to be asymptotically normal when the replicates are few but the number of treatments is large. For all combinations of numbers of replicates and treatments its empirical distribution is well approximated by a beta distribution.
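The Kruskal–Wallis statistic referenced above can be computed directly from joint ranks. A self-contained sketch (the standard Kruskal–Wallis H, not the paper's Crouse-type pairwise decomposition):

```python
def midranks(values):
    """Ranks 1..N, with tied observations given their average (mid) rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def kruskal_wallis(groups):
    """H = 12/(N(N+1)) * sum_i R_i^2 / n_i - 3(N+1),
    where R_i is the rank sum of group i in the pooled ranking."""
    flat = [x for g in groups for x in g]
    N = len(flat)
    ranks = midranks(flat)
    total, pos = 0.0, 0
    for g in groups:
        ni = len(g)
        Ri = sum(ranks[pos:pos + ni])
        total += Ri * Ri / ni
        pos += ni
    return 12.0 / (N * (N + 1)) * total - 3.0 * (N + 1)
```

For two well-separated groups of three, `kruskal_wallis([[1, 2, 3], [4, 5, 6]])` gives H = 27/7 ≈ 3.857.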
23.
24.
The theory in Part I contained an error that was inferred from the output of a program, written in SAS by Eric P. Smith and David D. Morris. The program produces random BUS designs in accordance with the algorithm of Part I. The theory is here corrected by using a combinatorial argument that involves elementary number theory. The algorithm needs no change but its interpretation is now adjusted.
25.
The investigation of multi-parameter likelihood functions is simplified if the log likelihood is quadratic near the maximum, as then normal approximations to the likelihood can be accurately used to obtain quantities such as likelihood regions. This paper proposes that data-based transformations of the parameters can be employed to make the log likelihood more quadratic, and illustrates the method with one of the simplest bivariate likelihoods, the normal two-parameter likelihood.
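The normal two-parameter likelihood named above is easy to write down, and a log-scale reparametrisation of σ is one common transformation that tends to make the surface more quadratic. A minimal sketch (an illustration of the idea, not the paper's data-based transformation):

```python
import math

def normal_loglik(data, mu, sigma):
    """Log likelihood of N(mu, sigma^2) for an iid sample."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -n * math.log(sigma) - ss / (2 * sigma ** 2) \
           - 0.5 * n * math.log(2 * math.pi)

def normal_loglik_phi(data, mu, phi):
    """Same likelihood after reparametrising sigma = exp(phi);
    working on the log scale often makes the log likelihood
    closer to quadratic near its maximum."""
    return normal_loglik(data, mu, math.exp(phi))
```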
26.
Let F(x) be a life distribution. An exact test is given for testing H0: F is exponential, versus H1: F ∈ NBUE (NWUE), along with a table of critical values for n = 5(1)80 and n = 80(5)65. An asymptotic test is made available for large values of n, where the standard normal table can be used for testing.
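The abstract does not name its test statistic, so the following is a hedged sketch only: it uses Barlow's cumulative total-time-on-test (TTT) statistic, a standard choice for testing exponentiality against NBUE/NWUE alternatives, with critical values obtained by Monte Carlo rather than from the paper's table.

```python
import random

def ttt_statistic(sample):
    """Cumulative total-time-on-test statistic V = sum_{i<n} T_i / T_n.
    Under H0 (exponentiality), V is distributed as a sum of n-1 iid
    Uniform(0,1) variables, hence asymptotically normal."""
    x = sorted(sample)
    n = len(x)
    ttt, total, prev = [], 0.0, 0.0
    for i, xi in enumerate(x):
        total += (n - i) * (xi - prev)  # total time on test up to x(i)
        ttt.append(total)
        prev = xi
    return sum(t / ttt[-1] for t in ttt[:-1])

def mc_critical_value(n, alpha=0.05, reps=20000, seed=1):
    """Upper-tail critical value under H0, by simulating exponential samples."""
    rng = random.Random(seed)
    vals = sorted(ttt_statistic([rng.expovariate(1.0) for _ in range(n)])
                  for _ in range(reps))
    return vals[int((1 - alpha) * reps)]
```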
27.
A 2 × 2 contingency table can often be analysed in an exact fashion by using Fisher's exact test and in an approximate fashion by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than establishing that it is necessary to set a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical and experimental study from which they ascertain that the value of E* (which is very variable and frequently a good deal greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which in turn depends on the number of tails in the test, the alpha error used, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors between 1% and 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal.
Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal, and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
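The quantities the abstract turns on, the minimum expected count E and the Yates-corrected chi-squared statistic, are both one-liners for a 2 × 2 table. A minimal sketch (the standard textbook formulas, not the paper's E* estimation procedure):

```python
def expected_counts(table):
    """Expected counts E_ij = (row total_i)(column total_j)/N
    for a 2x2 table given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    N = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    return [[r * cl / N for cl in cols] for r in rows]

def min_expected(table):
    """The minimum expected quantity E discussed in the abstract."""
    return min(min(row) for row in expected_counts(table))

def yates_chi2(table):
    """Chi-squared with Yates' continuity correction:
    X^2 = N(|ad - bc| - N/2)^2 / (r1 r2 c1 c2), floored at zero."""
    (a, b), (c, d) = table
    N = a + b + c + d
    num = N * max(abs(a * d - b * c) - N / 2, 0) ** 2
    return num / ((a + b) * (c + d) * (a + c) * (b + d))
```

For the table [[1, 9], [9, 1]] every expected count is 5, so E = 5 and the corrected statistic is 9.8.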
28.
Classical nondecimated wavelet transforms are attractive for many applications. When the data come from complex or irregular designs, the use of second generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second generation wavelet transform is not obvious. In this paper we propose a new ‘nondecimated’ lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behavior. Our approach also allows for embedding adaptivity in the transform, i.e. wavelet functions can be constructed such that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method has higher performance than competing techniques able to work on irregular data. Our construction also opens avenues for generating a ‘best’ representation, which we shall explore.
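The lifting scheme underlying this work splits a signal into even and odd samples, predicts the odd from the even, and updates the even with the resulting details. A minimal sketch of one classical Haar lifting level, which is the textbook split/predict/update pattern and not the paper's one-coefficient-at-a-time nondecimated algorithm:

```python
def haar_lift_forward(x):
    """One level of the Haar wavelet transform via lifting.
    x must have even length."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order: exact reconstruction."""
    even = [s - d / 2 for s, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because each lifting step is invertible by construction, the round trip reconstructs the signal exactly.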
29.
In this article, we introduce three new distribution-free Shewhart-type control charts that exploit run and Wilcoxon-type rank-sum statistics to detect possible shifts of a monitored process. Exact formulae for the alarm rate, the run length distribution, and the average run length (ARL) are all derived. A key advantage of these charts is that, due to their nonparametric nature, the false alarm rate (FAR) and the in-control run length distribution are the same for all continuous process distributions. Tables are provided for the implementation of the charts for some typical FAR values. Furthermore, a numerical study carried out reveals that the new charts are quite flexible and efficient in detecting shifts to Lehmann-type out-of-control situations.
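The rank-sum ingredient of such charts is simple to compute: pool a reference sample with each incoming test sample and plot the test sample's rank sum against control limits. A hedged sketch of that generic mechanism (the limits here are placeholders, not the paper's tabulated values, and no run rules are included):

```python
def wilcoxon_rank_sum(reference, test):
    """Rank sum of the test sample within the pooled data.
    Assumes continuous (tie-free) observations, as in the
    distribution-free setting of the abstract."""
    pooled = sorted(reference + test)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    return sum(rank[v] for v in test)

def chart_signal(reference, test, lcl, ucl):
    """Shewhart-style rule: signal out-of-control when the plotted
    statistic falls outside the control limits [lcl, ucl]."""
    w = wilcoxon_rank_sum(reference, test)
    return w < lcl or w > ucl
```

Because the statistic depends on the data only through ranks, its in-control distribution is the same for every continuous process distribution, which is the distribution-free property the abstract highlights.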
30.
Model checking with discrete data regressions can be difficult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
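The binned-residual discrepancy the abstract recommends amounts to sorting observations by a predictor, cutting them into bins, and averaging the raw residuals y − p̂ within each bin. A minimal sketch of that computation (equal-count bins chosen for simplicity; the binning rule and function name are assumptions, not the paper's exact recipe):

```python
def binned_residuals(x, y, p_hat, n_bins=5):
    """Average raw residuals (y - p_hat) within equal-count bins of the
    predictor x: the 'plots of binned residuals versus predictors'
    discrepancy check for discrete-data regressions.
    Returns (bin mean of x, bin mean residual) pairs."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    size = len(x) // n_bins
    out = []
    for b in range(n_bins):
        # the last bin absorbs any leftover observations
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        xs = sum(x[i] for i in idx) / len(idx)
        res = sum(y[i] - p_hat[i] for i in idx) / len(idx)
        out.append((xs, res))
    return out
```

Under a well-fitting model the bin-averaged residuals should hover near zero; systematic drift across bins flags a model failure.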