A total of 3,207 results were found; search time 0 ms.
31.
In this paper, we consider the estimation of the stress–strength parameter R = P(Y < X) when X and Y are independent and both follow modified Weibull distributions with the same two shape parameters but different scale parameters. The Markov chain Monte Carlo sampling method is used for posterior inference on the reliability of the stress–strength model. The maximum-likelihood estimator of R and its asymptotic distribution are obtained. Based on the asymptotic distribution, a confidence interval for R can be constructed using the delta method. We also propose a bootstrap confidence interval for R. The Bayesian estimators under a balanced loss function, using informative and non-informative priors, are derived. The different methods and the corresponding confidence intervals are compared using Monte Carlo simulations.
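The paper itself uses modified Weibull distributions; as a minimal sketch of the stress–strength setup, the standard two-parameter Weibull with a common shape k admits a closed form for R = P(Y < X), which a crude Monte Carlo estimate can be checked against. All parameter values below are assumptions for illustration.

```python
import numpy as np

# Standard Weibull with common shape k and scales lam_x, lam_y:
# R = P(Y < X) = lam_x**k / (lam_x**k + lam_y**k)  (closed form)
k = 2.0
lam_x, lam_y = 2.0, 1.0
R_exact = lam_x**k / (lam_x**k + lam_y**k)

# Crude Monte Carlo estimate of R as a sanity check on the closed form
rng = np.random.default_rng(0)
n = 200_000
x = lam_x * rng.weibull(k, n)   # numpy's weibull() draws with scale 1
y = lam_y * rng.weibull(k, n)
R_mc = float(np.mean(y < x))
```

With these values R_exact = 0.8, and the Monte Carlo estimate agrees to about two decimal places at this sample size.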
32.
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) arising in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to such an optimization problem. This article describes an optimization algorithm designed for unconstrained optimization problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
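A sketch of this objective class (convex data-fit term V plus a quadratic complexity penalty) is penalized logistic regression, which the abstract names as a test problem. The data, penalty weight lam, and starting point below are all assumptions; BFGS here is SciPy's generic implementation, not the paper's specialized algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
lam = 1.0  # penalty weight (assumption)

def objective(beta):
    eta = X @ beta
    # Convex V: logistic negative log-likelihood, via the stable logaddexp
    nll = np.sum(np.logaddexp(0.0, eta) - y * eta)
    # Known quadratic complexity penalty
    return nll + 0.5 * lam * (beta @ beta)

res = minimize(objective, np.zeros(p), method="BFGS")
```

Since the objective is smooth and strictly convex (the ridge term guarantees this), BFGS converges to the unique minimizer from any starting point.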
33.
Testing the equal-means hypothesis of a bivariate normal distribution with homoscedastic variates when the data are incomplete is considered. If the correlation parameter, ρ, is known, the well-known theory of the general linear model is easily employed to construct the likelihood ratio test for the two-sided alternative. A statistic, T, for the case of ρ unknown is proposed by direct analogy to the likelihood ratio statistic when ρ is known. The null and non-null distributions of T are investigated by Monte Carlo techniques. It is concluded that T may be compared to the conventional t distribution for testing the null hypothesis, and that this procedure results in a substantial increase in power-efficiency over the procedure based on the paired t test, which ignores the incomplete data. A Monte Carlo comparison with two statistics proposed by Lin and Stivers (1974) suggests that the test based on T is more conservative than either of their statistics.
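The abstract does not give T in closed form, so it is not reproduced here; as a hedged sketch, the following generates incomplete bivariate normal data of the kind described and computes the baseline procedure from the abstract, the paired t test on complete pairs only. All parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
# Complete pairs: bivariate normal with a mean shift in the first variate
pairs = rng.multivariate_normal([0.3, 0.0], cov, size=20)
# Incomplete cases: the second variate was never observed
x_only = rng.normal(0.3, 1.0, size=15)

# Baseline from the abstract: a paired t test that discards the
# incomplete observations entirely
t_stat, p_val = stats.ttest_rel(pairs[:, 0], pairs[:, 1])
```

The paper's point is that a statistic using x_only as well can be markedly more power-efficient than this baseline.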
34.
Graphical methods have played a central role in the development of statistical theory and practice. This presentation briefly reviews some of the highlights in the historical development of statistical graphics and gives a simple taxonomy that can be used to characterize the current use of graphical methods. This taxonomy is used to describe the evolution of the use of graphics in some major statistical and related scientific journals.

Some recent advances in the use of graphical methods for statistical analysis are reviewed, and several graphical methods for the statistical presentation of data are illustrated, including the use of multicolor maps.
35.
This article gives a method for obtaining accurate (5-decimal-place) estimates of nine common cumulative distributions. Starting with a positive series expansion, we use the ratio of each term to the preceding term and proceed as with a geometric series (the ratio may involve the term number). This avoids calculating terms in the numerator or denominator that can be large enough to overflow, or small enough to underflow, the machine. The method is fast because it eliminates the need to calculate each term of the series in its entirety.
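A minimal sketch of the ratio idea, using the Poisson CDF as an assumed example (the abstract does not list its nine distributions): each term is obtained from its predecessor via the term-to-term ratio, so no factorial or large power is ever formed in full.

```python
import math

def poisson_cdf(m, lam):
    """P(X <= m) for X ~ Poisson(lam), summed via term-to-term ratios."""
    term = math.exp(-lam)          # first term, P(X = 0)
    total = term
    for j in range(m):
        term *= lam / (j + 1)      # ratio t_{j+1}/t_j involves the term number
        total += term
    return total
```

Each P(X = j) = exp(-lam) * lam**j / j! would overflow for large j if computed directly; the running-ratio form keeps every intermediate quantity moderate in size.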
36.
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch–Godfrey test for autocorrelated errors are studied in situations where the error terms follow either normal or non-normal distributions, and where these errors follow either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy. The tests perform better (comparably to the normally distributed case) when the errors are less heavy-tailed.
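For a single equation (not the systemwise generalisation studied here), the Breusch–Godfrey test reduces to an auxiliary regression of the OLS residuals on the regressors and lagged residuals, with LM statistic n·R². A NumPy-only sketch under assumed AR(1) errors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
# AR(1) errors with rho = 0.7 (assumption)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

# OLS fit of the original model
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Auxiliary regression: residuals on regressors plus one lagged residual
u_lag = np.concatenate([[0.0], u[:-1]])   # pre-sample residual set to 0
Z = np.column_stack([X, u_lag])
g, *_ = np.linalg.lstsq(Z, u, rcond=None)
r2 = 1.0 - np.sum((u - Z @ g) ** 2) / np.sum((u - u.mean()) ** 2)
lm_stat = n * r2   # ~ chi2(1) under the null of no autocorrelation
```

With rho = 0.7 the statistic is far beyond the chi-squared(1) critical value, so the test correctly rejects the no-autocorrelation null.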
37.
We study the influence of a single data case on the results of a statistical analysis. This problem has been addressed in several articles for linear discriminant analysis (LDA). Kernel Fisher discriminant analysis (KFDA) is a kernel-based extension of LDA. In this article, we study the effect of atypical data points on KFDA and develop criteria for identifying cases that have a detrimental effect on the classification performance of the KFDA classifier. We find that the criteria are successful in identifying cases whose omission from the training data, prior to obtaining the KFDA classifier, results in reduced error rates.
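KFDA itself requires kernel machinery; as a sketch of the case-deletion idea only, the following uses plain (linear) two-class Fisher discriminant analysis as a stand-in and measures each case's influence as the change in training error when that case is omitted. The data and the planted atypical case are assumptions.

```python
import numpy as np

def fda_direction(X, y):
    # Two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0)
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

def error_rate(Xtr, ytr, Xte, yte):
    w = fda_direction(Xtr, ytr)
    # Threshold at the projected midpoint of the two class means
    thr = 0.5 * (Xtr[ytr == 0].mean(0) + Xtr[ytr == 1].mean(0)) @ w
    return np.mean((Xte @ w > thr).astype(int) != yte)

rng = np.random.default_rng(3)
n = 60
X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.r_[np.zeros(n, int), np.ones(n, int)]
X[0] = [4.0, 4.0]   # an atypical class-0 case placed deep inside class 1

base = error_rate(X, y, X, y)
# Case-deletion influence: change in error when each case is omitted
influence = np.array([
    base - error_rate(np.delete(X, i, 0), np.delete(y, i, 0), X, y)
    for i in range(len(y))
])
```

Cases with large positive influence are those whose omission reduces the error rate, which is the flavour of criterion the article develops for the kernelized classifier.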
38.
In a changing climate, changes in the timing of seasonal events such as floods and flowering should be assessed using circular methods. Six different methods for clustering on a circle and one linear method are compared across different locations, spreads, and sample sizes. The best results are obtained when clusters are well separated and the number of observations in each cluster is approximately equal. Simulations of flood-like distributions are used to assess and explore the clustering methods. In general, k-means provides results close to those expected; some other methods perform well under specific conditions, but no single method is exemplary.
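One way to adapt k-means to circular data, sketched here under assumed flood-like timing clusters, is to embed each angle as a point on the unit circle so that Euclidean distances respect the wrap-around at 0/2π; the deterministic farthest-point initialization is an assumption of this sketch, not necessarily the paper's choice.

```python
import numpy as np

def circular_kmeans(theta, k, iters=50):
    # Embed angles on the unit circle so Euclidean k-means handles wrap-around
    P = np.column_stack([np.cos(theta), np.sin(theta)])
    # Deterministic farthest-point initialization (an assumption of this sketch)
    centers = [P[0]]
    for _ in range(1, k):
        d = np.min([((P - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(P[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):  # Lloyd iterations
        labels = np.argmin(((P[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                m = P[labels == j].mean(0)
                centers[j] = m / np.linalg.norm(m)  # project mean back to circle
    return labels

rng = np.random.default_rng(4)
# Two well-separated seasonal clusters, one straddling the 0/2*pi boundary
theta = np.concatenate([
    rng.normal(0.0, 0.2, 100) % (2 * np.pi),   # around "day zero"
    rng.normal(np.pi, 0.2, 100),               # around mid-year
])
labels = circular_kmeans(theta, 2)
```

A linear method applied to the raw angles would split the boundary-straddling cluster in two; the circular embedding keeps it intact.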
39.
Imputation methods for missing data on a time-dependent variable within time-dependent Cox models are investigated in a simulation study. Quality-of-life (QoL) assessments were removed from complete simulated datasets, in which QoL is positively related to disease-free survival (DFS) and delayed chemotherapy is related to DFS, under missing-at-random (MAR) and missing-not-at-random (MNAR) mechanisms. Standard imputation methods were applied before analysis. Method performance was influenced by the missing-data mechanism, with one exception for simple imputation. The greatest bias occurred under MNAR with large effect sizes. It is important to investigate the missing-data mechanism carefully.
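As a minimal sketch of the "simple imputation" end of the spectrum studied here, last observation carried forward (LOCF) on one subject's scheduled QoL assessments; the values and the missing visits are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# Scheduled QoL assessments for one simulated subject (values are assumptions)
qol = rng.normal(70.0, 5.0, 10)
observed = qol.copy()
observed[[3, 4, 7]] = np.nan   # assessments removed, mimicking missingness

# Last observation carried forward: fill each gap with the previous value
imputed = observed.copy()
for t in range(1, len(imputed)):
    if np.isnan(imputed[t]):
        imputed[t] = imputed[t - 1]
```

Under MNAR dropout (e.g. sicker patients skipping assessments), LOCF freezes the last healthy-looking value forward, which is one route to the bias the abstract reports.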
40.
The performance of the balanced half-sample, jackknife, and linearization methods for estimating the variance of the combined ratio estimate is studied by means of a computer simulation using artificially generated, non-normally distributed populations.

The results of this investigation demonstrate that the variance estimates for the combined ratio estimate may be highly biased and unstable when the underlying distributions are non-normal. This is particularly true when the number of observations available from each stratum is small.
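The combined ratio estimator is stratified, with jackknifing done within strata; as a simplified unstratified sketch of the delete-one jackknife variance estimate for a ratio, using an assumed skewed (non-normal) population:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 30
# Skewed (non-normal) population, echoing the simulation design; an assumption
x = rng.gamma(2.0, 2.0, n)
y = 3.0 * x + rng.gamma(1.0, 1.0, n)

r_hat = y.sum() / x.sum()   # unstratified ratio estimate, for illustration

# Delete-one jackknife: recompute the ratio with each unit removed
r_minus = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
jk_var = (n - 1) / n * np.sum((r_minus - r_minus.mean()) ** 2)
```

With small per-stratum samples and heavy skew, replications like r_minus vary erratically, which is the instability the simulation study documents.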