271.
The theory and properties of trend-free (TF) and nearly trend-free (NTF) block designs are well developed. Applications, however, have been hampered by the lack of a methodology for constructing such designs.

This article begins with a short review of the concepts and properties of TF and NTF block designs. The major contribution is an algorithm for the construction of linear and nearly linear TF block designs. The algorithm is implemented in a FORTRAN 77 program, provided in an appendix, for the IBM PC or compatible microcomputers; the program is also adaptable to other computers. Three sets of block designs generated by the program are given as examples.

A numerical example of the analysis of a linear trend-free balanced incomplete block design is provided.
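A linear trend-free block design (in the usual Bradley–Yeh sense) requires that, for every treatment, the linear orthogonal-polynomial scores of the plot positions it occupies sum to zero over the whole design. The following sketch is a minimal check of that condition, not the article's FORTRAN 77 construction algorithm; the example layout is hypothetical.

```python
import numpy as np

def is_linear_trend_free(design):
    """Check the linear trend-free condition for a block design.

    `design` is a (blocks x plots) array of treatment labels.  The design is
    linear trend-free if, for every treatment, the linear orthogonal-polynomial
    scores of the positions it occupies sum to zero over all blocks.
    """
    design = np.asarray(design)
    n_plots = design.shape[1]
    # Linear orthogonal-polynomial scores for positions 1..k: centred ranks.
    phi1 = np.arange(1, n_plots + 1) - (n_plots + 1) / 2.0
    totals = {}
    for block in design:
        for pos, treatment in enumerate(block):
            totals[treatment] = totals.get(treatment, 0.0) + phi1[pos]
    return all(abs(t) < 1e-9 for t in totals.values())

# Hypothetical layout: 3 treatments in 3 blocks of size 3 (a Latin square).
# It is linear trend-free because every treatment visits every position once.
layout = [[0, 1, 2],
          [1, 2, 0],
          [2, 0, 1]]
print(is_linear_trend_free(layout))   # True
```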
272.
In many engineering problems it is necessary to draw statistical inferences about the mean of a lognormal distribution from a complete sample of observations; statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. This paper presents simplified conservative methods for calculating confidence intervals and hypothesis tests for the lognormal mean. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis, taken over parameter values under the null hypothesis) equals the nominal level. For confidence intervals this means they are “wider” in some sense than their optimum or exact counterparts; applied to hypothesis tests, the term simply means that the equivalent confidence intervals are conservative. No claim of optimality is made for these procedures. These are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared with the new method in terms of asymptotic relative efficiency. The new methods are also compared with the optimum methods derived by Land (1971, 1973).
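For orientation only: if log X ~ N(μ, σ²), the lognormal mean is exp(μ + σ²/2), so an interval for the normal parameters does not translate directly into one for the mean. The sketch below computes a simple Cox-type approximate interval to illustrate that distinction; it is not the conservative procedure of the paper, and the data are simulated.

```python
import numpy as np
from scipy import stats

def cox_interval_lognormal_mean(x, conf=0.95):
    """Approximate (Cox-type) CI for the mean of a lognormal sample.

    The lognormal mean is exp(mu + sigma^2/2); the interval is built on the
    log scale for y = log(x) and then exponentiated.  This is an approximation,
    not a conservative interval.
    """
    y = np.log(np.asarray(x, dtype=float))
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    point = ybar + s2 / 2.0
    se = np.sqrt(s2 / n + s2 ** 2 / (2.0 * (n - 1)))
    z = stats.norm.ppf(0.5 + conf / 2.0)
    return np.exp(point - z * se), np.exp(point + z * se)

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.8, size=40)   # true mean = exp(1.32)
print(cox_interval_lognormal_mean(sample))
```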
273.
It appears to be common practice with ridge regression to decompose the total sum of squares, and to assign degrees of freedom, according to established least squares theory. This discussion notes the obvious fallacies of such an approach and introduces a decomposition based on orthogonality, with degrees of freedom based on expected mean squares, for non-stochastic k.
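One commonly cited alternative (not necessarily the decomposition introduced in this article) measures the degrees of freedom of a ridge fit by the trace of its hat matrix, tr[X(X'X + kI)⁻¹X'] = Σᵢ dᵢ²/(dᵢ² + k), which reduces to the least squares value p at k = 0. A minimal sketch with simulated data:

```python
import numpy as np

def ridge_effective_df(X, k):
    """Effective degrees of freedom of a ridge fit: trace of the ridge hat
    matrix, sum_i d_i^2 / (d_i^2 + k), with d_i the singular values of X."""
    d = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(d ** 2 / (d ** 2 + k)))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
for k in (0.0, 1.0, 10.0):
    print(k, ridge_effective_df(X, k))   # p = 5 at k = 0, shrinking as k grows
```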
274.
In this paper the single and product moments of order statistics from doubly truncated parabolic and skewed distributions are obtained. The Weibull distribution is also characterized through properties of its order statistics.
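The single moments in question are of the standard form E[X₍ᵣ₎ᵐ] = r·C(n,r) ∫ xᵐ f(x) F(x)ʳ⁻¹ (1−F(x))ⁿ⁻ʳ dx over the truncated support. The sketch below evaluates this integral numerically; the "parabolic" density f(x) ∝ 1 − x² and the truncation points are illustrative assumptions, not the paper's parametrisation.

```python
import numpy as np
from math import comb
from scipy import integrate

def trunc_parabolic(a, b):
    """Doubly truncated parabolic density f(x) ∝ 1 - x^2 on [a, b] ⊂ (-1, 1)
    (an illustrative choice only)."""
    norm, _ = integrate.quad(lambda x: 1 - x ** 2, a, b)
    pdf = lambda x: (1 - x ** 2) / norm
    cdf = lambda x: integrate.quad(pdf, a, x)[0]
    return pdf, cdf

def order_stat_moment(pdf, cdf, a, b, n, r, m=1):
    """E[X_(r)^m] for a sample of size n drawn from the density on [a, b]."""
    c = r * comb(n, r)
    integrand = lambda x: x ** m * pdf(x) * cdf(x) ** (r - 1) * (1 - cdf(x)) ** (n - r)
    val, _ = integrate.quad(integrand, a, b)
    return c * val

pdf, cdf = trunc_parabolic(-0.5, 0.9)
print(order_stat_moment(pdf, cdf, -0.5, 0.9, n=5, r=3))   # mean of the sample median
```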
275.
In this article the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order autoregressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that, for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial; a design with equally spaced time points is the second-best choice.
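The D-criterion behind such comparisons is det(X'V⁻¹X), where X carries the polynomial time trend and V the covariance implied by a random intercept plus AR(1) serial correlation. The sketch below, with purely illustrative variance and correlation values (σ_b² = 0.5, ρ = 0.6) and an arbitrary candidate point set, computes the relative D-efficiency of equally spaced time points; it is not the authors' optimisation code.

```python
import numpy as np

def d_criterion(times, order, sigma2_b, rho):
    """det(X' V^{-1} X) for a polynomial trend of the given order, a random
    intercept with variance sigma2_b, and AR(1) errors with correlation rho."""
    t = np.asarray(times, dtype=float)
    X = np.vander(t, N=order + 1, increasing=True)          # columns 1, t, t^2, ...
    V = sigma2_b + rho ** np.abs(np.subtract.outer(t, t))   # random intercept + AR(1)
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.det(M)

# Illustrative comparison for a quadratic trend on [0, 1] with 5 repeated measures.
equally_spaced = np.linspace(0, 1, 5)
candidate = [0.0, 0.1, 0.5, 0.9, 1.0]
p = 3   # number of regression parameters for the quadratic model
eff = (d_criterion(equally_spaced, 2, 0.5, 0.6) /
       d_criterion(candidate, 2, 0.5, 0.6)) ** (1 / p)
print(eff)   # D-efficiency of the equally spaced design relative to the candidate
```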
276.
ABSTRACT

Early detection with a low false alarm rate (FAR) is the main aim of outbreak detection as used in public health surveillance or in regard to bioterrorism. Multivariate surveillance is preferable to univariate surveillance because correlation between series (CBS) is recognized and incorporated. Sufficient reduction has proved a promising method for handling CBS, but has not previously been used when correlation within series (CWS) is present. Here we develop sufficient reduction methods that reduce a p-dimensional multivariate series to a univariate series of statistics shown to be sufficient for monitoring a sudden, but persistent, shift in the multivariate series mean. Correlation both within and between series is taken into account, as public health data typically exhibit both forms of association. Simultaneous and lagged changes and different shift sizes are investigated. A one-sided exponentially weighted moving average (EWMA) chart is used as the detection tool. The performance of the proposed method is compared with existing sufficient reduction methods, the parallel univariate method, and both VarR and Z charts. A simulation study using bivariate normal autoregressive data shows that the new method gives shorter delays and a lower FAR than the other methods, which have high FARs when CWS is clearly present.
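The detection stage can be illustrated with one common one-sided (reflected) EWMA recursion applied to an already reduced univariate statistic: Z_t = max(0, λ(X_t − target) + (1 − λ)Z_{t−1}), with an alarm once Z_t exceeds a threshold h. In the sketch below λ and h are illustrative values, not calibrated to any target FAR, and the sufficient reduction step itself is not shown.

```python
import numpy as np

def one_sided_ewma_alarm(x, lam=0.2, h=0.5, target=0.0):
    """Reflected one-sided EWMA: z_t = max(0, lam*(x_t - target) + (1-lam)*z_{t-1}).
    Returns the first time index at which z_t exceeds the threshold h, or None."""
    z = 0.0
    for t, xt in enumerate(np.asarray(x, dtype=float)):
        z = max(0.0, lam * (xt - target) + (1.0 - lam) * z)
        if z > h:
            return t
    return None

rng = np.random.default_rng(2)
# In-control N(0,1) observations with a persistent shift of +1 from time 50 onward.
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
print(one_sided_ewma_alarm(series))   # alarm time, typically shortly after t = 50
```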
277.
ABSTRACT

A new discrete distribution that depends on two parameters is introduced in this article. The geometric distribution is obtained from it as a special case. After analyzing some of its properties, such as moments and unimodality, recurrences for the probability mass function and differential equations for its probability generating function are derived. In addition, the parameters are estimated by maximum likelihood, numerically maximizing the log-likelihood function. Expected frequencies are calculated for several data sets to demonstrate the versatility of this discrete model.
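Because the two-parameter family itself is not spelled out in the abstract, the sketch below only illustrates the stated estimation strategy, numerically maximising a discrete log-likelihood, on the geometric special case; the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik_geometric(p, data):
    """Negative log-likelihood of a geometric distribution on 0, 1, 2, ...:
    P(X = k) = (1 - p)^k * p."""
    data = np.asarray(data)
    return -np.sum(data * np.log1p(-p) + np.log(p))

rng = np.random.default_rng(3)
sample = rng.geometric(0.3, size=200) - 1   # numpy's geometric starts at 1

res = minimize_scalar(neg_loglik_geometric, bounds=(1e-6, 1 - 1e-6),
                      method="bounded", args=(sample,))
print(res.x)   # numerical MLE, close to the closed-form 1 / (1 + sample mean)
```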
278.
ABSTRACT

Competing risks data are common in medical research, in which the lifetimes of individuals can be classified in terms of causes of failure. In survival or reliability studies it is also common that the patients (objects) are subject to both left censoring and right censoring, which is referred to as double censoring. The analysis of doubly censored competing risks data in the presence of covariates is the objective of this study. We propose a proportional hazards model for the analysis of doubly censored competing risks data, using the hazard rate functions of Gray (1988), while focusing upon one major cause of failure. We derive estimators for the regression parameter vector and the cumulative baseline cause-specific hazard rate function. Asymptotic properties of the estimators are discussed. A simulation study is conducted to assess the finite-sample behavior of the proposed estimators. We illustrate the method using real-life doubly censored competing risks data.
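To make the data structure concrete, the sketch below simulates doubly censored competing risks data: each subject has latent failure times from two causes, the earlier one determines the failing cause, and observation is restricted to a window [L, R] so that early failures are left censored and late ones right censored. This is only an illustrative data generator under assumed exponential hazards, not the estimator proposed in the article.

```python
import numpy as np

def simulate_doubly_censored_cr(n, rate1=0.10, rate2=0.05, left=2.0, right=20.0, seed=0):
    """Simulate doubly censored competing risks data.

    Latent times for causes 1 and 2 are exponential; the observed status is
    0 = left censored (failure before `left`), 3 = right censored (after `right`),
    otherwise 1 or 2 for the failing cause.  Purely illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    t1 = rng.exponential(1.0 / rate1, n)
    t2 = rng.exponential(1.0 / rate2, n)
    t = np.minimum(t1, t2)
    cause = np.where(t1 <= t2, 1, 2)
    obs_time = np.clip(t, left, right)
    status = np.where(t < left, 0, np.where(t > right, 3, cause))
    return obs_time, status

times, status = simulate_doubly_censored_cr(1000)
print(np.bincount(status))   # counts: left censored, cause 1, cause 2, right censored
```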
279.
ABSTRACT

Discrepancies are measures of the deviation between the empirical distribution of a set of points and the theoretical uniform distribution. In this way, discrepancy is a measure of uniformity that provides a way of constructing a special kind of space-filling design, namely uniform designs. Several discrepancies have been proposed in the recent literature. A brief, selective review of these measures, including some construction algorithms, is given in this paper, together with a critical discussion and some comparisons.
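As one concrete instance of such a measure, the centered L2-discrepancy of Hickernell has a closed form that can be computed directly from the design matrix. The sketch below implements that formula for points in [0, 1]^s; the example design is arbitrary, and lower values indicate a more uniform design.

```python
import numpy as np

def centered_l2_discrepancy(X):
    """Centered L2-discrepancy (Hickernell) of an n x s design with points in [0, 1]^s."""
    X = np.asarray(X, dtype=float)
    n, s = X.shape
    d = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** s
    term2 = (2.0 / n) * np.sum(np.prod(1 + 0.5 * d - 0.5 * d ** 2, axis=1))
    diff = np.abs(X[:, None, :] - X[None, :, :])
    prod = np.prod(1 + 0.5 * d[:, None, :] + 0.5 * d[None, :, :] - 0.5 * diff, axis=2)
    term3 = np.sum(prod) / n ** 2
    return np.sqrt(term1 - term2 + term3)

# Example: a 4-run, 2-factor design with levels mapped to (2i - 1) / (2n).
design = (2 * np.array([[1, 3], [2, 1], [3, 4], [4, 2]]) - 1) / 8.0
print(centered_l2_discrepancy(design))
```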
280.
Abstract

Research that uses administrative healthcare data to study patient outcomes requires the investigator to account for the patient's disease burden in order to reduce the potential for biased results. Here we develop a comorbidity summary score based on variable importance measures derived from several statistical and machine learning methods, and show that it has superior predictive performance to the Elixhauser and Charlson indices when used to predict 1-year, 5-year, and 10-year mortality. We used two large Veterans Administration cohorts to develop and validate the summary score, and compared predictive performance using the area under the ROC curve (AUC) and the Brier score.
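The two performance measures used for the comparison are standard and straightforward to compute. The sketch below evaluates them for two hypothetical risk scores on a simulated binary mortality outcome; it is not the authors' validation pipeline, and all names and data are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(4)
n = 5000
true_risk = rng.beta(2, 8, n)          # latent 1-year mortality risk (simulated)
died = rng.binomial(1, true_risk)      # simulated binary outcome

score_new = np.clip(true_risk + rng.normal(0, 0.05, n), 0, 1)   # sharper score
score_old = np.clip(true_risk + rng.normal(0, 0.15, n), 0, 1)   # noisier comparator

for name, score in [("new summary score", score_new), ("comparator index", score_old)]:
    print(name,
          "AUC =", round(roc_auc_score(died, score), 3),
          "Brier =", round(brier_score_loss(died, score), 3))
```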