771.
The Cornish-Fisher expansion of the Pearson type VI distribution is known to be reasonably accurate when both degrees of freedom are relatively large (say, at least 5). When either or both degrees of freedom fall below 5, however, the accuracy of the computed percentage point begins to suffer, in some cases severely. To correct for this, the error surface in the degrees-of-freedom plane is modeled by least-squares curve fitting at selected tail probabilities (.025, .05, and .10); the fitted surface is then used to adjust the percentage point obtained from the usual Cornish-Fisher expansion. This adjustment yields an algorithm that computes percentage points of the Pearson type VI distribution at these probability levels, accurate to at least ±1 in the third digit, in approximately 11 milliseconds per subroutine call on an IBM 370/145. The adjusted routine is valid whenever both degrees of freedom are at least 1.
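The paper's fitted adjustment surface is not reproduced in the abstract, but the underlying fourth-order Cornish-Fisher step can be sketched. As a stand-in for the Pearson type VI case, the sketch below applies the expansion to a gamma distribution, whose cumulants are known in closed form, and checks it against a Monte Carlo quantile; all function names are illustrative.

```python
import numpy as np
from statistics import NormalDist

def cornish_fisher_quantile(p, mean, var, skew, ekurt):
    """Fourth-order Cornish-Fisher approximation to the p-th quantile."""
    z = NormalDist().inv_cdf(p)
    w = (z
         + (z**2 - 1) * skew / 6
         + (z**3 - 3 * z) * ekurt / 24
         - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mean + np.sqrt(var) * w

# Gamma(k) has mean k, variance k, skewness 2/sqrt(k), excess kurtosis 6/k.
k = 10.0
approx = cornish_fisher_quantile(0.95, k, k, 2 / np.sqrt(k), 6 / k)

# Monte Carlo check of the approximation.
rng = np.random.default_rng(0)
exact = np.quantile(rng.gamma(k, size=1_000_000), 0.95)
```

With moderately large shape parameter the two agree to roughly the third digit, mirroring the accuracy regime the abstract describes.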
772.
Supersaturated designs are a large class of factorial designs that can be used to screen a large set of potentially active variables for the important factors. Their great advantage is that they reduce experimental cost drastically; their critical disadvantage is the confounding involved in the statistical analysis. In this article, we propose a method for analyzing data from several types of supersaturated designs. Modifications of widely used information criteria are given and applied to a variable-selection procedure for identifying the active factors. The effectiveness of the proposed method is demonstrated via simulated experiments and comparisons.
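The article's modified criteria are not specified in the abstract; as a baseline sketch of the general idea, ordinary BIC-guided forward selection on a simulated supersaturated data set (random ±1 design with two active factors; all names and settings are hypothetical) could look like:

```python
import numpy as np

def forward_select_bic(X, y, max_terms=5):
    """Greedy forward selection scored by BIC; stops when BIC stops improving."""
    n = X.shape[0]
    selected, best_bic = [], np.inf
    while len(selected) < max_terms:
        cand = None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            Xs = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(np.sum((y - Xs @ beta) ** 2))
            bic = n * np.log(rss / n + 1e-12) + (len(selected) + 1) * np.log(n)
            if bic < best_bic:
                best_bic, cand = bic, j
        if cand is None:
            break
        selected.append(cand)
    return selected

rng = np.random.default_rng(1)
n, p = 18, 30                      # supersaturated: more factors than runs
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 5.0 * X[:, 0] + 4.0 * X[:, 3] + rng.normal(0.0, 0.5, n)
active = forward_select_bic(X, y)
```

Because n < p, the full model cannot be fitted at once; the greedy search with an information criterion sidesteps that, which is the role the abstract's modified criteria play.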
773.
The cost and duration of many industrial experiments can be reduced using supersaturated designs, which screen a large set of potentially active variables for the important factors. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied extensively, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure called symmetrical uncertainty. This measure comes from information theory and underlies variable-selection algorithms developed in data mining; here it is used from another viewpoint to identify the important factors more directly. The method enables us to use supersaturated designs to analyze data from generalized linear models with a Bernoulli response. We evaluate our method on existing supersaturated designs obtained by the methods of Tang and Wu (1997) and Koukouvinos et al. (2008). The comparison is performed via simulation experiments, with Type I and Type II error rates calculated; receiver operating characteristic (ROC) curve methodology is applied as an additional statistical tool for performance evaluation.
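Symmetrical uncertainty itself has a standard information-theoretic definition, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]. A minimal sketch for discrete variables (the article's specific use within supersaturated-design analysis is not reproduced here):

```python
import numpy as np
from collections import Counter

def entropy(x):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(x)
    return -sum((c / n) * np.log2(c / n) for c in Counter(x).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))       # joint entropy
    mi = hx + hy - hxy                   # mutual information
    return 0.0 if hx + hy == 0 else 2 * mi / (hx + hy)

x = [0, 0, 1, 1, 0, 1, 0, 1]
su_self = symmetrical_uncertainty(x, x)                   # identical: 1.0
su_other = symmetrical_uncertainty(x, [0, 1, 0, 1, 0, 1, 0, 1])
```

SU equals 1 for perfectly dependent variables and approaches 0 for independent ones, which is what makes it usable as a factor-screening score.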
774.
Inference in generalized linear mixed models with crossed random effects is often made cumbersome by the high-dimensional intractable integrals in the marginal likelihood. This article presents two inferential approaches based on the marginal composite likelihood for the normal Bradley-Terry model. The two approaches are evaluated in a simulation study, and the asymptotic variances of the estimated variance component are then compared.
775.
In this article, we develop a model to study treatment, period, carryover, and other applicable effects in a crossover design with a time-to-event response. Because time-to-event outcomes on different treatment regimens within the crossover design are correlated within an individual, we adopt a proportional hazards frailty model. If the frailty is assumed to have a gamma distribution and the hazard rates are piecewise constant, the likelihood function has a closed-form expression. We illustrate the methodology with an application to data from an asthma clinical trial and run simulations investigating the model's sensitivity to data generated from different distributions.
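The closed form mentioned above rests on a standard fact: with a gamma frailty of mean 1 and variance θ, the marginal survival function is S(t) = (1 + θH(t))^(−1/θ), where H is the cumulative hazard; with piecewise-constant hazards H is piecewise linear. A minimal sketch with illustrative cut points and rates (not the paper's asthma data):

```python
import numpy as np

def cum_hazard(t, cuts, rates):
    """Piecewise-constant hazard: rates[i] applies on [cuts[i], cuts[i+1])."""
    edges = np.concatenate([cuts, [np.inf]])
    H = 0.0
    for lo, hi, lam in zip(edges[:-1], edges[1:], rates):
        H += lam * max(0.0, min(t, hi) - lo)
    return H

def marginal_survival(t, cuts, rates, theta):
    """Gamma frailty (mean 1, variance theta) integrated out in closed form."""
    return (1.0 + theta * cum_hazard(t, cuts, rates)) ** (-1.0 / theta)

cuts, rates, theta = [0.0, 1.0, 2.0], [0.5, 1.0, 2.0], 0.4
S = [marginal_survival(t, cuts, rates, theta) for t in (0.0, 1.0, 3.0)]
```

The same building block, evaluated at each subject's event and censoring times across periods, is what makes the full frailty likelihood tractable.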
776.
Modeling non-normally distributed data with random effects is the major challenge in analyzing binomial data from split-plot designs. Seven methods for analyzing such data using mixed, generalized linear, or generalized linear mixed models are compared with respect to the size and power of the tests. The study shows that modeling the random effects properly is more important than adjusting the analysis for non-normality. Methods based on mixed and generalized linear mixed models hold Type I error rates better than generalized linear models, and mixed-model methods tend to have higher power than generalized linear mixed models when the sample size is small.
777.
Response surface methodology is widely used for developing, improving, and optimizing processes in many fields. In this article, we present a method for constructing three-level designs for exploring and optimizing response surfaces by combining orthogonal arrays and covering arrays in a particular manner. The resulting designs achieve rotatability, good predictive performance, and efficient estimation of a second-order model.
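The article's particular combination of orthogonal and covering arrays is not detailed in the abstract. As background, one classical three-level ingredient, the strength-2 orthogonal array OA(9, 4, 3, 2), can be built from linear forms over GF(3) and its defining balance property verified directly:

```python
import numpy as np
from itertools import combinations, product

# OA(9, 4, 3, 2) over GF(3): columns x, y, x+y, x+2y (mod 3).
oa = np.array([(x, y, (x + y) % 3, (x + 2 * y) % 3)
               for x, y in product(range(3), repeat=2)])

# Strength 2: every pair of columns hits each of the 9 level pairs exactly once.
balanced = all(len(set(zip(oa[:, i], oa[:, j]))) == 9
               for i, j in combinations(range(4), 2))
```

This pairwise balance is what gives orthogonal-array-based designs their efficient main-effect estimation; covering arrays relax "exactly once" to "at least once" to cover interactions cheaply.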
778.
In some practical inferential situations, a finite mixture of distributions must be used to fit an adequate model for multimodal observations. In this article, using evidential analysis, we determine the sample size needed to support hypotheses about the mixture proportion and homogeneity. An Expectation-Maximization (EM) algorithm is used to evaluate the probability of strong misleading evidence, based on a modified likelihood ratio as the measure of support.
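The evidential sample-size calculation itself is not reproducible from the abstract, but the EM step for a mixture proportion can be sketched. The sketch below assumes, for simplicity, a two-component normal mixture with known component parameters, so only the mixing proportion is updated; all numerical settings are illustrative.

```python
import numpy as np

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def em_mixture_proportion(x, mu, sd, p=0.5, iters=200):
    """EM for the mixing proportion of a two-component normal mixture
    with known component means and standard deviations."""
    for _ in range(iters):
        r = p * norm_pdf(x, mu[0], sd[0])                 # E-step:
        r = r / (r + (1 - p) * norm_pdf(x, mu[1], sd[1])) # responsibilities
        p = r.mean()                                      # M-step
    return p

rng = np.random.default_rng(2)
n, true_p = 2000, 0.3
z = rng.random(n) < true_p
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(5.0, 1.0, n))
p_hat = em_mixture_proportion(x, mu=(0.0, 5.0), sd=(1.0, 1.0))
```

With well-separated components the responsibilities are nearly 0/1 and EM converges quickly to the empirical proportion.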
779.
In this paper, we consider the problem of determining the optimal numbers of repairable and replaceable components to maximize a system's reliability when both the cost of repairing components and the cost of replacing them with new ones are random. We formulate this as a nonlinear stochastic programming problem and obtain the solution through chance-constrained programming. We also consider the problem of finding the optimal maintenance cost for a given reliability requirement of the system; the solution is then obtained using the modified E-model. A numerical example is solved for both formulations.
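The paper's formulation is not given in the abstract. Under the common assumption of normally distributed costs c ~ N(μ, Σ), a chance constraint P(cᵀx ≤ B) ≥ α has the well-known deterministic equivalent μᵀx + z_α √(xᵀΣx) ≤ B, which is the core trick of chance-constrained programming. The sketch below checks that equivalence by Monte Carlo; all numbers are hypothetical.

```python
import numpy as np
from statistics import NormalDist

mu = np.array([4.0, 6.0])       # mean repair / replacement costs (hypothetical)
Sigma = np.diag([1.0, 2.25])    # cost variances, costs independent here
x = np.array([3.0, 2.0])        # numbers of repaired / replaced components
alpha = 0.95

# Deterministic equivalent: the smallest budget B with P(c @ x <= B) >= alpha.
z = NormalDist().inv_cdf(alpha)
B = mu @ x + z * np.sqrt(x @ Sigma @ x)

# Monte Carlo check that the chance constraint holds with equality at B.
rng = np.random.default_rng(3)
c = rng.multivariate_normal(mu, Sigma, size=200_000)
coverage = np.mean(c @ x <= B)
```

In the optimization itself, x would be a decision variable and the square-root term makes the constraint nonlinear, matching the abstract's nonlinear stochastic program.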
780.
In this article, approaches for exploiting mixtures of mixtures are extended using the Multiresolution family of probability density functions (MR pdf). The flexibility and local-analysis properties of the MR pdf facilitate locating subpopulations within a given population; two algorithms are provided for this purpose.

The MR model adapts to the different subpopulations more flexibly than traditional mixtures. In addition, the problems of identifiability of mixture distributions and of label switching do not arise in the MR pdf context.