141.
In the estimators t_3, t_4 and t_5 of Mukerjee, Rao & Vijayan (1987), b_yx and b_yz are partial regression coefficients of y on x and on z, respectively, based on the smaller sample. With this interpretation of b_yx and b_yz in t_3, t_4 and t_5, all the calculations in Mukerjee et al. (1987) are correct. In this connection, we also wish to make explicit that b_xz in t_5 is an ordinary, not a partial, regression coefficient. The 'corrected' MSEs of t_3, t_4 and t_5, as given in Ahmed (1998, Section 3), are computed on the assumption that our b_yx and b_yz are ordinary rather than partial regression coefficients. Indeed, we had no intention of proposing estimators based on the corresponding ordinary regression coefficients, which would lead to estimators inferior to those given by Kiregyera (1984). We accept responsibility for any notational confusion created by us and express regret to readers who have been confused by our notation. Finally, in view of the above, it may be noted that Tripathi & Ahmed's (1995) estimator t_0, quoted also in Ahmed (1998), is no better than t_5 of Mukerjee et al. (1987).
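The distinction at issue can be made concrete with a small numerical illustration. The sketch below is not taken from any of the papers cited; the data and names are purely illustrative. It contrasts the ordinary regression coefficient of y on x with the partial regression coefficients of y on x and on z obtained from a joint fit.

```python
# Illustrative only (not from the papers cited): ordinary vs partial
# regression coefficients on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)
x = 0.6 * z + rng.normal(size=n)                 # x is correlated with z
y = 1.5 * x + 0.8 * z + rng.normal(size=n)

# Ordinary regression coefficient: simple regression of y on x alone
b_yx_ordinary = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)

# Partial regression coefficients: multiple regression of y on x and z together
X = np.column_stack([np.ones(n), x, z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
b_yx_partial, b_yz_partial = beta[1], beta[2]

print(round(b_yx_ordinary, 2), round(b_yx_partial, 2), round(b_yz_partial, 2))
# The ordinary coefficient absorbs part of z's effect on y; the partial one does not.
```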
142.
Several models for studies of the tensile strength of materials have been proposed in the literature in which the size or length of the specimen is taken to be an important factor in its failure behaviour. An important model, developed on the basis of a cumulative damage approach, is the three-parameter extension of the Birnbaum–Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than traditional models that do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
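For reference, the two-parameter Birnbaum–Saunders fatigue model that the size-extended, three-parameter version builds on has a simple closed-form CDF; the sketch below shows only this base model and does not reproduce the size/length extension discussed in the paper.

```python
# Minimal sketch of the two-parameter Birnbaum-Saunders fatigue model only;
# the three-parameter size extension in the paper is not reproduced.
# CDF: F(t) = Phi((1/a) * (sqrt(t/b) - sqrt(b/t))), t > 0.
import numpy as np
from scipy.stats import norm

def bs_cdf(t, a, b):
    """CDF of the Birnbaum-Saunders(alpha=a, beta=b) distribution, t > 0."""
    t = np.asarray(t, dtype=float)
    return norm.cdf((np.sqrt(t / b) - np.sqrt(b / t)) / a)

print(bs_cdf([0.5, 1.0, 2.0], a=0.5, b=1.0))   # the median sits at t = b
```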
143.
144.
A supersaturated design is a design whose run size is not large enough to estimate all the main effects. It is commonly used in screening experiments, where the goal is to identify sparse and dominant active factors at low cost. In this paper, we study a variable selection method via the Dantzig selector, proposed by Candes and Tao [2007. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics 35, 2313–2351], to screen important effects. A graphical procedure and an automated procedure are suggested to accompany the method. Simulation shows that this method performs well compared with existing methods in the literature and is more efficient at estimating the model size.
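As a hedged illustration of the selector itself (the generic Candes–Tao formulation, not the authors' screening procedure), the Dantzig selector solves min ||beta||_1 subject to ||X'(y - X beta)||_inf <= lambda, which becomes a linear program after splitting beta into positive and negative parts. The lambda value and toy data below are arbitrary.

```python
# Hedged sketch of the generic Dantzig selector as an LP, not the authors' code.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    n, p = X.shape
    A = X.T @ X
    b0 = X.T @ y
    c = np.ones(2 * p)                                     # objective: ||beta||_1 = sum(u + v)
    # constraints:  A(u - v) <= b0 + lam   and   -A(u - v) <= lam - b0
    A_ub = np.vstack([np.hstack([A, -A]), np.hstack([-A, A])])
    b_ub = np.concatenate([b0 + lam, lam - b0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Toy supersaturated-style example: more factors than runs, few active effects
rng = np.random.default_rng(1)
n, p = 12, 20
X = rng.choice([-1.0, 1.0], size=(n, p))
beta_true = np.zeros(p)
beta_true[[2, 7]] = [3.0, -2.5]
y = X @ beta_true + 0.5 * rng.normal(size=n)
print(np.round(dantzig_selector(X, y, lam=4.0), 2))
```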
145.
The use of covariates in block designs is necessary when the covariates cannot be controlled in the way the blocking factor is controlled in the experiment. In this paper, we consider the situation where there is some flexibility in the choice of the values of the covariates. The choice of covariate values that, for a given block design, attains minimum variance for the estimation of each of the parameters has attracted attention in recent times. Optimum covariate designs in simple set-ups such as the completely randomised design (CRD), the randomised block design (RBD) and some series of balanced incomplete block designs (BIBDs) have already been considered. In this paper, optimum covariate designs are considered for the more complex set-ups of different partially balanced incomplete block (PBIB) designs, which are popular among practitioners. The optimum covariate designs depend heavily on the method of construction of the basic PBIB design. Different combinatorial arrangements and tools, such as orthogonal arrays, Hadamard matrices and different kinds of matrix products, viz. the Khatri–Rao product and the Kronecker product, have been conveniently used to construct optimum covariate designs with as many covariates as possible.
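The two matrix products named above are standard tools; the short sketch below (with arbitrary, purely illustrative matrices) shows how the Khatri–Rao product, taken here as the column-wise Kronecker product, differs in shape and structure from the full Kronecker product.

```python
# Illustrative only: Kronecker product vs column-wise Khatri-Rao product.
import numpy as np
from scipy.linalg import khatri_rao

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0],
              [1, 1]])

K = np.kron(A, B)          # Kronecker product: shape (2*3, 2*2) = (6, 4)
KR = khatri_rao(A, B)      # Khatri-Rao product: column-wise Kronecker, shape (6, 2)

print(K.shape, KR.shape)
# Column j of KR equals np.kron(A[:, j], B[:, j]).
```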
146.
Most studies of quality improvement deal with ordered categorical data from industrial experiments. Accounting for the ordering of such data plays an important role in effectively determining the optimal factor-level combination. This paper uses correspondence analysis to develop a procedure, based on Taguchi's statistic, for improving an ordered categorical response in a multifactor state system. Users may find the proposed procedure attractive because it relies on a simple and popular statistical tool for graphically identifying the really important factors and determining the levels that improve process quality. A case study on optimizing the polysilicon deposition process in a very large-scale integrated circuit is provided to demonstrate the effectiveness of the proposed procedure.
147.
The purpose of this article is to compare the efficiencies of several cluster randomized designs using the method of quantile dispersion graphs (QDGs). A cluster randomized design is used whenever subjects are randomized at the group level but analyzed at the individual level. Prior knowledge of the correlation between subjects within the same cluster is ordinarily needed to design such cluster randomized trials. Using the QDG approach, however, we are able to compare several cluster randomized designs without requiring any information on the intracluster correlation. For a given design, several quantiles of the power function, which are directly related to the effect size, are obtained for several effect sizes. These quantiles depend on the intracluster correlation present in the model. The dispersion of the quantiles over the space of the unknown intracluster correlation is determined and then depicted by the QDGs. Two applications of the proposed methodology are presented.
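As a rough, hypothetical illustration of the idea (not the article's actual derivation), the sketch below computes an approximate power function for a two-arm cluster randomized design using the standard design effect 1 + (m - 1)*rho, takes quantiles of that power over a range of effect sizes, and shows how those quantiles move as the unknown intracluster correlation rho varies; a QDG would plot this dispersion for competing designs.

```python
# Hypothetical illustration only: approximate power via the design effect
# 1 + (m-1)*rho, and quantiles of that power over a range of effect sizes
# for several values of the unknown intracluster correlation rho.
import numpy as np
from scipy.stats import norm

def approx_power(delta, k, m, rho, sigma=1.0, alpha=0.05):
    """Two-sided z-test power with k clusters of size m per arm."""
    design_effect = 1.0 + (m - 1.0) * rho
    se = np.sqrt(2.0 * sigma**2 * design_effect / (k * m))
    return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

effect_sizes = np.linspace(0.1, 0.6, 50)
for rho in (0.0, 0.05, 0.1, 0.2):
    p = approx_power(effect_sizes, k=8, m=20, rho=rho)
    print(rho, np.round(np.quantile(p, [0.25, 0.5, 0.75]), 3))
# The spread of these quantiles across rho is what a QDG depicts for each design.
```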
148.
This installment of "Serials Spoken Here" covers events that transpired between late September and late October 2008. Reported herein are two webinars, one on ONIX for Serials and the other on SUSHI, and two conferences: the eighty-fourth Annual Meeting of the Potomac Technical Processing Librarians and the New England Library Association's Annual Conference.
149.
In this article, we consider Bayesian inference procedures for testing for a unit root in stochastic volatility (SV) models. Unit-root tests for the persistence parameter of SV models, based on the Bayes factor (BF), have recently been introduced in the literature. In contrast, we propose a flexible class of priors that is non-informative over the entire support of the persistence parameter (including the non-stationarity region). In addition, we show that our model-fitting procedure is computationally efficient (using the software WinBUGS). Finally, we show that the proposed test procedures have good frequentist properties, achieving high statistical power while maintaining low total error rates. We illustrate these features of our method through extensive simulation studies, followed by an application to a real data set on exchange rates.
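For orientation, the canonical SV model underlying such tests can be written y_t = exp(h_t/2) * eps_t with h_t = mu + phi*(h_{t-1} - mu) + sig_eta * eta_t, the unit-root hypothesis being phi = 1. The sketch below only simulates this model to show the role of the persistence parameter; it does not reproduce the Bayes-factor test, the proposed priors, or the WinBUGS fitting described in the article.

```python
# Hedged sketch: simulate the canonical SV model to contrast a stationary
# persistence parameter with the unit-root case phi = 1.
import numpy as np

def simulate_sv(T, phi, mu=-1.0, sig_eta=0.2, seed=0):
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.normal()
    y = np.exp(h / 2) * rng.normal(size=T)   # returns with time-varying volatility
    return y, h

y_stat, _ = simulate_sv(1000, phi=0.95)      # stationary log-volatility
y_unit, _ = simulate_sv(1000, phi=1.0)       # unit root: log-volatility is a random walk
print(np.std(y_stat), np.std(y_unit))
```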
150.
Goodness-of-fit tests for the family of symmetric normal inverse Gaussian distributions are constructed. The tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data. An EM-type algorithm is employed for the estimation of the parameters involved in the test statistic. Monte Carlo results show that the new procedure is competitive with classical goodness-of-fit methods. An application with financial data is also included.
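As a generic sketch of the type of statistic involved (not the paper's exact test), a weighted-integral ECF statistic has the form T_n = n * integral |phi_n(t) - phi_0(t)|^2 w(t) dt. Below it is evaluated by simple quadrature against the characteristic function of a centred symmetric NIG law, exp(delta*(alpha - sqrt(alpha^2 + t^2))), with an arbitrary Gaussian weight and without the paper's standardization or EM estimation.

```python
# Generic weighted-integral ECF statistic, evaluated by a simple Riemann sum.
# Grid, weight, parameter values and the lack of standardization/EM estimation
# are all simplifications relative to the paper.
import numpy as np

def ecf(t, x):
    """Empirical characteristic function of the sample x on the grid t."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def sym_nig_cf(t, alpha, delta):
    """CF of the symmetric (beta = 0, mu = 0) normal inverse Gaussian law."""
    return np.exp(delta * (alpha - np.sqrt(alpha**2 + t**2)))

def ecf_statistic(x, alpha, delta, a=1.0):
    n = len(x)
    t = np.linspace(-10.0, 10.0, 2001)
    w = np.exp(-a * t**2)                      # Gaussian weight keeps the integral finite
    diff2 = np.abs(ecf(t, x) - sym_nig_cf(t, alpha, delta))**2
    return n * np.sum(diff2 * w) * (t[1] - t[0])

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=300)             # heavy-tailed toy data
print(ecf_statistic(x, alpha=1.0, delta=1.0))
```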