A total of 10,000 search results were found (search time: 31 ms).
111.
In this article, a system consisting of n independent components, each having two dependent subcomponents (Ai, Bi), i = 1, …, n, is considered. The system functions if and only if both the system formed by the subcomponents A1, A2, …, An and the system formed by the subcomponents B1, B2, …, Bn work under certain structural rules. Expressions for the reliability and the mean time to failure of such systems are obtained. A sufficient condition for comparing two systems of bivariate components in terms of stochastic ordering is also presented.
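As a rough illustration of the setting, the Monte Carlo sketch below estimates the reliability and mean time to failure of such a system under assumptions the abstract does not specify: both subcomponent systems are taken to be series structures, and the dependence within each pair (Ai, Bi) is modelled by a Gaussian copula with exponential margins. All function names and parameter values are illustrative, not the authors' model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_system_lifetimes(n, rate_a, rate_b, rho, n_rep=100_000):
    """Simulate lifetimes of a system of n components, each carrying a
    dependent subcomponent pair (A_i, B_i); here the system is assumed to
    fail as soon as any A_i or any B_i fails (series-series structure)."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=(n_rep, n))  # (n_rep, n, 2)
    u = norm.cdf(z)                          # Gaussian copula -> dependent uniforms
    a = -np.log(1.0 - u[..., 0]) / rate_a    # exponential A-subcomponent lifetimes
    b = -np.log(1.0 - u[..., 1]) / rate_b    # exponential B-subcomponent lifetimes
    t_a = a.min(axis=1)                      # lifetime of the series system of A's
    t_b = b.min(axis=1)                      # lifetime of the series system of B's
    return np.minimum(t_a, t_b)              # system works iff both systems work

lifetimes = simulate_system_lifetimes(n=5, rate_a=1.0, rate_b=0.5, rho=0.6)
t0 = 0.2
print("estimated reliability R(t0):", (lifetimes > t0).mean())
print("estimated mean time to failure:", lifetimes.mean())
```

In a closed-form analysis the copula and the structural rules would enter the reliability expression directly; a simulation like this only serves as a sanity check on such expressions.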
112.
In regression analysis, it is assumed that the response (dependent variable) is normally distributed and that the errors are homoscedastic and uncorrelated. In practice, however, these assumptions are rarely satisfied by real data. To stabilize a heteroscedastic response variance, a log-transformation is generally suggested; the response distribution then moves closer to the normal, and the model fit improves. In practice, though, a seemingly suitable transformation may not stabilize the variance, and the response distribution may not reduce to the normal. The present article assumes that the response distribution is log-normal with compound autocorrelated errors. Under these conditions, estimation and testing of hypotheses regarding the regression parameters are derived. From a set of reduced data, we derive the best linear unbiased estimators of all the regression coefficients except the intercept, which is often unimportant in practice. The unknown correlation parameters are estimated. We also derive a test rule for testing any set of linear hypotheses on the unknown regression coefficients and develop confidence ellipsoids for a set of estimable functions of the regression coefficients. An index of fit is proposed for the fitted regression equation. A simulation study illustrates the results derived in this article.
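The sketch below shows the general recipe the abstract describes, on made-up data: log-transform the response and estimate the coefficients by generalized least squares under an assumed error covariance. The compound-symmetric (equicorrelated) covariance used here is only a stand-in for the paper's compound autocorrelated structure, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up design matrix (with intercept) and coefficients.
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 0.5, -0.3])

# Assumed error covariance on the log scale: compound symmetric (equicorrelated),
# sigma2 * ((1 - rho) * I + rho * J). This structure is only a stand-in.
sigma2, rho = 0.25, 0.4
V = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

eps = rng.multivariate_normal(np.zeros(n), V)
y = np.exp(X @ beta_true + eps)                 # log-normal response

# GLS on the log scale: beta_hat = (X' V^{-1} X)^{-1} X' V^{-1} log(y)
V_inv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ np.log(y))
print("GLS estimates on the log scale:", beta_hat)
```

With the covariance known, the GLS estimator is best linear unbiased on the log scale by Aitken's theorem; in the article the correlation parameters entering the covariance are themselves estimated.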
113.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis–Hastings algorithm to construct an estimator for models in which the full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator in latent variable models in which the observed responses are conditionally independent given the latent variables (conditional or local independence). This property is exploited in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the differences between the two approaches are pointed out. The paper closes with illustrations of the estimator on simulated and real-life data sets.
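A minimal sketch of the basic Chib–Jeliazkov construction (not the multi-block extension proposed in the article) is given below for a single-parameter random-walk Metropolis–Hastings sampler on a toy conjugate model, so the estimate can be checked against the exact marginal likelihood. The model, priors, and tuning constants are all illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy data and model: y_i ~ N(theta, sigma^2) with sigma known,
# prior theta ~ N(mu0, tau0^2); everything here is illustrative.
y = rng.normal(loc=1.5, scale=1.0, size=30)
sigma, mu0, tau0 = 1.0, 0.0, 2.0

def log_lik(theta):
    return norm.logpdf(y, loc=theta, scale=sigma).sum()

def log_prior(theta):
    return norm.logpdf(theta, loc=mu0, scale=tau0)

def log_post(theta):
    return log_lik(theta) + log_prior(theta)

def accept_prob(cur, prop):
    """MH acceptance probability for a symmetric (random-walk) proposal."""
    diff = log_post(prop) - log_post(cur)
    return 1.0 if diff >= 0 else float(np.exp(diff))

# Run a single-block random-walk Metropolis-Hastings chain.
s, M = 0.5, 5_000                      # proposal sd and chain length
chain = np.empty(M)
theta = y.mean()
for m in range(M):
    prop = rng.normal(theta, s)
    if rng.uniform() < accept_prob(theta, prop):
        theta = prop
    chain[m] = theta

# Chib-Jeliazkov estimate of the posterior ordinate at a high-density point:
# numerator averages alpha * proposal density over posterior draws,
# denominator averages alpha over draws from the proposal at theta_star.
theta_star = float(np.median(chain))
num = np.mean([accept_prob(t, theta_star) * norm.pdf(theta_star, loc=t, scale=s)
               for t in chain])
den = np.mean([accept_prob(theta_star, t) for t in rng.normal(theta_star, s, size=M)])
log_post_ordinate = np.log(num) - np.log(den)

# Marginal likelihood identity: log m(y) = log f(y|t*) + log pi(t*) - log pi(t*|y)
log_ml_hat = log_lik(theta_star) + log_prior(theta_star) - log_post_ordinate

# Exact answer for this conjugate model, for comparison.
n = y.size
tau_n2 = 1.0 / (n / sigma**2 + 1.0 / tau0**2)
mu_n = tau_n2 * (y.sum() / sigma**2 + mu0 / tau0**2)
log_ml_exact = (log_lik(mu_n) + log_prior(mu_n)
                - norm.logpdf(mu_n, loc=mu_n, scale=np.sqrt(tau_n2)))
print("Chib-Jeliazkov estimate:", log_ml_hat, "  exact:", log_ml_exact)
```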
114.
115.
Multivariate density estimation plays an important role in investigating the mechanism underlying high-dimensional data. This article describes a nonparametric Bayesian approach to the estimation of multivariate densities. A general procedure is proposed for constructing Feller priors for multivariate densities, and their theoretical properties as nonparametric priors are established. A blocked Gibbs sampling algorithm is devised to sample from the posterior of the multivariate density. A simulation study is conducted to evaluate the performance of the procedure.
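The Feller-prior construction and its sampler are specific to the article, so the sketch below only illustrates the blocked Gibbs idea on a simplified stand-in: a finite mixture of bivariate Gaussians with fixed, known component covariance, updated in three blocks (labels, means, weights). All priors, constants, and data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-d data from two well-separated Gaussians (illustrative only).
X = np.vstack([rng.normal([-2.0, 0.0], 0.7, size=(150, 2)),
               rng.normal([2.0, 1.0], 0.7, size=(150, 2))])
n, d = X.shape
K = 4                                        # number of mixture components
Sigma = 0.7 ** 2 * np.eye(d)                 # fixed, known component covariance
Sigma_inv = np.linalg.inv(Sigma)
m0, S0_inv = np.zeros(d), np.eye(d) / 10.0   # N(m0, S0) prior on each mean
dir_prior = np.ones(K)                       # Dirichlet prior on the weights

mu = rng.normal(size=(K, d))
w = np.full(K, 1.0 / K)

def log_kernel(pts, mean):
    """Log Gaussian kernel (common normalising constant omitted)."""
    diff = pts - mean
    return -0.5 * np.einsum("ij,jk,ik->i", diff, Sigma_inv, diff)

for it in range(500):
    # Block 1: component labels z_i given (mu, w).
    logp = np.log(w)[None, :] + np.column_stack([log_kernel(X, mu[k]) for k in range(K)])
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])

    # Block 2: component means given the labels (conjugate update, Sigma known).
    for k in range(K):
        Xk = X[z == k]
        post_prec = S0_inv + len(Xk) * Sigma_inv
        post_cov = np.linalg.inv(post_prec)
        post_mean = post_cov @ (S0_inv @ m0 + Sigma_inv @ Xk.sum(axis=0))
        mu[k] = rng.multivariate_normal(post_mean, post_cov)

    # Block 3: mixture weights given the labels.
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(dir_prior + counts)

# One posterior draw of the density at a grid point.
x0 = np.array([0.0, 0.5])
norm_const = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
dens = sum(w[k] * np.exp(log_kernel(x0[None, :], mu[k]))[0] for k in range(K)) / norm_const
print("posterior draw of the density at", x0, ":", dens)
```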
116.
This article suggests an efficient method of estimating a rare sensitive attribute, assumed to follow a Poisson distribution, by using a three-stage unrelated randomized response model instead of the model of Land et al. (2011) when the population consists of clusters of different sizes and the clusters are selected by probability proportional to size (PPS) sampling. The rare sensitive parameter is estimated under PPS sampling and under equal-probability two-stage sampling, both when the parameter of the rare unrelated attribute is assumed known and when it is unknown.

We extend this method to a stratified population by applying stratified PPS sampling and stratified equal-probability two-stage sampling. An empirical study is carried out to show the efficiency of the two proposed methods when the parameter of the rare unrelated attribute is assumed known and when it is unknown.
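The three-stage design and its PPS and two-stage cluster estimators are specific to the article; the sketch below only illustrates the basic unrelated-question randomized response mechanism such designs build on, with the unrelated attribute's prevalence assumed known. All probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 20_000
pi_sensitive = 0.02      # true (rare) prevalence of the sensitive attribute
pi_unrelated = 0.30      # known prevalence of the innocuous unrelated attribute
p = 0.7                  # probability a respondent answers the sensitive question

# Each respondent privately answers either the sensitive or the unrelated question.
asks_sensitive = rng.uniform(size=n) < p
has_sensitive = rng.uniform(size=n) < pi_sensitive
has_unrelated = rng.uniform(size=n) < pi_unrelated
answer_yes = np.where(asks_sensitive, has_sensitive, has_unrelated)

# P(yes) = p*pi_S + (1-p)*pi_U, so pi_S is estimated by inverting the mixture.
lam_hat = answer_yes.mean()
pi_s_hat = (lam_hat - (1 - p) * pi_unrelated) / p
print("estimated sensitive prevalence:", pi_s_hat, " (true:", pi_sensitive, ")")
```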
117.
This paper revisits two bivariate Pareto models for fitting competing risks data. The first is the Frank copula model, and the second is a bivariate Pareto model introduced by Sankaran and Nair (1993). We discuss the identifiability issues of these models and develop maximum likelihood estimation procedures, including their computational algorithms and model-diagnostic procedures. Simulations are conducted to examine the performance of the maximum likelihood estimation. Real data are analyzed for illustration.
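As a sketch of the data structure these models are fitted to (not of the authors' estimation or diagnostic procedures), the code below generates competing-risks data from a Frank copula with Pareto (Lomax) margins by conditional inversion; the dependence and marginal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def frank_copula_pairs(theta, size):
    """Sample (u1, u2) from a Frank copula by conditional inversion:
    u2 = -(1/theta) * log(1 + w*(e^{-theta}-1) / (e^{-theta*u1}*(1-w) + w))."""
    u1 = rng.uniform(size=size)
    w = rng.uniform(size=size)
    a = np.exp(-theta * u1)
    u2 = -np.log1p(w * np.expm1(-theta) / (a * (1 - w) + w)) / theta
    return u1, u2

def pareto_quantile(u, alpha, scale):
    """Quantile of the Lomax (Pareto II) distribution, F(t) = 1 - (1 + t/scale)^(-alpha)."""
    return scale * ((1 - u) ** (-1.0 / alpha) - 1.0)

theta = 5.0                                      # Frank dependence parameter
u1, u2 = frank_copula_pairs(theta, size=10_000)
t1 = pareto_quantile(u1, alpha=2.0, scale=1.0)   # latent time to failure from cause 1
t2 = pareto_quantile(u2, alpha=3.0, scale=1.5)   # latent time to failure from cause 2

obs_time = np.minimum(t1, t2)                    # only the first failure is observed
cause = np.where(t1 <= t2, 1, 2)                 # ...together with its cause
print("share failing from cause 1:", (cause == 1).mean())
print("median observed failure time:", np.median(obs_time))
```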
118.
In many industrial quality control experiments and destructive stress tests, the only available data are successive minima (or maxima), i.e., record-breaking data. Two sampling schemes are used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined and the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if either, is more efficient. Since the two schemes are asymptotically equivalent, the simulations were carried out for small to moderately sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, if the estimators were well behaved, there was no significant difference between the mean square errors of the estimates under the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent, whereas the estimates obtained from inverse sampling were always consistent. Moreover, for moderately sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
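The simulation below sketches the two sampling schemes for lower records from a Weibull parent distribution; the parent, its parameters, and the sample sizes are illustrative and do not reproduce the paper's maximum likelihood comparison.

```python
import numpy as np

rng = np.random.default_rng(6)

def records_random_sampling(n, shape, scale):
    """Fixed total sample size n; the number of lower records is random."""
    x = rng.weibull(shape, size=n) * scale
    prev_min = np.minimum.accumulate(np.concatenate(([np.inf], x[:-1])))
    return x[x < prev_min]                 # keep only the successive minima

def records_inverse_sampling(k, shape, scale):
    """Fixed number of lower records k; the total sample size is random."""
    records, current_min, n_drawn = [], np.inf, 0
    while len(records) < k:
        x = rng.weibull(shape) * scale
        n_drawn += 1
        if x < current_min:
            current_min = x
            records.append(x)
    return np.array(records), n_drawn

rec_rand = records_random_sampling(n=200, shape=1.5, scale=2.0)
rec_inv, n_needed = records_inverse_sampling(k=len(rec_rand), shape=1.5, scale=2.0)
print("random sampling: 200 draws ->", len(rec_rand), "records")
print("inverse sampling:", n_needed, "draws to reach the same number of records")
```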
119.
When a spatial point process model is fitted to spatial point pattern data using standard software, the parameter estimates are typically biased. Contrary to folklore, the bias does not reflect weaknesses of the underlying mathematical methods, but is mainly due to the effects of discretization of the spatial domain. We investigate two approaches to correcting the bias: a Newton–Raphson-type correction and Richardson extrapolation. In simulation experiments, Richardson extrapolation performs best.
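Richardson extrapolation in its generic form is sketched below: if an estimate computed at discretization level h has bias of leading order h, estimates at h and h/2 can be combined to cancel that term. The toy target (a forward-difference derivative) is illustrative and is not a point-process fit.

```python
import numpy as np

def estimate(h):
    """Toy estimator with O(h) bias: forward-difference derivative of exp at 0."""
    return (np.exp(h) - np.exp(0.0)) / h      # true value is exp'(0) = 1

h = 0.1
est_h = estimate(h)
est_h2 = estimate(h / 2)

# First-order Richardson extrapolation: theta_R = 2*theta(h/2) - theta(h)
est_richardson = 2 * est_h2 - est_h

print("estimate at h:      ", est_h)          # bias roughly h/2
print("estimate at h/2:    ", est_h2)         # bias roughly h/4
print("Richardson combined:", est_richardson) # leading bias term cancelled
```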
120.