Results by access type: subscription full text 5,241; free 227; free in China 52.
Results by discipline: Management 281; Labor Science 15; Ethnology 199; Demography 82; Collected Works 1,454; Theory and Methodology 257; General 2,569; Sociology 218; Statistics 445.
Results by year: 2024: 2; 2023: 25; 2022: 72; 2021: 67; 2020: 71; 2019: 63; 2018: 90; 2017: 124; 2016: 87; 2015: 180; 2014: 177; 2013: 312; 2012: 289; 2011: 400; 2010: 402; 2009: 422; 2008: 324; 2007: 384; 2006: 393; 2005: 367; 2004: 177; 2003: 160; 2002: 215; 2001: 208; 2000: 96; 1999: 78; 1998: 54; 1997: 50; 1996: 46; 1995: 36; 1994: 30; 1993: 26; 1992: 26; 1991: 16; 1990: 13; 1989: 8; 1988: 10; 1987: 7; 1986: 2; 1985: 6; 1984: 4; 1983: 1.
5,520 results found (search time: 15 ms).
41.
In high-dimensional classification problems, a two-stage method (first reducing the dimension of the predictors and then applying a classification method) is a natural solution and has been widely used in many fields. The consistency of the two-stage method is an important issue, since errors induced by the dimension-reduction step inevitably affect the subsequent classifier. Boosting, an effective classification method, has been widely used in practice. In this paper, we study the consistency of a two-stage method, the dimension-reduction-based boosting algorithm (DRB), for classification. Theoretical results show that a Lipschitz condition on the base learner is required to guarantee the consistency of DRB. These theoretical findings provide a useful guideline for applications.
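A minimal sketch of the two-stage idea, assuming PCA as the dimension-reduction step and scikit-learn's GradientBoostingClassifier as the boosting learner on simulated data; this illustrates the pipeline only, not the paper's DRB algorithm or its consistency conditions.

```python
# Minimal two-stage sketch: dimension reduction followed by boosting.
# PCA and GradientBoostingClassifier stand in for the components studied in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p, k = 500, 200, 10                      # high-dimensional predictors
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:k] = 1.0          # only the first k predictors matter
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

two_stage = make_pipeline(
    PCA(n_components=k),                         # stage 1: reduce the predictor dimension
    GradientBoostingClassifier(random_state=0),  # stage 2: boosting on the reduced features
)
two_stage.fit(X_tr, y_tr)
print("test accuracy:", two_stage.score(X_te, y_te))
```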
42.
The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study. The Canadian Journal of Statistics 41: 237–256; 2013. © 2013 Statistical Society of Canada
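A simplified sketch of the perturbation-resampling idea, assuming uncensored sojourn times and an empirical distribution function as the target: each resample reweights the observations with i.i.d. Exp(1) weights, and the band half-width is the 0.95 quantile of the sup deviation. The paper's construction additionally handles right censoring and the semi-Markov kernel.

```python
# Simplified perturbation-resampling sketch: a uniform confidence band for an
# empirical sojourn-time distribution F(t), using i.i.d. Exp(1) weights.
# (Illustrates only the resampling idea, not the paper's censored-data procedure.)
import numpy as np

rng = np.random.default_rng(1)
times = rng.gamma(shape=2.0, scale=1.0, size=300)   # hypothetical sojourn times
grid = np.linspace(0, times.max(), 200)

def weighted_ecdf(t, w, grid):
    w = w / w.sum()
    return np.array([(w * (t <= g)).sum() for g in grid])

F_hat = weighted_ecdf(times, np.ones_like(times), grid)

B = 500
sup_dev = np.empty(B)
for b in range(B):
    w = rng.exponential(scale=1.0, size=times.size)  # perturbation weights
    F_star = weighted_ecdf(times, w, grid)
    sup_dev[b] = np.abs(F_star - F_hat).max()

half_width = np.quantile(sup_dev, 0.95)
lower = np.clip(F_hat - half_width, 0, 1)
upper = np.clip(F_hat + half_width, 0, 1)
print("95% band half-width:", round(float(half_width), 4))
```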
43.
In this paper, the generalized varying-coefficient single-index model is discussed based on penalized likelihood. All the unknown functions are fitted by penalized splines. The estimates of the unknown parameters and the unknown coefficient functions are obtained, and the estimation approach is fast and computationally stable. Under some mild conditions, the consistency and asymptotic normality of the resulting estimators are established. Two simulation studies are carried out to illustrate the performance of the estimates. An application of the model to the Hong Kong environmental data further demonstrates the potential of the proposed modelling procedures.
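A minimal penalized-spline sketch, assuming a single unknown univariate function, a truncated-linear basis, and a fixed ridge penalty on the knot coefficients; the varying-coefficient single-index structure and the penalized-likelihood machinery of the paper are not reproduced here.

```python
# Minimal penalized-spline sketch (truncated linear basis, ridge penalty on
# the knot coefficients).  Only shows how one unknown function is fitted.
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)   # toy data

knots = np.linspace(0, 1, 22)[1:-1]                  # 20 interior knots
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))         # penalize knot terms only

lam = 1.0                                            # smoothing parameter
theta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)  # penalized least squares
fitted = B @ theta
print("residual SD:", np.round(np.std(y - fitted), 3))
```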
44.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses some efficiency compared to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function by using a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE, which assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iterated weighted least squares estimate, which provides some insight into the adaptiveness of the KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can have a substantial efficiency gain for non-normal errors. Moreover, the efficiency gain can be achieved even for small sample sizes.
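A hedged sketch of the kernel-density-based idea, assuming a Gaussian kernel, a rule-of-thumb bandwidth, and residuals fixed at the initial least-squares fit; the E-step weights and the least-squares M-step show how the estimate can be read as iterated least squares, though these details need not match the article's algorithm.

```python
# Hedged KDRE sketch: start from OLS, form a kernel estimate of the error
# density from the OLS residuals, then run an EM-type loop whose M-step is an
# ordinary least-squares fit to an adjusted response.
import numpy as np

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
err = rng.standard_t(df=3, size=n)                    # heavy-tailed (non-normal) errors
y = X @ beta_true + err

beta = np.linalg.lstsq(X, y, rcond=None)[0]           # initial LSE
e = y - X @ beta                                      # residuals fixed at the initial fit
h = 1.06 * e.std() * n ** (-1 / 5)                    # rule-of-thumb bandwidth

for _ in range(50):                                   # EM-type iterations
    r = y - X @ beta
    K = np.exp(-0.5 * ((r[:, None] - e[None, :]) / h) ** 2)   # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)              # E-step: posterior weights
    y_adj = y - P @ e                                 # M-step target
    beta = np.linalg.lstsq(X, y_adj, rcond=None)[0]   # M-step: least squares

print("KDRE estimate:", beta.round(3), "(true [1. 2.])")
```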
45.
In statistical hypothesis testing, a p-value is expected to be uniformly distributed on the interval (0, 1) under the null hypothesis. However, some p-values, such as the generalized p-value and the posterior predictive p-value, cannot be assured of this property. In this paper, we propose an adaptive p-value calibration approach and show that the calibrated p-value is asymptotically uniformly distributed. For the Behrens–Fisher problem and the goodness-of-fit test under a normal model, the calibrated p-values are constructed and their behavior is evaluated numerically. Simulations show that the calibrated p-values are superior to the original ones.
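A generic calibration sketch, assuming the null distribution of the raw p-value can be simulated: the observed p-value is mapped through the Monte Carlo estimate of its null CDF, so the calibrated value is approximately uniform. This shows the underlying idea only, not the article's adaptive calibration.

```python
# Generic p-value calibration by the Monte Carlo null distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def raw_pvalue(x, y):
    """A p-value whose null distribution we do not trust to be uniform."""
    return stats.ttest_ind(x, y, equal_var=True).pvalue   # wrongly assumes equal variances

def simulate_null_pvalues(n1, n2, s1, s2, B=2000):
    p = np.empty(B)
    for b in range(B):
        x = rng.normal(0, s1, n1)          # Behrens-Fisher null: equal means,
        y = rng.normal(0, s2, n2)          # unequal variances
        p[b] = raw_pvalue(x, y)
    return np.sort(p)

null_p = simulate_null_pvalues(n1=10, n2=40, s1=1.0, s2=3.0)

def calibrate(p_obs, null_p):
    """Map p_obs through the estimated null CDF of the p-value."""
    return np.searchsorted(null_p, p_obs, side="right") / null_p.size

x_obs = rng.normal(0, 1.0, 10)
y_obs = rng.normal(0, 3.0, 40)
p_raw = raw_pvalue(x_obs, y_obs)
print("raw p:", round(p_raw, 3), " calibrated p:", round(calibrate(p_raw, null_p), 3))
```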
46.
In recent years, the analysis of interval-censored failure time data has attracted a great deal of attention; such data arise in many fields, including demographic studies, economic and financial studies, epidemiological studies, the social sciences, and tumorigenicity experiments. This is especially the case in medical studies such as clinical trials. In this article, we discuss regression analysis of one type of such data, Case I interval-censored data, in the presence of left truncation. For this problem, the additive hazards model is employed and the maximum likelihood method is applied for estimation of the unknown parameters. In particular, we adopt a sieve estimation approach that approximates the baseline cumulative hazard function by linear functions. The resulting estimates of the regression parameters are shown to be consistent, efficient, and asymptotically normal. An illustrative example is provided.
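A hedged sketch of sieve maximum likelihood for current-status (Case I interval-censored) data under an additive hazards model, assuming simulated data, a 10-interval piecewise-linear baseline cumulative hazard, and no left truncation; the grid, optimizer, and parametrization are illustrative choices rather than the article's.

```python
# Hedged sieve-MLE sketch for current-status data under additive hazards:
#   Lambda(t | Z) = Lambda0(t) + beta * Z * t,
# with Lambda0 approximated by a nondecreasing piecewise-linear function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 600
Z = rng.integers(0, 2, n).astype(float)
beta_true = 0.5
# True model: Lambda0(t) = t (unit exponential baseline), hazard 1 + beta*Z.
T = rng.exponential(1.0 / (1.0 + beta_true * Z))      # event times
C = rng.uniform(0, 2, n)                              # examination times
delta = (T <= C).astype(float)                        # current-status indicator

grid = np.linspace(0, 2, 11)                          # sieve knots for Lambda0

def cum_hazard0(log_increments, t):
    """Piecewise-linear Lambda0 built from nonnegative increments."""
    inc = np.exp(log_increments)                      # enforce monotonicity
    values = np.concatenate([[0.0], np.cumsum(inc)])  # Lambda0 at the knots
    return np.interp(t, grid, values)

def neg_loglik(theta):
    beta, inc = theta[0], theta[1:]
    cum = cum_hazard0(inc, C) + beta * Z * C
    cum = np.clip(cum, 1e-10, None)                   # keep the hazard admissible
    S = np.exp(-cum)
    return -np.sum(delta * np.log1p(-S) + (1 - delta) * np.log(S))

theta0 = np.concatenate([[0.0], np.full(len(grid) - 1, np.log(0.2))])
fit = minimize(neg_loglik, theta0, method="L-BFGS-B")
print("estimated beta:", round(float(fit.x[0]), 3), "(true 0.5)")
```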
47.
This article discusses sampling plans, that is, the allocation of sampling units, for computing tolerance limits in a balanced one-way random-effects model. The expected width of the tolerance interval is derived and used as the basis for comparing different sampling plans. A well-known cost function and examples are used to facilitate the discussion.
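A simulation sketch comparing sampling plans with the same total sample size, assuming a naive plug-in factor z_{(1+p)/2} applied to the estimated total standard deviation; the article instead derives the expected width of a proper tolerance interval, so this only illustrates how allocations can be compared by average interval width.

```python
# Compare allocations (I groups x J units, same total n) in a balanced one-way
# random-effects model via the simulated average width of a naive plug-in
# interval  Ybar.. +/- z_{(1+p)/2} * sqrt(sigma_alpha^2_hat + sigma_e^2_hat).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma_alpha, sigma_e, p_content = 1.0, 1.0, 0.90
z = stats.norm.ppf((1 + p_content) / 2)

def avg_width(I, J, reps=2000):
    widths = np.empty(reps)
    for r in range(reps):
        a = rng.normal(0, sigma_alpha, size=(I, 1))           # random group effects
        y = a + rng.normal(0, sigma_e, size=(I, J))           # observations
        gm = y.mean(axis=1, keepdims=True)                    # group means
        msb = J * ((gm - y.mean()) ** 2).sum() / (I - 1)      # between-group mean square
        mse = ((y - gm) ** 2).sum() / (I * (J - 1))           # within-group mean square
        var_hat = max(0.0, (msb - mse) / J) + mse             # sigma_alpha^2 + sigma_e^2
        widths[r] = 2 * z * np.sqrt(var_hat)
    return widths.mean()

for I, J in [(6, 10), (10, 6), (15, 4), (20, 3)]:             # same total n = 60
    print(f"I={I:2d}, J={J:2d}: average width = {avg_width(I, J):.3f}")
```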
48.
This article deals with the multiple-outlier exponential model. The likelihood ratio order between m-spacings of the combined sample is developed; some of the results extend the conclusions on simple spacings in Wen, S., Lu, Q., and Hu, T. (2007), Likelihood ratio order of spacings of heterogeneous exponential random variables, J. Multivariate Anal. 98: 743–756.
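A Monte Carlo sketch of the multiple-outlier exponential setting, assuming k outlier observations with a larger rate; it compares quantiles of one m-spacing against the homogeneous case, which illustrates, but does not establish, the likelihood-ratio ordering studied here.

```python
# Simulate m-spacings from a multiple-outlier exponential sample (k observations
# with rate lam_star, the rest with rate lam) and from a homogeneous sample.
import numpy as np

rng = np.random.default_rng(7)
n, k, m = 10, 3, 2
lam, lam_star = 1.0, 4.0
reps = 20000

def m_spacing(sample, i, m):
    """D_i^(m) = X_(i+m) - X_(i), a difference of order statistics m apart."""
    x = np.sort(sample)
    return x[i + m] - x[i]

def draw(outliers):
    rates = np.full(n, lam)
    if outliers:
        rates[:k] = lam_star
    return rng.exponential(1.0 / rates)

d_outlier = np.array([m_spacing(draw(True),  2, m) for _ in range(reps)])
d_homog   = np.array([m_spacing(draw(False), 2, m) for _ in range(reps)])

for q in (0.25, 0.50, 0.75, 0.90):
    print(f"q={q:.2f}: outlier {np.quantile(d_outlier, q):.3f}  "
          f"homogeneous {np.quantile(d_homog, q):.3f}")
```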
49.
50.
The great fire that struck the Oakland area in October 1991 killed 25 people and left more than 6,000 homeless. Survivors expressed their understanding of the disaster in various ways, foregrounding binary structures such as nature and culture, order and disorder, time and space, and the traditional oppositions of male and female, life and death; in particular, the opposition between the mother as a symbol of nature and the monster as a symbol of disaster displayed the contrast between natural and unnatural forces. To explain why the disaster departed only to return, people even concluded that its occurrence was predestined and cyclical, thereby affirming God's control over natural disasters; in this view, the destroyer is also the creator. The Oakland fire involved technological factors, such as the improper use of building materials and the irrational planning of residential areas, and can therefore be partly classified as a technological disaster. Because a technological disaster arises in a cultural arena rather than from nature itself, there are very few plausible disaster images or metaphors that can satisfy people's psychological needs; consequently, a technological disaster does not evolve into myth but persists as permanent history. Through this process, the disaster was redefined as "creative destruction" and thereby endowed with an awe-inspiring beauty. Although not all cultures symbolize disaster in these ways, symbolism will be part of how every culture describes and explains disaster. The construction of these symbols relieves or dissolves victims' fear of the disaster, which also partly explains why residents of disaster-prone areas are mostly unwilling to relocate.