31.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach the estimates from plug-in empirical Bayes methods when the number of comparisons is large, and can come close to the usual confidence sets when the number of comparisons is sufficiently small.
32.
A large-scale study, in which two million random Voronoi polygons (with respect to a homogeneous Poisson point process) were generated and measured, is described. The polygon characteristics recorded are the number of sides (or vertices), perimeter, area, and interior angles. A notable feature is the efficient “quantile” method of replicating Poisson-type random structures, which, it is hoped, may find useful application elsewhere.
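The starting point of such a study, generating realizations of a homogeneous Poisson point process on a window, can be sketched as follows. This is a minimal standard-library sketch (the rate and unit-square window are illustrative choices), not the article's "quantile" replication method: the point count is Poisson with mean rate × area, and given the count the points are independently uniform.

```python
import math
import random

def poisson_point_process(rate, width=1.0, height=1.0, rng=random):
    """Sample a homogeneous Poisson point process on a rectangle.

    The number of points is Poisson(rate * area); conditional on the
    count, points are i.i.d. uniform over the rectangle.
    """
    area = width * height
    # Knuth-style Poisson draw by multiplying uniforms (fine for small means).
    L, k, p = math.exp(-rate * area), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k += 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(k)]

random.seed(1)
pts = poisson_point_process(rate=50)
print(len(pts), all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))
```

The Voronoi tessellation of the sampled points (and the measurement of cell sides, perimeters, and areas) would then be computed with a computational-geometry library such as `scipy.spatial.Voronoi`.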
33.
In analyzing data from unreplicated factorial designs, the half-normal probability plot is commonly used to screen for the ‘vital few’ effects. Recently, many formal methods have been proposed to overcome the subjectivity of this plot. Lawson et al. (1998) (hereafter denoted LGB) suggested a hybrid method based on the half-normal probability plot that blends the Lenth (1989) and Loh (1992) methods. The method consists of fitting a simple least-squares line to the inliers, which are determined by the Lenth method. Effects exceeding the prediction limits based on the fitted line are candidates for the vital few effects. To improve the accuracy of partitioning the effects into inliers and outliers, we propose a modified LGB method (hereafter denoted the Mod_LGB method), in which more outliers can be classified by using both Carling's modification of the box plot (Carling, 2000) and the Lenth method. If no outlier exists, or there is a wide range in the inliers as determined by the Lenth method, more outliers can be found by the Carling method. A simulation study is conducted in unreplicated 2⁴ designs with the number of active effects ranging from 1 to 6 to compare the efficiency of the Lenth method, the original LGB method, and the proposed modified version of the LGB method.
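The Lenth (1989) step that both LGB and Mod_LGB build on can be sketched as follows. The pseudo standard error (PSE) is 1.5 times the median absolute effect, recomputed after trimming effects larger than 2.5 times an initial estimate; the critical constant `t_crit` below is an assumed illustrative value for 15 effects taken from simulated critical-value tables, not a quantity from the article.

```python
import statistics

def lenth_pse(effects):
    """Lenth's pseudo standard error for unreplicated factorial effects."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * statistics.median(abs_e)
    # Trim likely-active effects, then re-estimate the scale.
    trimmed = [a for a in abs_e if a < 2.5 * s0] or abs_e
    return 1.5 * statistics.median(trimmed)

def lenth_margin(effects, t_crit=2.16):
    # t_crit ~ 2.16 approximates the 5% individual-error-rate critical
    # value for m = 15 effects (an assumed illustrative constant; the
    # exact values come from simulated tables).
    return t_crit * lenth_pse(effects)

# 15 effects from a 2^4 design: two large "active" effects among noise.
effects = [21.9, -14.5, 1.2, -0.6, 0.8, -1.1, 0.3, 0.9,
           -0.4, 1.5, -0.7, 0.2, 1.0, -1.3, 0.5]
me = lenth_margin(effects)
print(sorted(e for e in effects if abs(e) > me))
```

Effects exceeding the margin `me` are the outlier candidates; LGB then fits a least-squares line to the remaining inliers on the half-normal plot.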
34.
Sample selection and attrition are inherent in a range of treatment evaluation problems, such as estimating the returns to schooling or training. Conventional estimators tackling selection bias typically rely on restrictive functional form assumptions that are unlikely to hold in reality. This paper shows identification of average and quantile treatment effects in the presence of a double selection problem: (i) selection into a subpopulation (e.g., the employed), which is selection on unobservables, and (ii) selection into a binary treatment (e.g., training), which is selection on observables. Identification is based on weighting observations by the inverse of a nested propensity score that characterizes each selection probability. Weighting estimators based on parametric propensity score models are applied to female labor market data to estimate the returns to education.
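The weighting logic can be sketched as follows, assuming the two propensity scores have already been estimated. Each observed outcome is reweighted by the inverse of its selection probability and its treatment (or non-treatment) probability; the record layout and names here are illustrative, not the paper's notation.

```python
def ipw_ate(records):
    """Average treatment effect via nested inverse probability weights.

    Each record is (y, d, p_treat, p_select): outcome y, binary treatment
    d, p_treat = P(D=1 | X), p_select = P(observed | X, D).
    """
    num1 = den1 = num0 = den0 = 0.0
    for y, d, p_treat, p_select in records:
        w = 1.0 / p_select  # undo selective attrition
        if d == 1:
            w /= p_treat  # undo treatment selection on observables
            num1 += w * y; den1 += w
        else:
            w /= (1.0 - p_treat)
            num0 += w * y; den0 += w
    # Normalized (Hajek-style) weighted means of treated and controls.
    return num1 / den1 - num0 / den0

data = [(10.0, 1, 0.5, 0.8), (8.0, 1, 0.5, 0.4),
        (6.0, 0, 0.5, 0.8), (5.0, 0, 0.5, 0.4)]
print(ipw_ate(data))
```

Observations with a low selection probability (here, the second and fourth records) receive larger weights, compensating for similar units that dropped out of the observed sample.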
35.
Truncation is a known feature of bone marrow transplant (BMT) registry data, for which the survival time of a leukemia patient is left truncated by the waiting time to transplant. It was recently noted that a longer waiting time was linked to poorer survival. A straightforward solution is a Cox model on the survival time with the waiting time as both truncation variable and covariate. The Cox model should also include other recognized risk factors as covariates. In this article, we focus on estimating the distribution function of waiting time and the probability of selection under the aforementioned Cox model.
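The essential consequence of left truncation is that a subject enters the risk set only after its truncation (waiting) time. A stylized sketch of that risk-set logic, using a simple product-limit survival estimator rather than the article's Cox model, is:

```python
def truncated_product_limit(data):
    """Product-limit survival estimate under left truncation.

    data: list of (entry_time, event_time); each subject is observed
    only if event_time > entry_time, and joins the risk set at
    entry_time. A stylized illustration, not the article's method.
    """
    times = sorted({t for _, t in data})
    surv, s = [], 1.0
    for t in times:
        # Only subjects already "entered" and not yet failed are at risk.
        at_risk = sum(1 for e, x in data if e < t <= x)
        events = sum(1 for _, x in data if x == t)
        if at_risk > 0:
            s *= 1.0 - events / at_risk
        surv.append((t, s))
    return surv

# (waiting time to transplant, survival time), both from time origin.
data = [(0.0, 2.0), (0.0, 3.0), (1.0, 4.0), (2.5, 5.0)]
for t, s in truncated_product_limit(data):
    print(t, round(s, 4))
```

Ignoring the entry times (treating everyone as at risk from time zero) would overstate early risk sets and bias the survival curve, which is the bias the truncation adjustment removes.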
36.
In this paper, we propose a discrete-time risk model with the claim number following an integer-valued autoregressive conditional heteroscedasticity (ARCH) process with Poisson deviates. In this model, the current claim number depends on the previous observations. Within this framework, the equation for finding the adjustment coefficient is derived. Numerical studies are also carried out to examine the impact of the Poisson ARCH dependence structure on the ruin probability.
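The dependence structure can be illustrated by simulation: the period-t claim number is Poisson with a mean that is linear in the previous count, N_t ~ Poisson(a0 + a1·N_{t-1}). The sketch below estimates a finite-horizon ruin probability by Monte Carlo under assumed unit claim sizes and illustrative parameters; the article instead works with the adjustment coefficient analytically.

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth-style Poisson sampler (adequate for small means).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ruin_probability(u, c, a0, a1, horizon, n_paths, seed=0):
    """Monte Carlo finite-horizon ruin probability when the per-period
    claim number follows a Poisson INARCH(1): N_t ~ Poisson(a0 + a1*N_{t-1}).
    Unit claim sizes and all parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        surplus, n_prev = u, 0
        for _ in range(horizon):
            n_t = poisson_draw(a0 + a1 * n_prev, rng)
            surplus += c - n_t  # premium in, claims out
            n_prev = n_t
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# Stationary mean claim count a0/(1-a1) ~ 1.43 < premium c = 2,
# so the loading is positive and ruin is not certain.
print(ruin_probability(u=5, c=2, a0=1.0, a1=0.3, horizon=50, n_paths=2000))
```

Raising `a1` strengthens the serial dependence of claim counts, which is exactly the effect on the ruin probability that the article's numerical studies examine.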
37.
38.
The ability to work at older ages depends on health and education, both of which accumulate starting very early in life. We assess how childhood disadvantages combine with education to affect working and health trajectories. Applying multistate period life tables to data from the Health and Retirement Study (HRS) for the period 2008–2014, we estimate how the residual life expectancy at age 50 is distributed over years of work and of disability, by number of childhood disadvantages, gender, and race/ethnicity. Our findings indicate that the number of childhood disadvantages is negatively associated with work and positively associated with disability, irrespective of gender and race/ethnicity. Childhood disadvantages intersect with low education, resulting in shorter lives and redistributing life years from work to disability. Among the highly educated, health and work differences between childhood-disadvantage groups are small. Combining multistate models and inverse probability weighting, we show that the return to high education is greater among the most disadvantaged.
39.
In recent years, various types of terrorist attacks have occurred, causing catastrophes worldwide. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happen most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBA) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011: it rose by 16.0% over a two-month period to reach its peak value, and remains 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation is found in the conditional probability process of armed assaults. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014.
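The two ingredients of such a model can be sketched simply: the conditional probability series (bombings as a share of all attacks per month), and a pulse intervention with geometric decay of the standard transfer-function form. All numeric values below are illustrative, not the article's fitted parameters.

```python
def conditional_probability(bombings, totals):
    """Monthly conditional probability of a bombing given an attack."""
    return [b / t if t else 0.0 for b, t in zip(bombings, totals)]

def intervention_effect(omega, delta, onset, length):
    """Pulse intervention with geometric decay: omega * delta**(t - onset)
    for t >= onset, else 0. A standard transfer-function form; the
    parameter values used below are illustrative."""
    return [omega * delta ** (t - onset) if t >= onset else 0.0
            for t in range(length)]

# A flat baseline plus a decaying jump of the kind fitted around late 2011.
base = [0.40] * 12
effect = intervention_effect(omega=0.16, delta=0.6, onset=6, length=12)
cpba = [b + e for b, e in zip(base, effect)]
print([round(v, 3) for v in cpba])
```

The jump of `omega` at the onset month decays geometrically at rate `delta`, mirroring the described pattern of a sharp rise followed by a partial return toward the pre-intervention level.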
40.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study among these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for the commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
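The comparison framework the abstract describes, estimating coverage probability by Monte Carlo under inverse binomial sampling, can be sketched as follows. The interval shown is a textbook Wald interval using the information-based variance p²(1-p)/r; it stands in for the paper's seven estimators, whose formulas are not reproduced here.

```python
import math
import random

def nb_trials(r, p, rng):
    """Number of Bernoulli(p) trials needed to observe r successes."""
    n = successes = 0
    while successes < r:
        n += 1
        if rng.random() < p:
            successes += 1
    return n

def wald_ci(r, n, z=1.96):
    """Wald interval for p under inverse binomial sampling, using the
    Fisher-information variance p^2 * (1 - p) / r (a textbook form)."""
    p_hat = r / n
    half = z * math.sqrt(p_hat * p_hat * (1 - p_hat) / r)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def coverage(p_true, r, reps=2000, seed=0):
    """Monte Carlo coverage probability of the Wald interval."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n = nb_trials(r, p_true, rng)
        lo, hi = wald_ci(r, n)
        hits += lo <= p_true <= hi
    return hits / reps

print(coverage(p_true=0.3, r=20))
```

Repeating this over a grid of `p_true` and `r` values, and recording average interval widths alongside coverage, reproduces the kind of comparison the paper reports for its seven estimators.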
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号