By access type:
  Full text (fee-based)   2,035 articles
  Free   62 articles
  Free (domestic)   21 articles
By subject (number of articles):
  Management   230
  Ethnology   2
  Demography   24
  Collected works   26
  Theory and methodology   89
  General   301
  Sociology   9
  Statistics   1,437
By publication year (number of articles):
  2023: 11    2022: 11    2021: 18    2020: 47    2019: 69
  2018: 76    2017: 157   2016: 47    2015: 60    2014: 59
  2013: 503   2012: 168   2011: 53    2010: 63    2009: 63
  2008: 73    2007: 76    2006: 68    2005: 56    2004: 33
  2003: 42    2002: 42    2001: 41    2000: 21    1999: 32
  1998: 19    1997: 27    1996: 17    1995: 22    1994: 17
  1993: 17    1992: 24    1991: 11    1990: 4     1989: 5
  1988: 12    1987: 6     1986: 7     1985: 8     1984: 5
  1983: 7     1982: 2     1981: 5     1980: 4     1979: 4
  1978: 1     1977: 3     1976: 1     1975: 1
A total of 2,118 search results (search time: 421 ms).
51.
The ability to work at older ages depends on health and education, both of which accumulate starting very early in life. We assess how childhood disadvantages combine with education to affect working and health trajectories. Applying multistate period life tables to data from the Health and Retirement Study (HRS) for the period 2008–2014, we estimate how the residual life expectancy at age 50 is distributed between years of work and years with disability, by number of childhood disadvantages, gender, and race/ethnicity. Our findings indicate that the number of childhood disadvantages is negatively associated with work and positively associated with disability, irrespective of gender and race/ethnicity. Childhood disadvantages intersect with low education, resulting in shorter lives and redistributing life years from work to disability. Among the highly educated, health and work differences across childhood-disadvantage groups are small. Combining multistate models and inverse probability weighting, we show that the return to high education is greater among the most disadvantaged.
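As a rough illustration of the multistate period life table calculation referred to above, the sketch below computes expected years spent working and disabled after age 50 from a hypothetical annual transition matrix over the states working, disabled, and dead. The transition probabilities, starting state, and age range are invented for illustration and are not the HRS-based estimates used in the study.

```python
import numpy as np

# States: 0 = working, 1 = disabled, 2 = dead (absorbing).
# Hypothetical, age-invariant annual transition probabilities (NOT the HRS estimates).
P = np.array([
    [0.93, 0.05, 0.02],   # from working
    [0.10, 0.84, 0.06],   # from disabled
    [0.00, 0.00, 1.00],   # dead is absorbing
])

start_age, max_age = 50, 110
occupancy = np.zeros(3)                   # expected person-years in each state
state_dist = np.array([1.0, 0.0, 0.0])    # everyone assumed working at age 50

for age in range(start_age, max_age):
    occupancy += state_dist               # one year of exposure in the current distribution
    state_dist = state_dist @ P           # age the cohort by one year

print(f"Expected years working  after 50: {occupancy[0]:.1f}")
print(f"Expected years disabled after 50: {occupancy[1]:.1f}")
print(f"Residual life expectancy at 50:   {occupancy[0] + occupancy[1]:.1f}")
```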
52.
In recent years, various types of terrorist attacks have occurred, causing catastrophes worldwide. According to the Global Terrorism Database (GTD), bombing attacks are the most frequent attack tactic, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBA) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011: it rose by 16.0% over a two-month period to reach its peak value, and it remained 9.0% above the predicted level after the temporary effect gradually decayed. By contrast, no significant fluctuation is found in the conditional probability process of armed assaults. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014.
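A minimal sketch of this kind of intervention analysis: an ARIMA model with a step-intervention regressor fitted to a monthly series using statsmodels. The simulated series, the intervention date (late 2011), and the ARIMA order are placeholders, not the GTD-based CPBA series or the authors' fitted model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulated monthly "conditional probability of bombing attack" series (placeholder data).
rng = np.random.default_rng(0)
idx = pd.date_range("2004-01", periods=120, freq="MS")
step = (idx >= "2011-11-01").astype(float)          # step intervention near the end of 2011
y = pd.Series(0.35 + 0.10 * step + rng.normal(0, 0.02, len(idx)), index=idx)

# ARIMA(1,0,0) with the step dummy as an intervention regressor (order is illustrative).
model = ARIMA(y, exog=step, order=(1, 0, 0))
res = model.fit()
print(res.params)                                    # the exog coefficient estimates the level shift

# Forecast 12 months ahead, assuming the intervention remains in effect.
future_exog = np.ones((12, 1))
print(res.forecast(steps=12, exog=future_exog))
```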
53.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study among these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for the commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
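To make the inverse (negative) binomial sampling setting concrete, the sketch below computes a simple Wald-type interval for the success proportion when sampling continues until a fixed number of successes r is reached. It is only a baseline illustration using the delta-method variance, not the score or saddlepoint intervals studied in the article.

```python
from math import sqrt
from scipy.stats import norm

def nb_wald_ci(r, n_trials, level=0.95):
    """Wald-type CI for the success probability p under inverse binomial sampling:
    sampling stops after r successes, having used n_trials trials in total.
    Uses the delta-method variance p^2 (1 - p) / r; a rough baseline only."""
    p_hat = r / n_trials
    z = norm.ppf(0.5 + level / 2)
    se = sqrt(p_hat**2 * (1 - p_hat) / r)
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

# Hypothetical example: sampling stopped after r = 10 cases out of 60 examined subjects.
print(nb_wald_ci(10, 60))
```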
54.
In the present article we propose the modified lambda family (MLF), namely the Freimer, Mudholkar, Kollia, and Lin (FMKL) parametrization of the generalized lambda distribution (GLD), as a model for censored data. Expressions for the probability weighted moments of the MLF are derived and used to estimate the parameters of the distribution; the probability-weighted-moments estimation technique is modified accordingly. It is shown that the distribution provides a reasonable fit to a real censored data set.
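The FMKL parametrization of the GLD is defined through its quantile function, so a small sketch of that function may help fix notation. The parameter values below are arbitrary illustrations, not estimates obtained from the censored-data application.

```python
import numpy as np

def fmkl_gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution in the
    FMKL (Freimer-Mudholkar-Kollia-Lin) parametrization, for lam3, lam4 != 0:
    Q(u) = lam1 + [ (u**lam3 - 1)/lam3 - ((1-u)**lam4 - 1)/lam4 ] / lam2."""
    u = np.asarray(u, dtype=float)
    return lam1 + ((u**lam3 - 1.0) / lam3 - ((1.0 - u)**lam4 - 1.0) / lam4) / lam2

# Arbitrary illustrative parameter values (not fitted to any data set).
u = np.linspace(0.01, 0.99, 5)
print(fmkl_gld_quantile(u, lam1=0.0, lam2=1.0, lam3=0.2, lam4=0.2))
```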
55.
The weighted kappa coefficient of a binary diagnostic test is a measure of the beyond-chance agreement between the diagnostic test and the gold standard, and it allows us to assess and compare the performance of binary diagnostic tests. In the presence of partial disease verification, the comparison of the weighted kappa coefficients of two or more binary diagnostic tests cannot be carried out by ignoring the individuals with an unknown disease status, since the resulting estimators would be affected by verification bias. In this article, we propose a global hypothesis test based on the chi-square distribution to simultaneously compare the weighted kappa coefficients when, in the presence of partial disease verification, the missing-data mechanism is ignorable. Simulation experiments have been carried out to study the type I error and the power of the global hypothesis test. The results have been applied to the diagnosis of coronary disease.
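For orientation only, the sketch below computes plain, unweighted Cohen's kappa between a binary test and the gold standard from a fully verified 2x2 table. The article's weighted kappa additionally incorporates a weighting index and a correction for partial disease verification, neither of which this baseline attempts; the table values are hypothetical.

```python
import numpy as np

def cohens_kappa_2x2(table):
    """Chance-corrected agreement between a binary test (rows) and the gold
    standard (columns) from a 2x2 table [[a, b], [c, d]].  Unweighted kappa;
    the article's weighted version also handles verification bias."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n
    p_exp = (t.sum(axis=1) * t.sum(axis=0)).sum() / n**2
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical fully verified table: test result (rows) by true disease status (columns).
print(cohens_kappa_2x2([[80, 15], [10, 95]]))
```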
56.
For many continuous distributions, a closed-form expression for the quantiles does not exist, and numerical approximations are typically developed on a distribution-by-distribution basis. This work develops a general approximation for quantiles based on the Taylor expansion. Our method only requires that the distribution has a continuous probability density function whose derivatives can be obtained up to a certain order (usually 3 or 4). We demonstrate the unified approach by approximating the quantiles of the normal, exponential, and chi-square distributions; the approximation works well for these distributions.
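A small sketch of the general idea: expand the quantile function Q(p) in a Taylor series around a point p0 where Q(p0) is known, using Q'(p) = 1/f(Q(p)) and higher derivatives obtained by differentiating the identity F(Q(p)) = p. The standard normal example below expands around the median and is purely illustrative of the approach (accuracy degrades far from p0); it is not the article's specific approximation.

```python
from scipy.stats import norm

def normal_quantile_taylor(p, p0=0.5):
    """Third-order Taylor approximation of the standard normal quantile around p0.
    From F(Q(p)) = p:  Q' = 1/f(Q),  Q'' = Q/f(Q)**2,  Q''' = (1 + 2*Q**2)/f(Q)**3,
    using f'(x) = -x f(x) for the standard normal density f."""
    q0 = norm.ppf(p0)
    f0 = norm.pdf(q0)
    d1 = 1.0 / f0
    d2 = q0 / f0**2
    d3 = (1.0 + 2.0 * q0**2) / f0**3
    h = p - p0
    return q0 + d1 * h + d2 * h**2 / 2.0 + d3 * h**3 / 6.0

for p in (0.55, 0.70, 0.90):
    print(p, normal_quantile_taylor(p), norm.ppf(p))   # approximation vs. exact quantile
```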
57.
In this article, a non-iterative posterior sampling algorithm for the linear quantile regression model based on the asymmetric Laplace distribution is proposed. The algorithm combines the inverse Bayes formulae, sampling/importance resampling, and the expectation-maximization (EM) algorithm to obtain approximately independent and identically distributed samples from the observed posterior distribution, which eliminates the convergence problems of iterative Gibbs sampling and overcomes the difficulty of evaluating standard errors in the EM algorithm. Numerical results from simulations and an application to the classical Engel data show that the non-iterative sampling algorithm is more effective than the Gibbs sampling and EM algorithms.
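The sketch below is not the article's non-iterative IBF algorithm; it is only a generic sampling/importance-resampling (SIR) step for the same working model, a linear quantile regression whose likelihood is built from the asymmetric Laplace density. The data, the flat prior, and the ad hoc normal proposal are all placeholder assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Placeholder data (not the Engel data): y = 1 + 2 x + heteroscedastic noise.
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 0.1 * x, n)
X = np.column_stack([np.ones(n), x])
tau = 0.5                                   # quantile level being modeled

def ald_loglik(beta):
    """Asymmetric Laplace log-likelihood (scale fixed at 1, constants dropped):
    -sum of the check loss rho_tau(y - X beta)."""
    u = y - X @ beta
    return -np.sum(u * (tau - (u < 0)))

# Proposal: normal centred at the least-squares fit, deliberately over-dispersed.
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
proposal = multivariate_normal(mean=beta_ls, cov=2.0 * np.linalg.inv(X.T @ X))

draws = proposal.rvs(size=5000, random_state=2)
log_w = np.array([ald_loglik(b) for b in draws]) - proposal.logpdf(draws)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Sampling/importance resampling: resample with weights to approximate the posterior.
idx = rng.choice(len(draws), size=1000, replace=True, p=w)
posterior = draws[idx]
print("approximate posterior mean of beta:", posterior.mean(axis=0))
```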
58.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models have been suggested. In this article, we introduce shared gamma frailty models with the reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters of the model, present a simulation study comparing the true parameter values with their estimates, and apply the model to a real bivariate survival data set.
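To illustrate how a shared frailty acting on the reversed hazard rate induces dependence within a pair, the sketch below simulates bivariate lifetimes from the proportional reversed hazards construction F(t | Z) = F0(t)**Z with a shared gamma frailty Z. The exponential baseline and the parameter values are illustrative assumptions, not the model fitted in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_shared_frailty_rhr(n_pairs, shape, rate=1.0):
    """Simulate bivariate lifetimes under a shared gamma frailty on the reversed
    hazard rate:  F(t | Z) = F0(t)**Z,  Z ~ Gamma(shape, scale=1/shape) (mean 1),
    with exponential baseline F0(t) = 1 - exp(-rate * t).  Illustrative only."""
    z = rng.gamma(shape, 1.0 / shape, size=n_pairs)        # shared frailty per pair
    u = rng.uniform(size=(n_pairs, 2))
    # Invert F(t | Z) = u  =>  F0(t) = u**(1/Z)  =>  t = F0^{-1}(u**(1/Z))
    v = u ** (1.0 / z[:, None])
    return -np.log1p(-v) / rate

pairs = simulate_shared_frailty_rhr(10000, shape=2.0)
# The shared frailty induces positive dependence between the paired lifetimes.
print(np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])
```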
59.
In this paper, two control charts based on the generalized linear test (GLT) and the contingency table approach are proposed for Phase-II monitoring of multivariate categorical processes. The performance of the proposed methods is compared with that of the exponentially weighted moving average-generalized likelihood ratio test (EWMA-GLRT) control chart proposed in the literature. The results show the better performance of the proposed control charts under moderate and large shifts. Moreover, a new scheme is proposed to identify the parameter responsible for an out-of-control signal. The performance of the proposed diagnostic procedure is evaluated through simulation experiments.
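A minimal sketch of Phase-II monitoring of a single categorical quality characteristic: each sample's category counts are compared with the in-control proportions through a Pearson chi-square statistic and plotted against an upper control limit. This is the generic contingency-table idea only, not the specific GLT-based charts or the EWMA-GLRT competitor; the proportions, sample size, and shift are invented.

```python
import numpy as np
from scipy.stats import chi2

def chi2_chart_stat(counts, p0):
    """Pearson chi-square statistic comparing one sample's category counts
    with the in-control category proportions p0."""
    counts = np.asarray(counts, dtype=float)
    expected = counts.sum() * np.asarray(p0, dtype=float)
    return np.sum((counts - expected) ** 2 / expected)

p0 = np.array([0.7, 0.2, 0.1])          # in-control proportions (illustrative)
ucl = chi2.ppf(0.995, df=len(p0) - 1)   # upper control limit, false-alarm rate 0.005

rng = np.random.default_rng(4)
for t in range(10):
    # Shift the process from sample 5 onward to show the chart signalling.
    p = p0 if t < 5 else np.array([0.55, 0.30, 0.15])
    sample = rng.multinomial(200, p)
    stat = chi2_chart_stat(sample, p0)
    print(t, round(stat, 2), "signal" if stat > ucl else "ok")
```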
60.
Multi-attribute control charts have received little attention compared with multivariate variables control charts. This research therefore develops a new methodology for employing multivariate exponentially weighted moving average (MEWMA) charts for m-attribute binomial processes, where the attributes are counts of nonconforming items. Moreover, since variable sample size and sampling interval (VSSI) MEWMA charts detect small process mean shifts faster than the traditional MEWMA chart, an economic design of the VSSI MEWMA chart is proposed to obtain the optimum design parameters of the chart. The sample size, the sampling interval, and the warning/action limit coefficients are obtained using a genetic algorithm such that the expected total cost per hour is minimized. Finally, a sensitivity analysis is carried out to investigate the effects of the cost and model parameters on the solution of the economic design of the VSSI MEWMA chart.
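The sketch below computes the standard MEWMA charting statistic for a stream of two-attribute count vectors, treating the binomial counts as approximately multivariate normal. It shows only the basic statistic Z_i = lam*(X_i - mu0) + (1 - lam)*Z_{i-1} and T_i^2 = Z_i' Sigma_Z^{-1} Z_i; the VSSI feature, the control limit, and the genetic-algorithm economic design discussed above are not included, and the process parameters are invented.

```python
import numpy as np

def mewma_statistics(x, mu0, sigma, lam=0.2):
    """Standard MEWMA statistics T_i^2 for a sequence of observation vectors x,
    using the asymptotic covariance (lam / (2 - lam)) * sigma.  In practice each
    T_i^2 is compared with a control limit chosen to give the desired ARL."""
    z = np.zeros(len(mu0))
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * sigma)
    stats = []
    for xi in x:
        z = lam * (xi - mu0) + (1.0 - lam) * z
        stats.append(z @ sigma_z_inv @ z)
    return np.array(stats)

# Illustrative 2-attribute binomial process: counts of two nonconformity types in
# samples of size n = 100, with independent attributes assumed for the covariance.
rng = np.random.default_rng(5)
n, p = 100, np.array([0.05, 0.08])
mu0 = n * p
sigma0 = np.diag(n * p * (1 - p))
x = rng.binomial(n, p, size=(30, 2)).astype(float)
x[20:] += np.array([3.0, 4.0])          # inject a mean shift to show the statistic reacting
print(np.round(mewma_statistics(x, mu0, sigma0, lam=0.2), 2))
```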