A total of 2,826 results were retrieved (search time: 15 ms).
By access type: fee-based full text, 2,690; free, 102; free (domestic), 34.
By subject: Management, 288; Ethnology, 13; Demography, 53; Collected series, 150; Theory and methodology, 53; General, 957; Sociology, 133; Statistics, 1,179.
By year: 2024 (2); 2023 (16); 2022 (34); 2021 (39); 2020 (49); 2019 (82); 2018 (104); 2017 (112); 2016 (101); 2015 (90); 2014 (133); 2013 (463); 2012 (223); 2011 (156); 2010 (140); 2009 (111); 2008 (117); 2007 (139); 2006 (107); 2005 (100); 2004 (98); 2003 (94); 2002 (76); 2001 (50); 2000 (45); 1999 (30); 1998 (23); 1997 (23); 1996 (8); 1995 (8); 1994 (8); 1993 (6); 1992 (12); 1991 (5); 1990 (4); 1989 (5); 1988 (1); 1987 (1); 1986 (2); 1985 (2); 1984 (3); 1983 (1); 1982 (2); 1977 (1).
Results 51-60 of the listing are shown below.
51.
Multivariate stochastic volatility models with skew distributions are proposed. Exploiting Cholesky stochastic volatility modeling, univariate stochastic volatility processes with a leverage effect and generalized hyperbolic skew t-distributions are embedded into a multivariate analysis with time-varying correlations. Bayesian modeling allows this approach to provide a parsimonious skew structure and to scale easily to high-dimensional problems. Analyses of daily stock returns are used as illustrations. Empirical results show that the time-varying correlations and the sparse skew structure contribute to improved prediction performance and Value-at-Risk forecasts.
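A minimal simulation sketch of the Cholesky stochastic volatility idea follows, assuming a two-asset system; Gaussian innovations stand in for the paper's generalized hyperbolic skew t-distributions, and all parameter values and function names are illustrative rather than taken from the paper.

```python
# Minimal simulation sketch of a two-asset Cholesky stochastic volatility model with
# leverage and a time-varying correlation. Gaussian innovations stand in for the paper's
# generalized hyperbolic skew t-distributions; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
phi, sigma_eta, rho_lev = 0.95, 0.2, -0.4   # log-vol persistence, vol-of-vol, leverage

def sv_path(T):
    """Univariate SV with leverage: return shock eps_t and vol shock eta_{t+1} are correlated."""
    h, eps = np.zeros(T), np.zeros(T)
    for t in range(T - 1):
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_lev], [rho_lev, 1.0]])
        eps[t] = np.exp(h[t] / 2) * z[0]
        h[t + 1] = phi * h[t] + sigma_eta * z[1]
    eps[-1] = np.exp(h[-1] / 2) * rng.standard_normal()
    return h, eps

h1, e1 = sv_path(T)
h2, e2 = sv_path(T)

# Time-varying Cholesky coefficient q_t: the second return loads on the first through it.
q = np.zeros(T)
for t in range(T - 1):
    q[t + 1] = 0.98 * q[t] + 0.05 * rng.standard_normal()

y1 = e1
y2 = q * e1 + e2
implied_corr = q * np.exp(h1 / 2) / np.sqrt(q ** 2 * np.exp(h1) + np.exp(h2))
print("first few implied correlations:", np.round(implied_corr[:5], 3))
```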
52.
From the perspective of classifying problem types, the question of the concept of crime should be treated as a problem of interpretive choice within pure criminal law theory, yet most previous discussions of the concept of crime in criminal law scholarship have treated it as a problem of value judgment. Because the problem type of the concept of crime has not been properly identified, scholarly debate on the concept has failed to reach even a minimal academic consensus. As a problem of interpretive choice within pure criminal law theory, the mixed concept of crime that combines formal and substantive elements need not be replaced, and the concept of crime should not be codified in the criminal code.
53.
In this paper, we investigate the selection performance of a bootstrapped version of the Akaike information criterion for nonlinear self-exciting threshold autoregressive (SETAR) data-generating processes. Empirical results are obtained via Monte Carlo simulations. The quality of our method is assessed by comparison with its non-bootstrap counterpart and through a novel procedure based on artificial neural networks.
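The sketch below illustrates one way such a bootstrapped AIC could be computed for a two-regime SETAR model. The grid-searched threshold, the moving-block bootstrap, and the averaging of the criterion over resamples are all illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: choosing the order p of a two-regime SETAR(p) model by a bootstrapped AIC.
# The threshold is grid-searched over sample quantiles of the lagged series, a moving-block
# bootstrap is used, and the criterion is averaged over resamples; these are illustrative
# assumptions, not the authors' exact procedure.
import numpy as np

rng = np.random.default_rng(1)

def fit_setar(y, p, d=1):
    """OLS fit of a 2-regime SETAR(p) with delay d; returns (aic, rss, threshold)."""
    n = len(y)
    X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
    Y, thr_var = y[p:], y[p - d:n - d]
    best = (np.inf, None, None)
    for r in np.quantile(thr_var, np.linspace(0.15, 0.85, 15)):
        low, rss = thr_var <= r, 0.0
        for mask in (low, ~low):
            Xr = np.column_stack([np.ones(mask.sum()), X[mask]])
            beta, *_ = np.linalg.lstsq(Xr, Y[mask], rcond=None)
            rss += np.sum((Y[mask] - Xr @ beta) ** 2)
        k = 2 * (p + 1) + 1                       # both regimes' coefficients + threshold
        aic = len(Y) * np.log(rss / len(Y)) + 2 * k
        if aic < best[0]:
            best = (aic, rss, r)
    return best

def block_bootstrap(y, block=25):
    """Moving-block bootstrap resample (a simple stand-in for a residual bootstrap)."""
    n = len(y)
    starts = rng.integers(0, n - block, size=int(np.ceil(n / block)))
    return np.concatenate([y[s:s + block] for s in starts])[:n]

def bootstrap_aic(y, p, B=50):
    """Average AIC of a SETAR(p) fit over bootstrap resamples."""
    return np.mean([fit_setar(block_bootstrap(y), p)[0] for _ in range(B)])

# Pick the order with the smallest bootstrapped AIC among candidate orders 1..3.
y = np.cumsum(rng.standard_normal(400)) * 0.1 + rng.standard_normal(400)
scores = {p: bootstrap_aic(y, p) for p in (1, 2, 3)}
print("selected order:", min(scores, key=scores.get), scores)
```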
54.
A hierarchical Bayesian approach to ranking and selection, as well as estimation of the related means, in two-way models is considered. Using Monte Carlo simulation with importance sampling, we can efficiently carry out the three- or four-dimensional integrations that are required. An example is included to illustrate the methodology.
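A rough sketch of the importance-sampling idea is given below for a one-way hierarchical normal-means model with two hyperparameters; the paper's two-way setting with three- or four-dimensional integrations is analogous. The data, prior, and proposal used here are assumptions for illustration only.

```python
# Sketch of ranking-and-selection via self-normalized importance sampling in a hierarchical
# normal-means model. The paper treats two-way models with three- or four-dimensional
# integrations; this one-way simplification with hyperparameters (mu, log tau) only
# illustrates the idea, and the prior and proposal below are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = np.array([4.1, 3.6, 5.0, 4.4])       # observed group/cell means (made-up data)
s = np.array([0.5, 0.5, 0.6, 0.4])       # their known standard errors

# Importance-sampling proposal for (mu, log tau): a broad Student t around rough estimates.
prop = stats.multivariate_t(loc=[y.mean(), np.log(y.std(ddof=1))],
                            shape=np.diag([1.0, 0.5]), df=5)
M = 5000
draws = prop.rvs(M, random_state=rng)
mu, tau = draws[:, 0], np.exp(draws[:, 1])

# Marginal likelihood given (mu, tau): y_i ~ N(mu, s_i^2 + tau^2), independent across i.
log_lik = stats.norm.logpdf(y[None, :], mu[:, None],
                            np.sqrt(s[None, :] ** 2 + tau[:, None] ** 2)).sum(axis=1)
# Prior: flat on mu (constant omitted), standard normal on log tau.
log_w = log_lik + stats.norm.logpdf(draws[:, 1]) - prop.logpdf(draws)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Conditional posterior of each mean given (mu, tau) is normal; draw one vector per sample.
post_var = 1.0 / (1.0 / s[None, :] ** 2 + 1.0 / tau[:, None] ** 2)
post_mean = post_var * (y[None, :] / s[None, :] ** 2 + mu[:, None] / tau[:, None] ** 2)
theta = post_mean + np.sqrt(post_var) * rng.standard_normal((M, len(y)))

print("posterior means:", np.round(w @ post_mean, 3))
print("P(group i has the largest mean):",
      np.round([w @ (theta.argmax(axis=1) == i) for i in range(len(y))], 3))
```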
55.
I examine whether prenatal sex selection has substituted for postnatal excess female mortality by analysing the dynamics of child sex ratios between 1980 and 2015 using country-level life table data. I decompose changes in child sex ratios into a ‘fertility’ component attributable to prenatal sex selection and a ‘mortality’ component attributable to sex differentials in postnatal survival. Although reductions in numbers of excess female deaths have accompanied increases in missing female births in all countries experiencing the emergence of prenatal sex selection, relative excess female mortality has persisted in some countries but not others. In South Korea, Armenia, and Azerbaijan, mortality reductions favouring girls accompanied increases in prenatal sex selection. In India, excess female mortality was much higher and largely stable as prenatal sex selection emerged, but slight reductions were seen in the 2000s. In China, although absolute measures showed reductions, relative excess female mortality persisted as prenatal sex selection increased.
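As a purely illustrative piece of arithmetic (not the paper's life-table decomposition), the child sex ratio can be written as the sex ratio at birth times the ratio of male to female survival, which yields a log-additive split of its change into a 'fertility' and a 'mortality' component; all numbers below are hypothetical.

```python
# Illustrative arithmetic for splitting a change in the child sex ratio into a
# "fertility" (sex ratio at birth) and a "mortality" (survival) component.
# The identity CSR ~= SRB * (male survival to age 5 / female survival to age 5)
# gives a log-additive split; the paper's life-table decomposition may differ in detail.
import numpy as np

srb_1980, srb_2015 = 1.07, 1.15          # sex ratio at birth (hypothetical values)
surv_m_1980, surv_f_1980 = 0.94, 0.93    # survival to age 5 (hypothetical values)
surv_m_2015, surv_f_2015 = 0.985, 0.982

csr_1980 = srb_1980 * surv_m_1980 / surv_f_1980
csr_2015 = srb_2015 * surv_m_2015 / surv_f_2015

fertility_part = np.log(srb_2015) - np.log(srb_1980)
mortality_part = np.log(surv_m_2015 / surv_f_2015) - np.log(surv_m_1980 / surv_f_1980)
total = np.log(csr_2015) - np.log(csr_1980)
print(f"total log change {total:.4f} = fertility {fertility_part:.4f} "
      f"+ mortality {mortality_part:.4f}")
```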
56.
Sample selection and attrition are inherent in a range of treatment evaluation problems, such as the estimation of the returns to schooling or training. Conventional estimators tackling selection bias typically rely on restrictive functional form assumptions that are unlikely to hold in reality. This paper shows identification of average and quantile treatment effects in the presence of a double selection problem into (i) a selective subpopulation (e.g., employment, with selection on unobservables) and (ii) a binary treatment (e.g., training, with selection on observables), based on weighting observations by the inverse of a nested propensity score that characterizes either selection probability. Weighting estimators based on parametric propensity score models are applied to female labor market data to estimate the returns to education.
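A simplified sketch of the nested weighting idea follows. It treats both selection steps as driven by observed covariates, uses made-up data and logistic propensity models, and omits the paper's handling of selection on unobservables and of quantile effects; all variable names are hypothetical.

```python
# Simplified sketch of weighting by the inverse of a nested propensity score.
# Both selection steps are treated as driven by observed covariates here (the paper
# also allows selection on unobservables); data, models, and variable names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal((n, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x[:, 0])))                        # treatment
work = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * x[:, 1] + 0.4 * treat))))   # selection
wage = 1.0 + 0.5 * treat + x @ [0.3, 0.6] + rng.standard_normal(n)
wage[work == 0] = np.nan                                   # wage observed only if working

# Nested propensity score: P(work = 1 | X, D) times P(D = 1 | X), per observation.
p_work = LogisticRegression().fit(np.column_stack([x, treat]), work).predict_proba(
    np.column_stack([x, treat]))[:, 1]
p_treat = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

obs = work == 1
w1 = (1.0 / (p_treat * p_work))[obs & (treat == 1)]
w0 = (1.0 / ((1 - p_treat) * p_work))[obs & (treat == 0)]
ate = (np.sum(w1 * wage[obs & (treat == 1)]) / np.sum(w1)
       - np.sum(w0 * wage[obs & (treat == 0)]) / np.sum(w0))
print(f"weighted treatment effect estimate: {ate:.3f}  (true effect is 0.5)")
```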
57.
58.
Analysis of massive datasets is challenging owing to the limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine these regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and a data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
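A compact sketch of the divide-and-conquer scheme is given below: the composite check loss is minimised in each block and the slope estimates are combined with weights proportional to block sizes. The derivative-free optimiser and the simulated data are illustrative assumptions; a practical implementation would use a linear-programming solver.

```python
# Sketch of divide-and-conquer composite quantile regression (CQR): fit CQR in each block
# by direct minimisation of the composite check loss, then average the slope estimates
# with weights proportional to block sizes. A real implementation would use a
# linear-programming solver; the derivative-free optimiser here is just for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
taus = np.array([0.25, 0.5, 0.75])

def cqr_fit(X, y, taus):
    """Minimise sum_k sum_i rho_{tau_k}(y_i - b_k - X_i beta) over (b_1..b_K, beta)."""
    K, p = len(taus), X.shape[1]
    def loss(theta):
        b, beta = theta[:K], theta[K:]
        u = y[None, :] - b[:, None] - (X @ beta)[None, :]     # K x n residual matrix
        return np.sum(np.where(u >= 0, taus[:, None] * u, (taus[:, None] - 1) * u))
    theta0 = np.concatenate([np.quantile(y, taus), np.zeros(p)])
    res = minimize(loss, theta0, method="Nelder-Mead",
                   options={"maxiter": 10000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x[K:]                                          # slope estimates only

# Simulated data, split into blocks as if the full sample did not fit in memory.
n, p = 20000, 2
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=n)

blocks = np.array_split(np.arange(n), 10)
betas = np.array([cqr_fit(X[idx], y[idx], taus) for idx in blocks])
weights = np.array([len(idx) for idx in blocks], dtype=float)
beta_dc = weights @ betas / weights.sum()
print("divide-and-conquer CQR slopes:", np.round(beta_dc, 3))
```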
59.
In this article, to reduce the computational load of Bayesian variable selection, we used a variant of reversible jump Markov chain Monte Carlo methods and the Holmes and Held (HH) algorithm to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we proposed a simple proposal distribution for the model index variables, and used a simulation study and a real example to compare the performance of the HH algorithm under our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.
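The sketch below conveys the flavour of sampling model index (inclusion) variables with a simple one-flip proposal. It is a stand-in, not the Holmes and Held auxiliary-variable sampler: marginal likelihoods are approximated by BIC from maximum-likelihood fits, and no random effects are included.

```python
# Stand-in sketch for sampling model-index (inclusion) variables in logistic regression.
# NOT the Holmes-Held auxiliary-variable sampler: marginal likelihoods are approximated
# by BIC from a maximum-likelihood fit, and the proposal simply flips one randomly
# chosen inclusion indicator per iteration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p = 500, 8
X = rng.standard_normal((n, p))
true_beta = np.array([1.5, -1.0, 0.8] + [0.0] * (p - 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))

def log_marginal_bic(gamma):
    """-0.5 * BIC of the logistic model using the covariates flagged by gamma."""
    k = int(gamma.sum())
    if k == 0:
        p1 = y.mean()
        return np.sum(y * np.log(p1) + (1 - y) * np.log(1 - p1)) - 0.5 * np.log(n)
    fit = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, gamma.astype(bool)], y)
    prob = np.clip(fit.predict_proba(X[:, gamma.astype(bool)])[:, 1], 1e-12, 1 - 1e-12)
    ll = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    return ll - 0.5 * (k + 1) * np.log(n)

gamma = np.zeros(p)
current = log_marginal_bic(gamma)
counts = np.zeros(p)
for it in range(2000):
    j = rng.integers(p)
    proposal = gamma.copy()
    proposal[j] = 1 - proposal[j]                 # flip one inclusion indicator
    cand = log_marginal_bic(proposal)
    if np.log(rng.random()) < cand - current:     # uniform model prior cancels
        gamma, current = proposal, cand
    counts += gamma
print("posterior inclusion frequencies:", np.round(counts / 2000, 2))
```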
60.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification for two reasons: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; and (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered when the LRT is performed; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
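A toy version of the two steps (homogeneity LRT and likelihood-based change-point cutoff) is sketched below, with a normal outcome standing in for the survival endpoint; the data, the cutoff grid, and the use of a permutation or simulated null for the LRT are illustrative assumptions.

```python
# Sketch of a likelihood-based change-point search for a biomarker-score cutoff, compared
# with the median split. A normal outcome stands in for the paper's survival endpoint
# (which would use a survival likelihood); data and thresholds are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 600
score = rng.uniform(size=n)                                    # univariate composite score
outcome = 1.0 + 0.8 * (score > 0.7) + rng.standard_normal(n)   # true cutoff at 0.7

def two_group_loglik(cut):
    ll = 0.0
    for mask in (score <= cut, score > cut):
        x = outcome[mask]
        ll += np.sum(stats.norm.logpdf(x, x.mean(), x.std(ddof=0) + 1e-9))
    return ll

ll0 = np.sum(stats.norm.logpdf(outcome, outcome.mean(), outcome.std(ddof=0)))
cuts = np.quantile(score, np.linspace(0.1, 0.9, 81))
lls = np.array([two_group_loglik(c) for c in cuts])

lrt = 2 * (lls.max() - ll0)                                    # homogeneity test statistic
best_cut = cuts[lls.argmax()]
print(f"LRT statistic {lrt:.1f}; change-point cutoff {best_cut:.2f} "
      f"vs median cutoff {np.median(score):.2f}")
# In practice the LRT's null distribution is obtained by permutation/simulation because
# the cutoff is searched over; subgroup selection proceeds only if the test is significant.
```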