Full-text access type
Paid full text | 3901 articles |
Free | 209 articles |
Free (domestic) | 42 articles |
Subject classification
Management | 369 articles |
Labour science | 1 article |
Ethnology | 23 articles |
Talent studies | 1 article |
Demography | 125 articles |
Collected works and book series | 200 articles |
Theory and methodology | 90 articles |
General | 1670 articles |
Sociology | 206 articles |
Statistics | 1467 articles |
Publication year
2024 | 16 articles |
2023 | 54 articles |
2022 | 64 articles |
2021 | 79 articles |
2020 | 114 articles |
2019 | 141 articles |
2018 | 166 articles |
2017 | 164 articles |
2016 | 181 articles |
2015 | 142 articles |
2014 | 232 articles |
2013 | 570 articles |
2012 | 327 articles |
2011 | 261 articles |
2010 | 217 articles |
2009 | 156 articles |
2008 | 166 articles |
2007 | 200 articles |
2006 | 145 articles |
2005 | 124 articles |
2004 | 121 articles |
2003 | 108 articles |
2002 | 88 articles |
2001 | 58 articles |
2000 | 55 articles |
1999 | 39 articles |
1998 | 35 articles |
1997 | 32 articles |
1996 | 12 articles |
1995 | 12 articles |
1994 | 10 articles |
1993 | 7 articles |
1992 | 16 articles |
1991 | 9 articles |
1990 | 7 articles |
1989 | 6 articles |
1988 | 4 articles |
1987 | 1 article |
1986 | 2 articles |
1985 | 3 articles |
1984 | 3 articles |
1983 | 2 articles |
1982 | 2 articles |
1977 | 1 article |
Sort order: 4152 results found (search time: 31 ms)
81.
《Journal of Statistical Computation and Simulation》2012,82(2):235-250
In this paper, we investigate the selection performance of a bootstrapped version of the Akaike information criterion for nonlinear self-exciting threshold autoregressive-type data-generating processes. Empirical results are obtained via Monte Carlo simulations. The quality of our method is assessed by comparison with its non-bootstrap counterpart and through a novel procedure based on artificial neural networks.
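The paper's SETAR setting is more elaborate than space allows, but the core idea of bootstrapping an information criterion for order selection can be sketched for plain AR(p) models. The residual-bootstrap scheme and all function names below are our own simplification, not the authors' procedure.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model; returns coefficients,
    residuals, and the residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return coef, resid, np.mean(resid ** 2)

def aic(y, p):
    """Gaussian AIC (up to constants) of an AR(p) fit."""
    n = len(y) - p
    _, _, s2 = fit_ar(y, p)
    return n * np.log(s2) + 2 * p

def bootstrap_aic(y, p_max, B=200, rng=None):
    """Average the AIC over residual-bootstrap replicates of each
    candidate order and select the order with the smallest average."""
    rng = np.random.default_rng(rng)
    scores = np.zeros(p_max)
    for p in range(1, p_max + 1):
        coef, resid, _ = fit_ar(y, p)
        centred = resid - resid.mean()
        total = 0.0
        for _ in range(B):
            # regenerate a series from the fitted AR(p) with resampled residuals
            e = rng.choice(centred, size=len(y))
            yb = np.zeros(len(y))
            yb[:p] = y[:p]
            for t in range(p, len(y)):
                yb[t] = coef @ yb[t - p:t][::-1] + e[t]
            total += aic(yb, p)
        scores[p - 1] = total / B
    return int(np.argmin(scores)) + 1  # selected order
```

The bootstrap average smooths the criterion's sampling noise, which is the motivation for comparing it against the plain AIC.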
82.
An empirical study of the current allocation of resources across sub-regional and county-level areas of the Yangtze River Delta yields three findings. First, while undergoing rapid heavy industrialization, the Yangtze River Delta region has begun to cross the threshold into an endogenous-growth model, with the contribution of human capital becoming increasingly evident. Second, the industrial production capacity and technological innovation capacity of a growing number of sub-regional areas are surpassing those of Beijing, Shanghai, and the provincial capitals; national science-and-technology and higher-education resources should therefore flow to these areas, so that industrial, scientific, and educational resources can be integrated in the same space and the returns on the country's investment in technological innovation greatly increased. Third, the real foothold for transforming the mode of economic development and implementing the innovation-driven national strategy is to support and encourage more sub-regional and county-level areas to build higher-education systems, create more human capital, and undertake more technological innovation, so that more regions adopt growth strategies based on technological progress. Ample evidence indicates that the time has come to adopt a regional growth strategy that emphasizes the efficiency of innovation resources.
83.
《Journal of Statistical Computation and Simulation》2012,82(2-3):107-117
A hierarchical Bayesian approach to ranking and selection, as well as estimation of the related means, in two-way models is considered. Using Monte Carlo simulation with importance sampling, we are able to carry out the required three- or four-dimensional integrations efficiently. An example is included to illustrate the methodology.
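The Monte Carlo integration step the abstract mentions can be illustrated with a generic self-normalised importance-sampling estimator. This is only a one-dimensional sketch of the technique under an assumed unnormalised target density, not the paper's hierarchical two-way model.

```python
import numpy as np

def importance_mean(log_target, sample_proposal, log_proposal, n, rng=None):
    """Self-normalised importance-sampling estimate of the mean of an
    unnormalised target density, using draws from a proposal density."""
    rng = np.random.default_rng(rng)
    draws = sample_proposal(rng, n)
    log_w = log_target(draws) - log_proposal(draws)
    w = np.exp(log_w - log_w.max())   # stabilise before exponentiating
    return np.sum(w * draws) / np.sum(w)
```

Because the weights are self-normalised, any constants omitted from the log densities cancel, which is what makes the method usable with unnormalised posteriors.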
84.
Ridhi Kashyap 《Population studies》2019,73(1):57-78
I examine whether prenatal sex selection has substituted for postnatal excess female mortality by analysing the dynamics of child sex ratios between 1980 and 2015 using country-level life table data. I decompose changes in child sex ratios into a ‘fertility’ component attributable to prenatal sex selection and a ‘mortality’ component attributable to sex differentials in postnatal survival. Although reductions in the number of excess female deaths have accompanied increases in missing female births in all countries experiencing the emergence of prenatal sex selection, relative excess female mortality has persisted in some countries but not others. In South Korea, Armenia, and Azerbaijan, mortality reductions favouring girls accompanied increases in prenatal sex selection. In India, excess female mortality was much higher and largely stable as prenatal sex selection emerged, though slight reductions were seen in the 2000s. In China, although absolute measures showed reductions, relative excess female mortality persisted as prenatal sex selection increased.
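The fertility/mortality decomposition can be sketched with a toy two-period calculation: holding one component fixed while the other changes splits the total change in the child sex ratio exactly into the two parts. The setup and numbers below are illustrative, not the paper's life-table data or exact decomposition formula.

```python
def child_sex_ratio(srb, surv_m, surv_f):
    """Boys per 100 girls among surviving children, given the sex ratio
    at birth (boys per 100 girls) and sex-specific survival probabilities."""
    return srb * surv_m / surv_f

def decompose(srb0, sm0, sf0, srb1, sm1, sf1):
    """Split the change in the child sex ratio between two periods into a
    'fertility' part (SRB changes, survival held at baseline) and a
    'mortality' part (survival changes, SRB held at its new value)."""
    total = child_sex_ratio(srb1, sm1, sf1) - child_sex_ratio(srb0, sm0, sf0)
    fertility = child_sex_ratio(srb1, sm0, sf0) - child_sex_ratio(srb0, sm0, sf0)
    mortality = child_sex_ratio(srb1, sm1, sf1) - child_sex_ratio(srb1, sm0, sf0)
    return total, fertility, mortality
```

By construction the two parts sum exactly to the total change, which is the property that makes such decompositions interpretable.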
85.
Martin Huber 《Econometric Reviews》2014,33(8):869-905
Sample selection and attrition are inherent in a range of treatment evaluation problems such as the estimation of the returns to schooling or training. Conventional estimators tackling selection bias typically rely on restrictive functional form assumptions that are unlikely to hold in reality. This paper shows identification of average and quantile treatment effects in the presence of the double selection problem into (i) a selective subpopulation (e.g., working; selection on unobservables) and (ii) a binary treatment (e.g., training; selection on observables), based on weighting observations by the inverse of a nested propensity score that characterizes either selection probability. Weighting estimators based on parametric propensity score models are applied to female labor market data to estimate the returns to education.
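A minimal sketch of weighting by the inverse of a nested propensity score, assuming simple logistic models for both the treatment and the selection step. The simulation-style helper names are ours, and the paper's estimator and identification conditions are considerably more general than this illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit_fit(X, y, iters=1000, lr=0.5):
    """Plain gradient-ascent logistic regression with an intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        w += lr * Xb.T @ (y - sigmoid(Xb @ w)) / len(y)
    return w

def logit_predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return sigmoid(Xb @ w)

def nested_ipw_ate(X, d, s, y):
    """Treatment-effect estimate on the selected (s == 1) outcomes,
    weighting each observation by the inverse of p(d | x) * p(s | d, x);
    a simplified nested propensity score in the spirit of the abstract."""
    pd_ = logit_predict(logit_fit(X, d), X)            # treatment propensity
    Xs = np.column_stack([X, d])
    ps = logit_predict(logit_fit(Xs, s), Xs)           # selection propensity
    sel = (s == 1).astype(float)
    w1 = d * sel / (pd_ * ps)
    w0 = (1.0 - d) * sel / ((1.0 - pd_) * ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
```

The self-normalised weights keep the estimator stable when propensities approach 0 or 1.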
86.
87.
Analysis of massive datasets is challenging owing to limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine the regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and data analysis are conducted to illustrate the finite sample performance of the proposed methods.
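The divide-and-conquer recipe (split, fit per block, recombine by a weighted average) can be sketched as follows, with ordinary least squares standing in for the CQR block fit to keep the example short. The function names are ours, and the paper's weighting scheme is more refined than the block-size weights used here.

```python
import numpy as np

def block_estimate(X, y):
    """Per-block least-squares fit (a stand-in for the CQR block fit)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def divide_and_conquer(X, y, n_blocks):
    """Split the rows into blocks, fit each block separately, and combine
    the coefficient vectors by a block-size-weighted average."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    coefs = np.array([block_estimate(X[i], y[i]) for i in idx])
    weights = np.array([len(i) for i in idx], dtype=float)
    return (weights[:, None] * coefs).sum(axis=0) / weights.sum()
```

Only one block ever needs to be in memory at a time, which is the point of the approach.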
88.
《Scandinavian Journal of Statistics》2018,45(3):792-805
When sampling from a continuous population (or distribution), we often want a rather small sample due to some cost attached to processing the sample or to collecting information in the field. Moreover, a probability sample that allows for design-based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method for selecting well-spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
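A compact sketch of the local pivotal method for selecting a well-spread sample: repeatedly pick a unit whose inclusion probability is still undecided, find its nearest undecided neighbour, and resolve probability mass between the pair, which pushes selected units apart in space. This is a common textbook formulation of the method, not necessarily the exact variants compared in the paper.

```python
import numpy as np

def local_pivotal(coords, probs, rng=None):
    """Local pivotal method: draw a spatially balanced sample whose
    expected size equals the sum of the inclusion probabilities."""
    rng = np.random.default_rng(rng)
    p = np.asarray(probs, dtype=float).copy()
    eps = 1e-9

    def undecided():
        return np.where((p > eps) & (p < 1 - eps))[0]

    u = undecided()
    while len(u) > 1:
        i = rng.choice(u)
        others = u[u != i]
        # nearest undecided neighbour in Euclidean distance
        j = others[np.argmin(np.sum((coords[others] - coords[i]) ** 2, axis=1))]
        s = p[i] + p[j]
        if s < 1:          # one of the pair drops to 0
            if rng.random() < p[j] / s:
                p[i], p[j] = 0.0, s
            else:
                p[i], p[j] = s, 0.0
        else:              # one of the pair is promoted to 1
            if rng.random() < (1 - p[j]) / (2 - s):
                p[i], p[j] = 1.0, s - 1.0
            else:
                p[i], p[j] = s - 1.0, 1.0
        u = undecided()
    if len(u) == 1:        # leftover fractional mass resolves by a coin flip
        p[u[0]] = 1.0 if rng.random() < p[u[0]] else 0.0
    return np.where(p > 0.5)[0]
```

Each pairwise update preserves the sum of the inclusion probabilities, so the design respects the prescribed expected sample size while spreading the sample spatially.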
89.
In this article, to reduce the computational load of Bayesian variable selection, we use a variant of reversible jump Markov chain Monte Carlo methods, the Holmes and Held (HH) algorithm, to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we propose a simple proposal distribution for the model index variables, and use a simulation study and a real example to compare the performance of the HH algorithm under our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.
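The mechanics of sampling model index variables can be illustrated with a much simpler stand-in: a Metropolis sampler over binary inclusion indicators for a linear model, with BIC replacing the marginal likelihood. This is not the HH algorithm and not a logistic mixed model; it only sketches the "flip one indicator, accept or reject" dynamic that such proposal distributions drive.

```python
import numpy as np

def bic(X, y, gamma):
    """BIC of a linear model using the columns flagged in gamma
    (a cheap stand-in for a marginal likelihood)."""
    n = len(y)
    k = int(gamma.sum())
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        Xg = X[:, gamma.astype(bool)]
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        rss = np.sum((y - Xg @ beta) ** 2)
    return n * np.log(rss / n) + (k + 1) * np.log(n)

def mh_variable_selection(X, y, iters=2000, rng=None):
    """Metropolis sampler over binary inclusion indicators with a
    single-flip proposal; returns per-variable inclusion frequencies."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    gamma = np.zeros(p)
    cur = bic(X, y, gamma)
    counts = np.zeros(p)
    for _ in range(iters):
        j = rng.integers(p)
        prop = gamma.copy()
        prop[j] = 1.0 - prop[j]      # flip one inclusion indicator
        new = bic(X, y, prop)
        diff = (cur - new) / 2.0     # accept with probability exp(-dBIC / 2)
        if diff >= 0 or rng.random() < np.exp(diff):
            gamma, cur = prop, new
        counts += gamma
    return counts / iters
```

The inclusion frequencies approximate posterior inclusion probabilities under the BIC surrogate; richer proposal distributions, like the one the article studies, mainly change how quickly this chain mixes.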
90.
Development of predictive signatures for treatment selection in precision medicine with survival outcomes
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores used to divide the patients into biomarker-positive and biomarker-negative subgroups. This may, however, bias patient subgroup identification for two reasons: (1) treatment may be equally effective for all patients, with no subgroup difference; (2) the median score may be an inappropriate cutoff when the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects for the simulation designs considered; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
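The change-point idea for choosing a cutoff can be sketched as a Gaussian profile-log-likelihood scan over candidate splits of a univariate score. This generic criterion and the function below are our own simplification for a continuous outcome, not the paper's survival-endpoint algorithm.

```python
import numpy as np

def changepoint_cutoff(score, y, min_size=5):
    """Scan cutoffs on a univariate score and return the split that
    maximises the Gaussian profile log-likelihood gain over a
    single-group fit (larger gain = sharper two-group separation)."""
    order = np.argsort(score)
    s, v = np.asarray(score)[order], np.asarray(y)[order]
    n = len(v)
    base = n * np.log(v.var() + 1e-12)
    best_gain, best_cut = -np.inf, None
    for k in range(min_size, n - min_size):   # keep min_size per subgroup
        gain = base - k * np.log(v[:k].var() + 1e-12) \
                    - (n - k) * np.log(v[k:].var() + 1e-12)
        if gain > best_gain:
            best_gain, best_cut = gain, (s[k - 1] + s[k]) / 2.0
    return best_cut
```

Unlike the median cutoff, the scan adapts to unequal subgroup sizes: the selected split lands where the outcomes actually change, not at the 50th percentile of the scores.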