Full-text access type
Paid full text | 5830 articles |
Free | 224 articles |
Free (domestic) | 21 articles |
Subject category
Management | 628 articles |
Ethnology | 9 articles |
Demography | 88 articles |
Collected works | 80 articles |
Theory and methodology | 138 articles |
General | 515 articles |
Sociology | 277 articles |
Statistics | 4340 articles |
Publication year
2023 | 62 articles |
2022 | 46 articles |
2021 | 79 articles |
2020 | 103 articles |
2019 | 193 articles |
2018 | 254 articles |
2017 | 406 articles |
2016 | 191 articles |
2015 | 191 articles |
2014 | 229 articles |
2013 | 1307 articles |
2012 | 444 articles |
2011 | 228 articles |
2010 | 190 articles |
2009 | 242 articles |
2008 | 215 articles |
2007 | 214 articles |
2006 | 182 articles |
2005 | 194 articles |
2004 | 167 articles |
2003 | 132 articles |
2002 | 106 articles |
2001 | 99 articles |
2000 | 91 articles |
1999 | 81 articles |
1998 | 75 articles |
1997 | 60 articles |
1996 | 29 articles |
1995 | 25 articles |
1994 | 41 articles |
1993 | 24 articles |
1992 | 28 articles |
1991 | 24 articles |
1990 | 14 articles |
1989 | 15 articles |
1988 | 19 articles |
1987 | 10 articles |
1986 | 6 articles |
1985 | 12 articles |
1984 | 8 articles |
1983 | 12 articles |
1982 | 13 articles |
1981 | 2 articles |
1980 | 4 articles |
1979 | 3 articles |
1978 | 2 articles |
1977 | 1 article |
1976 | 1 article |
1975 | 1 article |
6075 results found; search time: 15 ms
1.
This article considers statistical inference for heteroscedastic partially linear varying-coefficient models. We construct an efficient estimator for the parametric component by applying the weighted profile least-squares approach, and show that it is semiparametrically efficient in the sense that the inverse of the estimator's asymptotic variance attains the semiparametric efficiency bound. Simulation studies illustrate the performance of the proposed method.
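The weighting idea behind such estimators can be illustrated in miniature: in a purely linear model with heteroscedastic errors, reweighting each observation by its inverse variance recovers efficiency that ordinary least squares gives up. A minimal sketch (the synthetic data and known variance function are assumptions for illustration; the article's estimator additionally profiles out the nonparametric varying coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))
beta = np.array([1.5, -2.0])
# Heteroscedastic noise: the error standard deviation depends on the first covariate.
sigma = 0.5 + np.abs(x[:, 0])
y = x @ beta + rng.normal(scale=sigma)

# Ordinary least squares ignores the unequal variances.
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]

# Weighted least squares: weight each observation by its inverse variance.
w = 1.0 / sigma**2
xw = x * w[:, None]
beta_wls = np.linalg.solve(xw.T @ x, xw.T @ y)
```

Both estimators are consistent here, but the weighted one has the smaller asymptotic variance, which is the sense in which weighting buys efficiency.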
2.
Stephen J. Ruberg Frank E. Harrell Jr. Margaret Gamalo-Siebers Lisa LaVange J. Jack Lee Karen Price 《The American statistician》2019,73(1):319-327
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they get them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence lays the blame on deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
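The kind of probability statement the authors advocate is easy to compute in the simplest conjugate case. A hedged sketch, with entirely hypothetical effect sizes and standard errors, treating a Phase 2 estimate as a normal prior for a Phase 3 normal likelihood:

```python
import math

# Hypothetical numbers for illustration only.
# Prior from Phase 2: estimated treatment effect 0.30, standard error 0.20.
prior_mean, prior_se = 0.30, 0.20
# Phase 3 data: observed effect 0.25, standard error 0.10.
data_mean, data_se = 0.25, 0.10

# Conjugate normal-normal update: precisions (inverse variances) add,
# and the posterior mean is a precision-weighted average.
prior_prec = 1.0 / prior_se**2
data_prec = 1.0 / data_se**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
post_se = math.sqrt(1.0 / post_prec)

# Posterior probability that the treatment effect is positive,
# via the normal CDF expressed with the error function.
p_positive = 0.5 * (1.0 + math.erf(post_mean / (post_se * math.sqrt(2.0))))
```

The output is a direct probability statement ("the effect is positive with probability p") rather than a p-value, which is the contrast the abstract draws.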
3.
4.
Ye Weiping 《Social Sciences in China》2018,39(1):34-49
Modern analytical models for anti-monopoly laws are a core element of the application of those laws. Since the Anti-Monopoly Law of the People’s Republic of China was promulgated in 2008, law enforcement and judicial authorities have applied different analytical models, leading to divergent legal and regulatory outcomes as similar cases receive different verdicts. To select a suitable analytical model for China’s Anti-Monopoly Law, we need to consider the possible contribution of both economic analysis and legal formalism and to learn from the mature systems and experience of foreign countries. It is also necessary to take into account such binding constraints as the current composition of China’s anti-monopoly legal system, the ability of implementing agencies and the supply of economic analysis, in order to ensure complementarity between the analytical model chosen and the complexity of economic analysis and between the professionalism of implementing agencies and the cost of compliance for participants in economic activities. In terms of institutional design, the models should provide a considered explanation of the legislative aims of the law’s provisions. It is necessary, therefore, to establish a processing model of behavioral classification that is based on China’s national conditions, applies analytical models using normative comprehensive analysis, makes use of the distribution rule of burden of proof, improves supporting systems related to analytical models and enhances the ability of public authorities to implement the law.
5.
《Risk analysis》2018,38(9):1988-2009
Harbor seals in Iliamna Lake, Alaska, are a small, isolated population, and one of only two freshwater populations of harbor seals in the world, yet little is known about their abundance or risk for extinction. Bayesian hierarchical models were used to estimate abundance and trend of this population. Observational models were developed from aerial survey and harvest data, and they included effects for time of year and time of day on survey counts. Underlying models of abundance and trend were based on a Leslie matrix model that used prior information on vital rates from the literature. We developed three scenarios for variability in the priors and used them as part of a sensitivity analysis. The models were fitted using Markov chain Monte Carlo methods. The population production rate implied by the vital rate estimates was about 5% per year, very similar to the average annual harvest rate. After a period of growth in the 1980s, the population appears to be relatively stable at around 400 individuals. A population viability analysis assessing the risk of quasi‐extinction, defined as any reduction to 50 animals or below in the next 100 years, ranged from 1% to 3%, depending on the prior scenario. Although this is moderately low risk, it does not include genetic or catastrophic environmental events, which may have occurred to the population in the past, so our results should be applied cautiously.
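The Leslie matrix machinery underlying the abundance model can be sketched briefly. All vital rates and stage counts below are hypothetical placeholders, not the article's estimates; the sketch only shows how a stage-structured projection and its implied growth rate are computed:

```python
import numpy as np

# Hypothetical three-stage Leslie matrix (pup, juvenile, adult):
# fecundities on the first row, survival rates below the diagonal.
L = np.array([
    [0.0, 0.0, 0.25],   # only adults reproduce
    [0.6, 0.0, 0.0],    # pup -> juvenile survival
    [0.0, 0.8, 0.92],   # juvenile -> adult, adult survival
])

n0 = np.array([50.0, 80.0, 270.0])  # initial abundance in each stage

# Project the population forward year by year.
n = n0.copy()
for _ in range(20):
    n = L @ n

# The dominant eigenvalue of L gives the asymptotic annual growth rate.
growth_rate = max(abs(np.linalg.eigvals(L)))
```

In the Bayesian version described in the abstract, the entries of `L` would carry prior distributions from the literature and be updated against the survey and harvest data via MCMC.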
6.
A. M. Abd El-Raheem 《Journal of Statistical Computation and Simulation》2019,89(16):3075-3104
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is formed via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to determine the optimal stress level. A real data set illustrates the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
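Of the interval methods mentioned, the percentile bootstrap is the simplest to sketch. The example below uses a plain half-normal sample and its closed-form scale MLE as a stand-in for the GHN model (the distribution, sample size, and scale value are assumptions for illustration, and censoring is ignored):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical complete (uncensored) lifetimes from a half-normal
# distribution with scale 2.0, standing in for the GHN model.
data = np.abs(rng.normal(scale=2.0, size=200))

def scale_mle(x):
    # Closed-form MLE of the half-normal scale parameter.
    return np.sqrt(np.mean(x**2))

# Percentile bootstrap: refit the estimator on resampled data sets
# and read the interval off the empirical quantiles.
boot = np.array([
    scale_mle(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

With censored GHN data the resampling scheme and the (EM-based) refit would be more involved, but the percentile step at the end is the same.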
7.
8.
Hadi Emami 《Communications in Statistics - Theory and Methods》2020,49(8):1793-1800
In this article we extend the minimax estimation approach for general linear models to semiparametric linear models in which the parameters are simultaneously constrained by an ellipsoid and by linear restrictions. Combining sample information with the prior constraints, the minimax estimator is obtained and compared with the partial least squares estimator by theoretical and simulation methods.
9.
Keisuke Himoto 《Risk analysis》2020,40(6):1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has been resolved by pooling data indiscriminately collected from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) by a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regions and seasons has long been desired in the modeling of post-earthquake ignition probabilities, but it has not been properly handled by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework for coping with this problem, which in turn enhances the statistical reliability and stability of estimated post-earthquake ignition probabilities.
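The pooled-versus-independent-versus-hierarchical contrast can be caricatured without MCMC: shrinking each event's rate toward the pooled rate by a fixed pseudo-count mimics what a hierarchical prior does adaptively, pulling hardest on events with the least data. All counts below are invented for illustration:

```python
# Hypothetical (ignitions, exposed buildings) counts for three earthquakes.
events = {
    "A": (40, 10000),
    "B": (12, 5000),
    "C": (2, 400),   # few data: the independent estimate 2/400 is unstable
}

# Pooled model: one rate for all events.
pooled = sum(k for k, n in events.values()) / sum(n for k, n in events.values())

# Shrinkage compromise: blend each event's own rate with the pooled rate.
# The fixed pseudo-count stands in for the adaptive strength of a
# hierarchical prior; a real hierarchical model would learn it from the data.
pseudo_n = 2000
shrunk = {
    name: (k + pseudo_n * pooled) / (n + pseudo_n)
    for name, (k, n) in events.items()
}
```

Event C's estimate moves most of the way toward the pooled rate while data-rich event A barely moves, which is exactly the robustness-to-sparse-events behavior the abstract attributes to the hierarchical model.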
10.
Multiparameter evidence synthesis in epidemiology and medical decision-making: current approaches (total citations: 1; self-citations: 0; cited by others: 1)
A. E. Ades A. J. Sutton 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2006,169(1):5-35
Summary. Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.