1,476 query results found; search time: 15 ms
1.
Stephen J. Ruberg Frank E. Harrell Jr. Margaret Gamalo-Siebers Lisa LaVange J. Jack Lee Karen Price 《The American statistician》2019,73(1):319-327
Abstract: The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments reach patients, when they receive them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
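The prior-borrowing idea in this abstract can be illustrated with a minimal conjugate normal-normal sketch. All numbers below are hypothetical (they do not come from the article), and the normal prior/likelihood forms are simplifying assumptions chosen so the posterior probability has a closed form:

```python
import math

def posterior_prob_effect(prior_mean, prior_sd, est, se, threshold=0.0):
    """Conjugate normal-normal update: combine a prior summarizing
    earlier-trial evidence with the Phase 3 estimate, then return the
    posterior probability P(treatment effect > threshold)."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / se**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * est)
    z = (post_mean - threshold) / math.sqrt(post_var)
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical numbers: earlier trials suggest an effect of 0.30 (SD 0.20);
# the Phase 3 trial observes 0.25 (SE 0.10).
p = posterior_prob_effect(0.30, 0.20, 0.25, 0.10)
print(f"P(effect > 0) = {p:.3f}")
```

The output is a direct probability statement about the magnitude of effect, which is the kind of quantity the authors argue is more useful for decision making than a p-value.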
2.
David R. Bickel 《Communications in Statistics - Theory and Methods》2020,49(11):2703-2712
Abstract: Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
3.
Keisuke Himoto 《Risk analysis》2020,40(6):1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has commonly been resolved by pooling data collected indiscriminately from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. The present study therefore analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata-Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) by a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regional and seasonal characteristics has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly considered in existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework to cope with this problem effectively, which consequently enhances the statistical reliability and stability of estimated post-earthquake ignition probabilities.
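The pooled/independent/hierarchical contrast described above can be sketched without a full MCMC. The snippet below is not the article's model; it is a moment-based partial-pooling stand-in with entirely hypothetical counts, where the prior strength `kappa` is an assumed constant standing in for the estimated between-earthquake variability:

```python
def shrinkage_rates(events, exposures):
    """Contrast three estimators of per-earthquake ignition rates:
    - independent: each earthquake uses only its own data;
    - pooled: all earthquakes share a single rate;
    - partial pooling: shrink each rate toward the pooled rate, with
      stronger shrinkage for earthquakes with less data (a simple
      stand-in for the full hierarchical posterior)."""
    pooled = sum(events) / sum(exposures)
    independent = [e / n for e, n in zip(events, exposures)]
    # kappa is a hypothetical prior strength; in a real hierarchical
    # model it would be estimated from the between-quake variability.
    kappa = 50.0
    partial = [(e + kappa * pooled) / (n + kappa)
               for e, n in zip(events, exposures)]
    return independent, pooled, partial

# Hypothetical counts: ignitions per number of damaged buildings,
# for one data-rich, one medium, and one data-poor earthquake.
events = [60, 3, 1]
exposures = [10000, 500, 100]
ind, pooled, part = shrinkage_rates(events, exposures)
```

Data-poor earthquakes are pulled strongly toward the pooled rate while data-rich ones barely move, which mirrors the robustness pattern the abstract reports for the hierarchical model.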
4.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error from the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour (markedly wrong outcomes of the estimates occur only sufficiently rarely). The price for this protection will clearly be some loss of detection power when out of control. A change point comes into view as soon as this loss can be made sufficiently small.
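The "huge sample sizes" problem can be made concrete with a small simulation. This is an illustration of the general point, not the article's procedure: a one-sided non-parametric control limit is taken as an empirical extreme quantile of a Phase I sample, and its variability across repeated samples is compared at two sample sizes (both chosen arbitrarily here):

```python
import random

def empirical_upper_limit(sample, p=0.999):
    """Non-parametric upper control limit: the empirical p-quantile of
    an in-control (Phase I) sample, taken as an order statistic."""
    s = sorted(sample)
    k = min(len(s) - 1, int(p * len(s)))
    return s[k]

random.seed(1)
# With n = 200 the 0.999 quantile is essentially the sample maximum, so
# the estimated limit varies wildly between Phase I samples; with
# n = 50,000 it stabilizes -- the estimation-step problem in the text.
for n in (200, 50_000):
    limits = [empirical_upper_limit([random.gauss(0, 1) for _ in range(n)])
              for _ in range(20)]
    print(n, round(max(limits) - min(limits), 2))
```

The spread of the estimated limit shrinks dramatically at the larger sample size, which is exactly why uncorrected non-parametric charts demand far more in-control data than parametric ones.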
5.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays some role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions are given, on the domain of a prevision, implying the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of "preservation of coherence". The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent, in de Finetti's sense.
Research partially supported by: M.U.R.S.T. 40% "Problemi di inferenza pura".
6.
Assessing accuracy of a continuous screening test in the presence of verification bias (Cited by: 1; self-citations: 1, citations by others: 0)
Todd A. Alonzo Margaret Sullivan Pepe 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(1):173-190
Summary. In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.
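The reweighting idea mentioned in this abstract can be sketched as inverse-probability weighting. The code below is a simplified illustration, not the authors' estimator: verification probabilities are assumed known by design, and the tiny data set is invented:

```python
def ipw_rates(scores, verified, disease, verify_prob, threshold):
    """Inverse-probability-weighted true/false positive rates for a
    continuous test under verification bias: each verified subject is
    weighted by 1/P(verified), so subjects verified at low rates
    (typically screen-negatives) count proportionally more."""
    tp = fp = pos = neg = 0.0
    for s, v, d, p in zip(scores, verified, disease, verify_prob):
        if not v:
            continue            # disease status unknown, skip
        w = 1.0 / p
        if d:
            pos += w
            if s > threshold:
                tp += w
        else:
            neg += w
            if s > threshold:
                fp += w
    return tp / pos, fp / neg   # (TPR, FPR)

# Hypothetical cohort: screen-positives always verified, screen-negatives
# verified half the time, so unweighted rates would be biased.
scores      = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]
verified    = [True, True, True, False, True, True]
disease     = [1, 0, 1, None, 1, 0]
verify_prob = [1.0, 1.0, 0.5, 0.5, 1.0, 0.5]
tpr, fpr = ipw_rates(scores, verified, disease, verify_prob, threshold=0.5)
```

Weighting up the rarely verified subjects restores the mix of screen-positives and screen-negatives that full verification would have produced.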
7.
Marco Bee 《Statistical Methods and Applications》2005,14(1):127-141
In this article we provide a rigorous treatment of one of the central statistical issues of credit risk management. Given K-1 rating categories, the rating of a corporate bond over a certain horizon may either stay the same or change to one of the remaining K-2 categories; in addition, it is usually the case that the rating of some bonds is withdrawn during the time interval considered in the analysis. When estimating transition probabilities, we thus have to consider a K-th category, called withdrawal, which contains (partially) missing data. We show how maximum likelihood estimation can be performed in this setup; whereas in discrete time our solution gives rigorous support to a solution often used in applications, in continuous time the maximum likelihood estimator of the transition matrix, computed by means of the EM algorithm, represents a significant improvement over existing methods.
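The discrete-time solution the abstract refers to can be sketched in a few lines. This is an assumed reading of "the solution often used in applications" (drop withdrawals and renormalize, which is the MLE under a non-informative withdrawal mechanism); the counts are invented:

```python
def transition_matrix(counts):
    """Discrete-time ML transition probabilities when the last column of
    `counts` holds withdrawals (missing end-of-period ratings): drop the
    withdrawal column and row-normalize over the remaining counts."""
    P = []
    for row in counts:
        observed = row[:-1]               # exclude withdrawal column
        total = sum(observed)
        P.append([c / total for c in observed])
    return P

# Hypothetical two-rating example; columns: stayed in A, moved to B,
# withdrawn during the period.
counts = [
    [80, 10, 10],   # bonds starting in A
    [5, 85, 10],    # bonds starting in B
]
P = transition_matrix(counts)
```

The continuous-time case treated in the article is harder because withdrawals censor the exact transition times, which is where the EM algorithm comes in.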
8.
Kepher Henry Makambi 《Statistical Methods and Applications》2002,11(1):127-138
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to too many liberal decisions. In this article, two test procedures are proposed which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level compared with the standard procedure.
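The contrast between the normal and t references can be sketched as follows. The statistic below is the standard inverse-variance weighted one; the per-center effects and variances are hypothetical, and referring it to a t quantile on k-1 degrees of freedom is a simplified stand-in for the article's two procedures:

```python
import math

def meta_t_statistic(effects, variances):
    """Standard inverse-variance weighted test statistic for a common
    treatment effect across k centers, returned together with k-1, the
    degrees of freedom for a t reference distribution."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    est = sum(w * y for w, y in zip(weights, effects)) / wsum
    t = est / math.sqrt(1.0 / wsum)
    return t, len(effects) - 1

# Hypothetical per-center effect estimates and variances (k = 5).
effects = [0.4, 0.1, 0.3, 0.5, 0.2]
variances = [0.04, 0.05, 0.04, 0.06, 0.05]
t, df = meta_t_statistic(effects, variances)
# Reject at the 5% level if |t| exceeds the two-sided t quantile on
# df = 4 (2.776), rather than the normal quantile 1.96 -- the t
# reference is stricter, which is what curbs the liberal decisions.
```

With few centers the t critical value is much larger than 1.96, so the t-based test rejects less readily, pulling the attained significance level back toward the nominal one.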
9.
This paper studies the effect of tunable optical transceivers and wavelength converters on the performance of survivable WDM networks under dynamic traffic. Through simulations on the NSFNET, CERNET, and MESH-TORUS networks, the influence of the two devices on WDM network performance is further examined in terms of network blocking probability. The results show that, in a survivable WDM network with dynamic traffic, both tunable optical transceivers and wavelength converters significantly improve network performance, but wavelength converters improve it more than tunable optical transceivers do.
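As a side illustration of the blocking-probability metric used in this study, the classical Erlang B recursion gives the blocking probability of a single link. This is not the paper's simulation; reading `channels` as the number of wavelengths assumes a link with full wavelength conversion, and the traffic figures are hypothetical:

```python
def erlang_b(load, channels):
    """Erlang B blocking probability via the stable recursion
    B(0) = 1;  B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a is the offered load in Erlangs. For a WDM link with full
    wavelength conversion, `channels` is the number of wavelengths."""
    b = 1.0
    for n in range(1, channels + 1):
        b = load * b / (n + load * b)
    return b

# A 16-wavelength link offered 10 Erlangs of dynamic traffic.
p_block = erlang_b(10.0, 16)
```

Adding wavelengths (or, in a network, converters that let any free wavelength be used) drives the blocking probability down sharply, which is the qualitative effect the simulations quantify.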
10.
Structural pattern recognition is applied to build combination sequences of multi-period moving-average patterns. By transforming and analyzing historical stock market data, the profit and loss probabilities corresponding to each pattern combination sequence are examined, yielding a new securities investment analysis method: moving-average pattern combination forecasting. Its advantages are that it can partially compensate for the lag and uncertainty of traditional technical analysis, adjust the relevant parameters promptly as the market changes, effectively improve the success rate of prediction, and be easily recognized by computer.
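The pattern-tallying idea can be sketched in miniature. The code below is a simplified stand-in for the method described above: it encodes only one pattern dimension (the ordering of a short versus a long moving average, with arbitrarily chosen periods) and tallies the empirical probability that the next price move is up:

```python
from collections import defaultdict

def pattern_win_rates(prices, short=3, long=5):
    """Encode each day by the ordering of short- vs long-period simple
    moving averages (the 'pattern'), then tally how often the next
    day's price move is up for each pattern."""
    def sma(i, n):
        return sum(prices[i - n + 1:i + 1]) / n
    stats = defaultdict(lambda: [0, 0])   # pattern -> [ups, total]
    for i in range(long - 1, len(prices) - 1):
        pattern = 'up' if sma(i, short) > sma(i, long) else 'down'
        tally = stats[pattern]
        tally[1] += 1
        if prices[i + 1] > prices[i]:
            tally[0] += 1
    return {p: ups / total for p, (ups, total) in stats.items()}
```

The full method would use several moving-average periods, so each day is encoded by a richer combination symbol, and the tallied probabilities are what guide the buy/sell decision.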