Sort order: 2,406 results found (search time: 15 ms)
1.
Stephen J. Ruberg Frank E. Harrell Jr. Margaret Gamalo-Siebers Lisa LaVange J. Jack Lee Karen Price 《The American statistician》2019,73(1):319-327
Abstract: The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them, and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points to deficiencies of the frequentist paradigm as a cause. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
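As a sketch of the kind of probability statement the authors advocate (not their actual method), the following uses a normal-normal conjugate update in which evidence from an earlier trial serves as the prior for a Phase 3 estimate. The function name and all numbers are hypothetical.

```python
import math

def posterior_prob_positive(prior_mean, prior_se, est, se):
    """Normal-normal conjugate update: combine a prior (e.g. synthesized
    Phase 2 evidence) with a new trial's estimate, then return the
    posterior probability that the treatment effect is positive."""
    w_prior = 1.0 / prior_se**2          # precision of the prior
    w_data = 1.0 / se**2                 # precision of the new trial
    post_mean = (w_prior * prior_mean + w_data * est) / (w_prior + w_data)
    post_se = math.sqrt(1.0 / (w_prior + w_data))
    z = post_mean / post_se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical numbers: Phase 2 suggests an effect of 0.3 (SE 0.2);
# Phase 3 observes 0.25 (SE 0.1).
p = posterior_prob_positive(0.3, 0.2, 0.25, 0.1)
```

The resulting `p` is a direct probability statement about the treatment effect, rather than a binary pass/fail verdict against a 0.05 threshold.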
2.
Motivated by the debt-to-equity swap problem jointly faced by China's government, enterprises, and banks and other financial institutions, and building on the idea of debt renegotiation, we develop a partial debt-equity swap model, price the firm's securities, and examine how a debt-to-equity swap affects firm value, bankruptcy probability, bankruptcy loss costs, and capital structure; we also give sufficient conditions under which banks and other creditors are willing to swap debt for equity. The results show that, under a loan with an ex ante bankruptcy-liquidation covenant, an ex post full debt-to-equity swap always increases the firm's equity value but does not necessarily increase the value of its debt. Only when creditors' bargaining power satisfies certain conditions are they willing to choose a swap ex post, achieving a Pareto improvement and raising social welfare. Second, within a certain range of shareholders' bargaining power, a partial swap can increase firm value, and the optimal proportion of coupon converted to equity increases with the riskiness of the firm's assets. Third, a debt-to-equity swap can reduce the firm's bankruptcy risk and bankruptcy loss costs, but it also raises the risk premium on its debt. Finally, as shareholders' bargaining power strengthens, the optimal negotiated swap proportion and leverage both decrease, while the debt risk premium increases. These results offer theoretical and practical guidance on how China's government, enterprises, and banks can implement debt-to-equity swaps.
3.
David R. Bickel 《Communications in Statistics - Theory and Methods》2020,49(11):2703-2712
Abstract: Confidence sets, p-values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared with others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
4.
We study the asymptotic behavior of the marginal expected shortfall when the two random variables are asymptotically independent but positively associated, as modeled by the so-called tail dependence coefficient. We construct an estimator of the marginal expected shortfall, which is shown to be asymptotically normal. The finite-sample performance of the estimator is investigated in a small simulation study. The method is also applied to estimate the expected amount of rainfall at a weather station given a once-in-100-years rainfall at another weather station nearby.
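A minimal sketch of the empirical estimator idea (not the authors' asymptotic construction): average X over the observations where Y exceeds its empirical tail quantile. The data-generating process and the function name are illustrative assumptions.

```python
import random

def marginal_expected_shortfall(x, y, p):
    """Empirical marginal expected shortfall: the average of X over the
    observations where Y exceeds its empirical (1 - p) quantile."""
    pairs = sorted(zip(y, x))              # sort by the conditioning variable Y
    k = max(1, int(len(pairs) * p))        # number of tail observations
    tail = pairs[-k:]                      # the k largest values of Y
    return sum(xi for _, xi in tail) / k

random.seed(0)
# Positively associated but not identical: X shares a common shock with Y.
z = [random.gauss(0, 1) for _ in range(10_000)]
y = [zi + random.gauss(0, 1) for zi in z]
x = [zi + random.gauss(0, 1) for zi in z]
mes = marginal_expected_shortfall(x, y, 0.01)   # mean of X given Y in its top 1%
```

Because of the positive association through the shared shock, the estimate is well above the unconditional mean of X (which is zero here).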
5.
Keisuke Himoto 《Risk analysis》2020,40(6):1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributable to the scarce source data. The data scarcity problem has been addressed by pooling data collected indiscriminately from multiple earthquakes. However, this approach neglects the inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. The present study therefore analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) using a hierarchical Bayesian approach. As the ignition causes share a certain commonality across earthquakes, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution by a Markov chain Monte Carlo simulation. The results of the hierarchical model were compared with those of pooled and independent models. Although the pooled and hierarchical models were both robust relative to the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regions and seasons has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly addressed by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework for coping with this problem, thereby enhancing the statistical reliability and stability of estimated post-earthquake ignition probabilities.
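The shrinkage effect that distinguishes the hierarchical model from the pooled and independent ones can be illustrated with a simple partial-pooling sketch. This is an empirical stand-in for the full MCMC analysis, not the paper's model; the counts and the `strength` parameter are hypothetical.

```python
def partial_pool(counts, totals, strength):
    """Shrink per-earthquake ignition rates toward the pooled rate.
    `strength` plays the role of a prior pseudo-sample size: larger values
    pull sparse-data earthquakes harder toward the pooled estimate,
    mimicking the effect of the hierarchical model's common prior."""
    pooled = sum(counts) / sum(totals)
    return [(c + strength * pooled) / (n + strength)
            for c, n in zip(counts, totals)]

# Hypothetical ignition counts / exposed buildings for five earthquakes.
counts = [60, 4, 9, 110, 15]
totals = [20_000, 3_000, 4_000, 50_000, 8_000]
rates = partial_pool(counts, totals, strength=5_000)
```

Each shrunk rate lies between that earthquake's raw rate and the pooled rate, so earthquakes with few observations borrow strength from the rest without being flattened to a single common value.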
6.
Laurent Gardes Stéphane Girard Gilles Stupfler 《Scandinavian Journal of Statistics》2020,47(3):922-949
The conditional tail expectation (CTE) is an indicator of tail behavior that takes into account both the frequency and magnitude of a tail event. However, the asymptotic normality of its empirical estimator requires that the underlying distribution possess a finite variance; this can be a strong restriction in actuarial and financial applications. A valuable alternative is the median shortfall (MS), although it only gives information about the frequency of a tail event. We construct a class of tail Lp-medians encompassing the MS and the CTE. For p in (1,2), a tail Lp-median depends on both the frequency and magnitude of tail events, and its empirical estimator is, within the range of the data, asymptotically normal under a condition weaker than a finite variance. We extrapolate this estimator, along with an alternative technique, to extreme levels using the heavy-tailed framework. The estimators are showcased in a simulation study and on real fire insurance data.
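A rough empirical sketch of a tail Lp-median, assuming it is the minimizer of the sum of p-th power absolute deviations over the exceedances of a high empirical quantile (the function and its details are illustrative, not the authors' estimator):

```python
def tail_lp_median(data, level, p, iters=200):
    """Empirical tail Lp-median: the value m minimizing sum(|x - m|**p)
    over the exceedances of the empirical `level` quantile.
    p = 1 recovers the median of the tail (median shortfall, MS);
    p = 2 recovers the mean of the tail (conditional tail expectation, CTE).
    Minimized by ternary search, valid because the objective is convex
    for p >= 1."""
    xs = sorted(data)
    tail = xs[int(len(xs) * level):]       # observations beyond the quantile
    lo, hi = tail[0], tail[-1]
    cost = lambda m: sum(abs(x - m) ** p for x in tail)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

With p = 2 this returns the tail mean, with p = 1 the tail median, and intermediate p interpolates between the frequency-only MS and the magnitude-sensitive CTE.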
7.
Olivier Cappé, Randal Douc, Eric Moulines & Christian Robert 《Scandinavian Journal of Statistics》2002,29(4):615-635
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
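The importance-sampling identity behind MCML can be sketched on a toy latent variable model where the exact marginal likelihood is available for comparison. The model and every name below are illustrative assumptions, not the paper's setting.

```python
import math, random

def mcml_log_lik_ratio(y, theta, psi, m=200_000, rng=random):
    """Monte Carlo estimate of log L(theta) - log L(psi) for a toy latent
    variable model: latent z ~ N(theta, 1), observed y | z ~ N(z, 1).
    Draws z_i from the complete-data conditional under the reference value
    psi, p_psi(z | y) = N((y + psi)/2, 1/2), and averages the importance
    ratios f_theta(y, z) / f_psi(y, z)."""
    total = 0.0
    for _ in range(m):
        z = rng.gauss((y + psi) / 2, math.sqrt(0.5))
        # The ratio f_theta / f_psi reduces to the ratio of latent densities.
        total += math.exp(-((z - theta) ** 2 - (z - psi) ** 2) / 2)
    return math.log(total / m)

random.seed(1)
y, theta, psi = 1.0, 0.5, 0.0
approx = mcml_log_lik_ratio(y, theta, psi)
# Exact answer: marginally y ~ N(theta, 2), so the log-ratio is available
# in closed form for this toy model.
exact = -((y - theta) ** 2 - (y - psi) ** 2) / 4
```

With one observation the approximation is accurate for moderate simulation sizes; the paper's point is that with many observations the required number of simulations grows rapidly unless the reference value is itself consistent.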
8.
Philippe Huber Elvezio Ronchetti Maria-Pia Victoria-Feser 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):893-908
Summary. Generalized linear latent variable models (GLLVMs), as defined by Bartholomew and Knott, enable modelling of relationships between manifest and latent variables. They extend structural equation modelling techniques, which are powerful tools in the social sciences. However, because of the complexity of the log-likelihood function of a GLLVM, an approximation such as numerical integration must be used for inference. This can drastically limit the number of variables in the model and can lead to biased estimators. We propose a new estimator for the parameters of a GLLVM, based on a Laplace approximation to the likelihood function, which can be computed even for models with a large number of variables. The new estimator can be viewed as an M-estimator, leading to readily available asymptotic properties and correct inference. A simulation study shows its excellent finite-sample properties, in particular when compared with a well-established approach such as LISREL. A real data example on the measurement of wealth for the computation of multidimensional inequality is analysed to highlight the importance of the methodology.
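The Laplace approximation underlying the proposed estimator can be illustrated on a one-dimensional integral (a generic sketch, not the GLLVM likelihood itself): expand the log-integrand to second order around its mode and integrate the resulting Gaussian.

```python
import math

def laplace_log_integral(g, dg, ddg, z0, steps=100):
    """Laplace approximation to log(integral of exp(g(z)) dz):
    locate the mode of g by Newton's method, then apply
    log(integral) ~ g(z_hat) + 0.5*log(2*pi) - 0.5*log(-g''(z_hat))."""
    z = z0
    for _ in range(steps):
        z -= dg(z) / ddg(z)               # Newton step toward the mode
    return g(z) + 0.5 * math.log(2 * math.pi) - 0.5 * math.log(-ddg(z))

# Check on a case with a known answer: g(z) = -z**2/2 integrates to
# sqrt(2*pi), so the log-integral is 0.5*log(2*pi). The Laplace
# approximation is exact for Gaussian integrands.
approx = laplace_log_integral(lambda z: -z * z / 2,
                              lambda z: -z,
                              lambda z: -1.0,
                              z0=3.0)
```

In a GLLVM the same idea is applied per observation to the integral over the latent variables, replacing the costly numerical integration that otherwise limits the number of variables.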
9.
Summary: The H-family of distributions, or H-distributions, introduced by Tukey (1960, 1977), are generated by a single transformation of the standard normal distribution and allow for leptokurtosis represented by the parameter h. Alternatively, Haynes et al. (1997) generated leptokurtic distributions by applying the K-transformation to the normal distribution. In this study we propose a third transformation, the so-called J-transformation, and derive some of its properties. Moreover, so-called elongation generating functions (EGFs) are introduced. By means of EGFs we are able to visualize the strength of tail elongation and to construct new transformations. Finally, we compare the three transformations with respect to their goodness-of-fit on financial return data.
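Tukey's h-transformation referenced above is commonly written T(z) = z·exp(h·z²/2); a minimal sketch follows (only h is shown, since the exact forms of the K- and J-transformations are not given here).

```python
import math

def h_transform(z, h):
    """Tukey's h-transformation: T(z) = z * exp(h * z**2 / 2).
    Applied to a standard normal input, h > 0 stretches the tails
    (|T(z)| > |z| for z != 0), producing a leptokurtic distribution;
    h = 0 leaves the normal unchanged."""
    return z * math.exp(h * z * z / 2)
```

Transforming standard normal draws through `h_transform` with, say, h = 0.1 yields heavier-tailed samples of the kind used to model financial returns.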
10.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, under both normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step; otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly smaller (but still larger than the customary range). These corrections serve to control the in-control behaviour (markedly wrong outcomes of the estimates occur only sufficiently rarely). The price for this protection is clearly some loss of detection power when the process is out of control. A change point comes into view as soon as this loss can be made sufficiently small.
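One way to see why non-parametric charts need such large reference samples: a control limit with per-point exceedance probability p must be estimated as an extreme order statistic, which is only meaningful when n >= 1/p. The sketch below is illustrative, not the corrected charts proposed in the paper.

```python
import math

def nonparametric_ucl(sample, p):
    """Non-parametric upper control limit: the empirical (1 - p) quantile,
    taken as an order statistic of the Phase I reference sample. With a
    target exceedance probability p per charted point, the relevant order
    statistic only exists when n >= 1/p, which is why non-parametric
    charts demand large reference samples."""
    n = len(sample)
    if n * p < 1:
        raise ValueError("sample too small for this exceedance probability")
    xs = sorted(sample)
    k = math.ceil((1 - p) * n) - 1        # 0-based index of the order statistic
    return xs[k]
```

For a typical in-control exceedance probability of p = 0.001, at least 1,000 reference observations are needed before the limit can even be read off the data, and far more before its stochastic error becomes acceptable.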