Results by access type: paid full text, 2,332; free, 89; domestic free, 33.
Results by discipline: management, 222; ethnology, 7; demography, 58; collected works, 85; theory and methodology, 99; general, 520; sociology, 29; statistics, 1,434.
Results by year: 2023, 17; 2022, 17; 2021, 30; 2020, 49; 2019, 65; 2018, 75; 2017, 125; 2016, 72; 2015, 65; 2014, 87; 2013, 497; 2012, 185; 2011, 99; 2010, 106; 2009, 80; 2008, 91; 2007, 96; 2006, 94; 2005, 80; 2004, 60; 2003, 63; 2002, 58; 2001, 54; 2000, 37; 1999, 39; 1998, 22; 1997, 22; 1996, 16; 1995, 23; 1994, 17; 1993, 15; 1992, 21; 1991, 14; 1990, 4; 1989, 5; 1988, 10; 1987, 7; 1986, 5; 1985, 6; 1984, 6; 1983, 6; 1982, 2; 1981, 4; 1980, 3; 1979, 1; 1978, 1; 1977, 2; 1975, 1.
A total of 2,454 query results were found (search time: 31 ms).
1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence lays blame on deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
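The synthesis argued for above can be made concrete with a minimal sketch, assuming a normal–normal conjugate setup: evidence from earlier trials forms the prior on the treatment effect, the Phase 3 estimate updates it, and the posterior yields direct probability statements about the magnitude of the effect. All numbers below are hypothetical illustrations, not data from any trial.

```python
from scipy.stats import norm

# Hypothetical prior synthesized from earlier trials: effect ~ N(2.0, 1.5^2)
prior_mean, prior_sd = 2.0, 1.5
# Hypothetical Phase 3 summary: observed effect 1.6 with standard error 0.8
obs_effect, obs_se = 1.6, 0.8

# Conjugate normal-normal update via precision weighting
prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_se**2
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs_effect) / post_prec
post_sd = post_prec ** -0.5

# Direct probability statements about the magnitude of the treatment effect
p_positive = 1 - norm.cdf(0.0, post_mean, post_sd)
p_meaningful = 1 - norm.cdf(1.0, post_mean, post_sd)  # 1.0 = assumed clinical threshold
print(f"posterior: N({post_mean:.2f}, {post_sd:.2f}^2)")
print(f"P(effect > 0)   = {p_positive:.3f}")
print(f"P(effect > 1.0) = {p_meaningful:.3f}")
```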
2.
Addressing the debt-to-equity swap problem of common concern to China's government, enterprises, and financial institutions such as banks, this paper builds a partial debt-for-equity swap model based on the idea of debt renegotiation, computes the prices of the firm's securities, examines the effects of debt-to-equity swaps on firm value, bankruptcy probability, bankruptcy loss costs, and capital structure, and gives sufficient conditions under which banks and other creditors are willing to swap debt for equity. The results show that, for a loan with an ex ante bankruptcy-liquidation agreement, an ex post swap of all debt into equity always raises the firm's equity value but does not necessarily raise the bond value. Only when their bargaining power satisfies certain conditions are the firm's creditors willing to choose a debt-to-equity swap ex post, achieving a Pareto improvement and raising social welfare. Second, within a certain range of the shareholders' bargaining power, a partial debt-to-equity swap can raise firm value, and the optimal proportion of coupon converted into equity increases with the riskiness of the firm's assets. Third, debt-to-equity swaps can reduce the firm's bankruptcy risk and bankruptcy loss costs, but they also raise the bond's risk premium. Finally, as shareholders' bargaining power strengthens, the optimal negotiated debt-conversion ratio and leverage both decrease, while the bond risk premium increases. These results provide a theoretical reference and practical guidance for how China's government, enterprises, and banks implement debt-to-equity swaps.
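The paper's bargaining model is not reproduced in the abstract, but one core mechanism it describes — swapping part of the debt for equity lowers the face value due at maturity and hence the bankruptcy probability — can be illustrated with a one-period Merton-style sketch. The geometric-Brownian-motion asset dynamics and all parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

V0, sigma, r, T = 100.0, 0.30, 0.03, 5.0   # assets, asset volatility, rate, horizon
F = 80.0                                    # face value of debt at maturity

def default_prob(face):
    # Risk-neutral P(V_T < face) when assets follow geometric Brownian motion
    d2 = (np.log(V0 / face) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(-d2)

# Converting a fraction of the debt into equity reduces the face value due,
# which lowers the bankruptcy probability -- one channel the abstract describes.
for swap_frac in (0.0, 0.25, 0.50):
    p = default_prob(F * (1 - swap_frac))
    print(f"swap {swap_frac:.0%} of debt -> bankruptcy probability {p:.3f}")
```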
3.
Risk Analysis, 2018, 38(9): 1988–2009
Harbor seals in Iliamna Lake, Alaska, are a small, isolated population, and one of only two freshwater populations of harbor seals in the world, yet little is known about their abundance or risk of extinction. Bayesian hierarchical models were used to estimate the abundance and trend of this population. Observational models were developed from aerial survey and harvest data, and they included effects of time of year and time of day on survey counts. Underlying models of abundance and trend were based on a Leslie matrix model that used prior information on vital rates from the literature. We developed three scenarios for variability in the priors and used them as part of a sensitivity analysis. The models were fitted using Markov chain Monte Carlo methods. The population production rate implied by the vital rate estimates was about 5% per year, very similar to the average annual harvest rate. After a period of growth in the 1980s, the population appears to be relatively stable at around 400 individuals. A population viability analysis assessed the risk of quasi-extinction, defined as any reduction to 50 animals or below in the next 100 years; this risk ranged from 1% to 3%, depending on the prior scenario. Although this risk is moderately low, the analysis does not include genetic or catastrophic environmental events, which may have affected the population in the past, so our results should be applied cautiously.
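A toy sketch of the Leslie-matrix core of the abundance model may help fix ideas. The three-stage structure and vital rates below are hypothetical placeholders chosen to give roughly the 5% annual production rate reported above; they are not the priors used in the study.

```python
import numpy as np

# Hypothetical 3-stage model: pup, juvenile, adult (females only)
L = np.array([
    [0.00, 0.00, 0.30],   # fecundity: female pups per adult female per year
    [0.60, 0.00, 0.00],   # pup -> juvenile survival
    [0.00, 0.80, 0.92],   # juvenile -> adult survival, adult survival
])

n = np.array([60.0, 80.0, 260.0])   # hypothetical initial stage abundances
harvest_rate = 0.05                 # ~5%/yr, matching the production rate above

for year in range(25):
    n = L @ n * (1 - harvest_rate)  # project one year, then remove the harvest

lam = np.max(np.real(np.linalg.eigvals(L)))  # asymptotic annual growth rate
print(f"total after 25 yr: {n.sum():.0f} seals, lambda = {lam:.3f}")
```

With these placeholder rates the dominant eigenvalue is about 1.05, so a 5% harvest holds the projected population roughly stable near 400 animals, consistent with the abstract's qualitative finding.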
4.
Abstract

Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
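As a hedged illustration of the idea, the sketch below adds a "prior likelihood" term favoring the simpler member of a parametric family to the ordinary log-likelihood, pulling the maximum likelihood estimate toward simplicity. The skew-normal family, the Gaussian weight on the skewness parameter, and the tuning constant tau are assumptions for illustration, not the authors' construction.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skewnorm

rng = np.random.default_rng(3)
x = skewnorm.rvs(a=1.0, size=80, random_state=rng)  # mildly skewed sample

def neg_adjusted_loglik(params, tau):
    a, loc, log_scale = params
    loglik = skewnorm.logpdf(x, a, loc, np.exp(log_scale)).sum()
    log_prior_lik = -((a / tau) ** 2)  # hypothetical weight favoring a = 0 (normality)
    return -(loglik + log_prior_lik)

x0 = [0.5, 0.0, 0.0]
plain = minimize(neg_adjusted_loglik, x0, args=(np.inf,), method="Nelder-Mead")
adj = minimize(neg_adjusted_loglik, x0, args=(2.0,), method="Nelder-Mead")
print(f"unadjusted skewness estimate: {plain.x[0]:.3f}")
print(f"adjusted skewness estimate:   {adj.x[0]:.3f}  (pulled toward 0)")
```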
5.
Keisuke Himoto, Risk Analysis, 2020, 40(6): 1124–1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributed to the scarce source data. The data scarcity problem has conventionally been addressed by indiscriminately pooling data collected from multiple earthquakes. However, this approach neglects inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata-Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) by a hierarchical Bayesian approach. As the ignition causes share a certain commonality across earthquakes, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were comparatively analyzed with those of pooled and independent models. Although the pooled and hierarchical models were both robust in comparison with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by the source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regional and seasonal characteristics has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly achieved by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework to cope with this problem effectively, which consequently enhances the statistical reliability and stability of estimates of post-earthquake ignition probabilities.
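The contrast among the pooled, independent, and hierarchical models can be sketched with made-up counts. The full model in the paper is fitted by MCMC; the snippet below substitutes a simple empirical-Bayes shrinkage toward the pooled rate to show the qualitative behavior: events with few data are pulled strongly toward the pooled estimate, while well-observed events keep rates close to their own.

```python
import numpy as np

# Made-up counts: ignitions and exposed buildings for five hypothetical quakes
ignitions = np.array([120.0, 8.0, 15.0, 260.0, 20.0])
exposed = np.array([90e3, 4e3, 9e3, 300e3, 25e3])

p_indep = ignitions / exposed               # independent: one rate per event
p_pooled = ignitions.sum() / exposed.sum()  # pooled: a single shared rate

# Hierarchical flavor via empirical-Bayes shrinkage toward the pooled rate;
# kappa is a hypothetical prior "sample size" controlling how hard we pool.
kappa = 2e4
p_hier = (ignitions + kappa * p_pooled) / (exposed + kappa)

for i in range(len(ignitions)):
    print(f"quake {i}: independent {p_indep[i]:.2e}, "
          f"hierarchical {p_hier[i]:.2e}, pooled {p_pooled:.2e}")
```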
6.
Kun Xie, Kaan Ozbay, Hong Yang, Di Yang, Risk Analysis, 2019, 39(6): 1342–1357
The widely used empirical Bayes (EB) and full Bayes (FB) methods for before-after safety assessment are sometimes limited because of the extensive data needs from additional reference sites. To address this issue, this study proposes a novel before-after safety evaluation methodology based on survival analysis and longitudinal data as an alternative to the EB/FB method. A Bayesian survival analysis (SARE) model with a random effect term to address the unobserved heterogeneity across sites is developed. The proposed survival analysis method is validated through a simulation study before its application. Subsequently, the SARE model is developed in a case study to evaluate the safety effectiveness of a recent red-light-running photo enforcement program in New Jersey. As demonstrated in the simulation and the case study, the survival analysis can provide valid estimates using only data from treated sites, and thus its results will not be affected by the selection of defective or insufficient reference sites. In addition, the proposed approach can take into account the censored data generated by the transition from the before period to the after period, which has not been previously explored in the literature. Using individual crashes as units of analysis, survival analysis can incorporate longitudinal covariates such as traffic volume and weather variation, and thus can explicitly account for potential temporal heterogeneity.
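A minimal frequentist stand-in for the survival model may clarify how censoring from the before-to-after transition enters the likelihood. The Weibull form, the simulated data, and the omission of the site random effect are all simplifying assumptions here; the paper's SARE model is Bayesian and richer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
after = rng.integers(0, 2, n)                   # 0 = before period, 1 = after
true_scale = np.exp(4.5 + 0.5 * after)          # treatment lengthens crash gaps
t = rng.weibull(1.2, n) * true_scale            # latent time to next crash
window = rng.uniform(50.0, 400.0, n)            # end of observation window
time = np.minimum(t, window)
event = (t <= window).astype(float)             # 1 = crash observed, 0 = censored

def neg_loglik(params):
    b0, b1, log_k = params
    k = np.exp(log_k)                           # Weibull shape
    lam = np.exp(b0 + b1 * after)               # Weibull scale per observation
    z = (time / lam) ** k
    # log-density terms for observed events, log-survivor term for everyone
    ll = event * (np.log(k) - np.log(lam) + (k - 1) * np.log(time / lam)) - z
    return -ll.sum()

fit = minimize(neg_loglik, x0=[4.5, 0.0, 0.0], method="Nelder-Mead")
print(f"estimated treatment effect on log time-to-crash: {fit.x[1]:.3f}")
```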
7.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly smaller (but still larger than the customary range). These corrections serve to control the in-control behaviour: markedly wrong outcomes of the estimates occur only sufficiently rarely. The price for this protection is clearly some loss of detection power when the process is out of control. A change point comes into view as soon as this loss can be made sufficiently small.
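The instability at issue can be seen in a few lines of simulation: the upper control limit for a false-alarm rate of 0.001 is an extreme quantile, and its non-parametric estimate (essentially an order statistic) is far noisier than the normal-theory estimate at moderate in-control sample sizes. The sample size and false-alarm rate below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, m, reps = 0.001, 1000, 2000    # false-alarm rate, in-control sample size
np_ucl, par_ucl = [], []

for _ in range(reps):
    x = rng.normal(0.0, 1.0, m)
    np_ucl.append(np.quantile(x, 1 - alpha))          # non-parametric limit
    par_ucl.append(x.mean() + 3.090 * x.std(ddof=1))  # normal-theory limit

print(f"non-parametric UCL: mean {np.mean(np_ucl):.3f}, sd {np.std(np_ucl):.3f}")
print(f"normal-theory UCL:  mean {np.mean(par_ucl):.3f}, sd {np.std(par_ucl):.3f}")
# The non-parametric limit is much more variable at this m, which is the
# instability the corrections above are designed to control.
```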
8.
Technological innovation is the driving force behind the development of manufacturing enterprises and the core of their competitiveness. Cultivating an innovative spirit among employees, fostering a corporate culture conducive to innovation, formulating innovation strategies suited to the development needs of manufacturing enterprises, and establishing management mechanisms that encourage innovation together with a system for evaluating innovation capability are the keys to improving the innovation capability of manufacturing enterprises.
9.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays a role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions on the domain of a prevision are given that imply the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of the coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of “preservation of coherence”. The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent in de Finetti's sense. Research partially supported by: M.U.R.S.T. 40% “Problemi di inferenza pura”.
10.
Summary. In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status at a higher rate in study subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves, and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency, and robustness to model misspecification. The proposed bias correction estimators are applied to data from a study of screening tests for neonatal hearing loss.
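A sketch of the reweighting idea for a continuous test: when verification of true disease status depends on the test value, inverse-probability weighting of the verified subjects recovers approximately unbiased true and false positive rates at a threshold, whereas the naive verified-only estimate is biased. The simulated data and logistic verification rule below are assumptions for illustration; the paper also develops imputation estimators and supporting theory.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
d = rng.integers(0, 2, n)                    # true disease status (simulated)
y = rng.normal(1.2 * d, 1.0)                 # continuous screening test result
p_verify = 1.0 / (1.0 + np.exp(-(y - 0.5)))  # verification depends on the test
v = rng.random(n) < p_verify                 # True = disease status verified

def tpr_fpr_ipw(c):
    # Weight each verified subject by 1 / P(verified) to undo the selection
    w = v / p_verify
    tpr = np.sum(w * d * (y > c)) / np.sum(w * d)
    fpr = np.sum(w * (1 - d) * (y > c)) / np.sum(w * (1 - d))
    return tpr, fpr

c = 0.6
naive_tpr = np.mean(y[v & (d == 1)] > c)     # verified-only estimate: biased up
tpr, fpr = tpr_fpr_ipw(c)
print(f"IPW: TPR {tpr:.3f}, FPR {fpr:.3f}; naive verified-only TPR {naive_tpr:.3f}")
```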