A total of 3,097 search results were found (search time: 0 ms), distributed as follows.

By access type:
  Paid full text: 3,015
  Free: 63
  Free (domestic): 19

By subject:
  Management: 259
  Ethnology: 3
  Demography: 35
  Collected works and series: 27
  Theory and methodology: 84
  General: 310
  Sociology: 56
  Statistics: 2,323

By publication year:
  2023: 13    2022: 15    2021: 19    2020: 55    2019: 115   2018: 131
  2017: 219   2016: 75    2015: 78    2014: 89    2013: 795   2012: 243
  2011: 75    2010: 87    2009: 81    2008: 106   2007: 107   2006: 98
  2005: 73    2004: 64    2003: 64    2002: 69    2001: 54    2000: 45
  1999: 49    1998: 31    1997: 27    1996: 23    1995: 18    1994: 19
  1993: 18    1992: 25    1991: 18    1990: 8     1989: 6     1988: 15
  1987: 7     1986: 8     1985: 12    1984: 8     1983: 10    1982: 6
  1981: 6     1980: 5     1979: 3     1978: 1     1977: 3     1975: 1
1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they reach them, and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence attributes part of the blame to deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
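To make the idea of using earlier trials as a prior concrete, here is a minimal sketch of a conjugate normal-normal analysis: summary results from earlier trials (hypothetical numbers throughout) form an informative, optionally discounted prior for the Phase 3 treatment effect, and the posterior yields the kind of probability statements described above. This is an illustration of the general approach, not the authors' specific method.

```python
import numpy as np
from scipy import stats

# Hypothetical prior evidence: pooled treatment-effect estimate from earlier trials.
prior_mean, prior_se = 1.8, 0.9           # prior ~ N(1.8, 0.9^2)
discount = 2.0                            # inflate the prior variance to down-weight history (assumed choice)
prior_var = discount * prior_se ** 2

# Hypothetical Phase 3 summary: estimated treatment effect and its standard error.
obs_effect, obs_se = 1.2, 0.5
obs_var = obs_se ** 2

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_effect / obs_var)
post = stats.norm(post_mean, np.sqrt(post_var))

# Probability statements about the magnitude of the treatment effect.
print(f"P(effect > 0)   = {1 - post.cdf(0.0):.3f}")
print(f"P(effect > 1.0) = {1 - post.cdf(1.0):.3f}")   # 1.0 = assumed clinically relevant threshold
print(f"95% credible interval: ({post.ppf(0.025):.2f}, {post.ppf(0.975):.2f})")
```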
2.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation-based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
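For readers unfamiliar with the construction, the sketch below inverts the empirical likelihood ratio test for a mean over a grid of candidate values, calibrated against the chi-squared(1) quantile, to obtain a nonparametric confidence interval. It uses simulated zero-inflated data and a textbook formulation, not the survey-weighted version studied by the authors.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                      # mu outside the convex hull of the data
    # Solve sum(d / (1 + lam * d)) = 0 for the Lagrange multiplier lam.
    eps = 1e-8
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_confidence_interval(x, level=0.95, grid_size=2000):
    """Invert the EL ratio test over a grid of candidate means."""
    cutoff = chi2.ppf(level, df=1)
    grid = np.linspace(x.min(), x.max(), grid_size)[1:-1]
    inside = [mu for mu in grid if el_log_ratio(x, mu) <= cutoff]
    return min(inside), max(inside)

rng = np.random.default_rng(0)
# Simulated zero-inflated sample: roughly 60% exact zeros, the rest lognormal.
n = 80
x = np.where(rng.random(n) < 0.6, 0.0, rng.lognormal(mean=1.0, sigma=0.8, size=n))
print("sample mean:", x.mean())
print("95% EL interval:", el_confidence_interval(x))
```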
3.
Merging information for semiparametric density estimation
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the mth is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. The combined data from all the samples lead to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
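To give a feel for how information is merged, the sketch below treats the two-sample case with a density ratio of the assumed form exp(alpha + beta1*x + beta2*x^2). It estimates the ratio parameters by logistic regression on the pooled data (a standard device for fitting density ratio models) and then forms weighted kernel density estimates in which every pooled observation contributes to each density. This is a simplified illustration of the general idea, not the exact estimator or bandwidth selection studied in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Two hypothetical samples; for normal data the log density ratio is quadratic in x,
# so the basis T(x) = (x, x^2) makes the density ratio model hold exactly.
x0 = rng.normal(0.0, 1.0, size=150)      # baseline sample
x1 = rng.normal(0.8, 1.3, size=100)      # second sample
pooled = np.concatenate([x0, x1])
label = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

# Logistic regression of the sample label on T(x); the intercept absorbs log(n1/n0)
# and the slopes estimate the ratio parameters.
design = sm.add_constant(np.column_stack([pooled, pooled ** 2]))
fit = sm.Logit(label, design).fit(disp=0)
p1 = fit.predict(design)                 # fitted P(observation came from sample 1 | x)

# Tilted weights: every pooled point gets a weight in each density estimate.
w0 = (1.0 - p1) / len(x0)
w1 = p1 / len(x1)

def weighted_kde(grid, data, weights, h):
    """Weighted Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - data[None, :]) / h
    return (weights[None, :] * np.exp(-0.5 * z ** 2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-4.0, 5.0, 200)
h = 1.06 * pooled.std() * len(pooled) ** (-1 / 5)   # rule-of-thumb bandwidth (assumed choice)
f0_hat = weighted_kde(grid, pooled, w0, h)          # baseline density, built from all 250 pooled points
f1_hat = weighted_kde(grid, pooled, w1, h)
print("estimates integrate to about 1:", np.trapz(f0_hat, grid).round(3), np.trapz(f1_hat, grid).round(3))
```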
4.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour: markedly wrong outcomes of the estimates occur only sufficiently rarely. The price for this protection will clearly be some loss of detection power when the process is out of control. A change point comes into view as soon as this loss can be made sufficiently small.
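The role of the in-control sample size can be made tangible with a small simulation. The sketch below (illustrative only, with an assumed Exp(1) in-control distribution) estimates a 0.001 upper control limit as an empirical quantile from m in-control observations and reports how much the chart's realized false-alarm rate then varies across repetitions; the instability at moderate m is exactly what the corrections discussed above are meant to control.

```python
import numpy as np

rng = np.random.default_rng(2)
target_far = 0.001                # nominal false-alarm rate per plotted point

# The in-control distribution is taken to be Exp(1) here, so the realized false-alarm
# rate of an upper control limit u is exactly exp(-u); in practice it is unknown.
for m in (200, 1_000, 10_000, 100_000):             # in-control (Phase I) sample size
    realized = []
    for _ in range(500):                            # repeat the estimation step
        phase1 = rng.standard_exponential(m)
        ucl = np.quantile(phase1, 1 - target_far)   # non-parametric upper control limit
        realized.append(np.exp(-ucl))               # actual false-alarm rate of this chart
    realized = np.array(realized)
    print(f"m = {m:>7,}: median FAR = {np.median(realized):.4f}, "
          f"central 90% range = ({np.quantile(realized, 0.05):.4f}, "
          f"{np.quantile(realized, 0.95):.4f})")
```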
5.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for the observed data and for future cases to be classified; and posterior distributions for these probabilities that allow assessment of second-level uncertainties in classification.
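As a minimal univariate illustration of such a sampler, the sketch below runs a Gibbs sampler for a two-component normal mixture with unequal variances, using conjugate priors chosen for convenience (not necessarily those of the paper), and records posterior classification probabilities for each observation. Label switching is ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data from a two-component normal mixture (hypothetical).
n = 300
z_true = rng.random(n) < 0.4
y = np.where(z_true, rng.normal(2.5, 1.2, n), rng.normal(-1.0, 0.7, n))

# Conjugate priors (assumed): mu_k ~ N(0, 10^2), sigma_k^2 ~ InvGamma(2, 2), w ~ Beta(1, 1).
m0, v0 = 0.0, 100.0
a0, b0 = 2.0, 2.0

mu, sig2, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), 0.5
n_iter, burn = 3000, 1000
prob_sum = np.zeros(n)                    # accumulates P(z_i = 1 | parameters) after burn-in

for it in range(n_iter):
    # 1. Allocation probabilities and latent labels given the current parameters.
    dens1 = w * np.exp(-0.5 * (y - mu[1]) ** 2 / sig2[1]) / np.sqrt(sig2[1])
    dens0 = (1 - w) * np.exp(-0.5 * (y - mu[0]) ** 2 / sig2[0]) / np.sqrt(sig2[0])
    p1 = dens1 / (dens0 + dens1)
    z = rng.random(n) < p1

    # 2. Component variances and means given the allocations (conjugate updates).
    for k, idx in enumerate([~z, z]):
        yk = y[idx]
        nk = yk.size
        sig2[k] = 1.0 / rng.gamma(a0 + nk / 2.0, 1.0 / (b0 + 0.5 * np.sum((yk - mu[k]) ** 2)))
        v_n = 1.0 / (1.0 / v0 + nk / sig2[k])
        mu[k] = rng.normal(v_n * (m0 / v0 + yk.sum() / sig2[k]), np.sqrt(v_n))

    # 3. Mixture weight given the allocations.
    w = rng.beta(1.0 + z.sum(), 1.0 + (~z).sum())

    if it >= burn:
        prob_sum += p1

post_prob = prob_sum / (n_iter - burn)    # posterior classification probabilities P(z_i = 1 | data)
print("last draw: mu =", mu.round(2), " sigma^2 =", sig2.round(2), " w =", round(w, 2))
print("classification probabilities for the first 5 observations:", post_prob[:5].round(3))
```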
6.
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper, an alternative method based on substitution sampling is given. An algorithm for random variate generation from SD-distributions is also given.
7.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. In Pardo and Zografos (2000) the problem was considered using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple, because it is then necessary to provide estimators for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and by replacing parameters with their minimum φ-divergence estimators in the derived expressions. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
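For readers who have not met φ-divergence statistics, the Cressie-Read power-divergence family is a convenient special case that includes the classical Pearson chi-squared (λ = 1) and likelihood-ratio (λ → 0) statistics. The sketch below is a plain multinomial goodness-of-fit illustration with made-up counts; it does not involve the misclassification and double-sampling structure, nor the minimum φ-divergence estimators, treated in the paper.

```python
import numpy as np
from scipy.stats import power_divergence

# Hypothetical observed counts and a simple null hypothesis of equal cell probabilities.
observed = np.array([18, 25, 29, 36, 22])
expected = np.full(len(observed), observed.sum() / len(observed))

# Three members of the Cressie-Read power-divergence family (a subfamily of phi-divergences).
for lam, name in [(1.0, "Pearson chi-squared"), (0.0, "likelihood ratio"), (2.0 / 3.0, "Cressie-Read")]:
    stat, pval = power_divergence(observed, expected, lambda_=lam)
    print(f"{name:>20}: statistic = {stat:6.3f}, p-value = {pval:.3f}")
```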
8.
On the Brand Capital of Scientific and Technical Journals
Brand capital is a key factor in the survival and development of a scientific and technical journal, and it embodies the unity of social and economic benefits. The return on the value of brand capital is a slow but fairly stable process. A sample survey and an analysis of variance are used to give quantitative support to these points.
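As a purely generic illustration of the analysis of variance mentioned above (with entirely hypothetical survey scores, not the paper's data), a one-way ANOVA comparing reader ratings across three assumed journal brand tiers could be run as follows.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Hypothetical reader-survey scores for journals grouped into three brand tiers.
tier_high = rng.normal(8.2, 1.0, 40)
tier_mid = rng.normal(7.4, 1.1, 40)
tier_low = rng.normal(6.9, 1.2, 40)

f_stat, p_value = f_oneway(tier_high, tier_mid, tier_low)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```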
9.
Not having a variance estimator is a serious weakness of a sampling design from a practical perspective. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Also, adaptive general inverse sampling with a limitation on the initial sample size achieves a greater variance reduction than without the limitation.
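For background, classical inverse sampling keeps selecting units until a fixed number k from the rare class has been observed, and (k − 1)/(n − 1) is then an unbiased estimator of the class proportion, where n is the total number of draws. The sketch below checks this by simulation and shows how variable the required sample size can be, which is the practical problem the proposed design is meant to address; it is a basic with-replacement illustration, not the general inverse sampling design of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
p_true = 0.04            # true proportion of the rare class (hypothetical)
k = 10                   # stop once k rare units have been observed
reps = 20_000

estimates, sample_sizes = [], []
for _ in range(reps):
    # Total draws needed to observe k rare units (with-replacement / infinite population).
    n = k + rng.negative_binomial(k, p_true)     # numpy returns the number of failures
    estimates.append((k - 1) / (n - 1))          # unbiased estimator of p under inverse sampling
    sample_sizes.append(n)

print(f"true p = {p_true}, mean of estimates = {np.mean(estimates):.4f}")
print(f"mean sample size = {np.mean(sample_sizes):.1f}, "
      f"95th percentile = {np.percentile(sample_sizes, 95):.0f}")
```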
10.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays a role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions are given, on the domain of a prevision, implying the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of “preservation of coherence”. The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent, in de Finetti's sense. Research partially supported by: M.U.R.S.T. 40% “Problemi di inferenza pura”.