131.
In this paper, one-stage multiple comparison procedures with the average for exponential location parameters, based on doubly censored samples under heteroscedasticity, are proposed. The resulting intervals can be used to identify a subset containing all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average, and not-much-different-from-the-average products in agriculture, emerging markets, and the pharmaceutical industry. Critical values are tabulated for practical use, and a simulation study of confidence lengths and coverage probabilities is reported. Finally, an example comparing four drugs in the treatment of leukemia demonstrates the proposed procedures.
132.
The usual approach to diagnosing collinearity proceeds by centering and standardizing the regressors. The sample correlation matrix of the predictors is then the basic tool for describing approximate linear combinations that may distort the conclusions of a standard least-squares analysis. However, as several authors have indicated, centering may fail to detect the sources of ill-conditioning. Despite this earlier claim, the literature does not seem to contain a fully clear explanation of why the traditional strategy for analyzing collinearity can behave so badly. This note studies the issue in some detail. The results derived are motivated by the analysis of a well-known real dataset, and the practical conclusions are illustrated with several examples.
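The failure mode described above can be seen in a toy example (an assumption of this sketch, not the dataset the note analyzes): a regressor that is nearly constant is almost collinear with the intercept, so the raw design matrix is severely ill-conditioned, yet after centering and standardizing the regressors the correlation-based diagnostics see nothing wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# x1 is nearly constant, hence nearly collinear with the intercept column.
x1 = 10.0 + 0.001 * rng.standard_normal(n)
x2 = rng.standard_normal(n)

# Condition number of the raw design matrix, intercept included:
X = np.column_stack([np.ones(n), x1, x2])
kappa_raw = np.linalg.cond(X)

# Condition number after centering and standardizing the regressors
# (the intercept is dropped, as in the correlation-matrix approach):
Z = np.column_stack([(x1 - x1.mean()) / x1.std(),
                     (x2 - x2.mean()) / x2.std()])
kappa_centered = np.linalg.cond(Z)

print(kappa_raw, kappa_centered)  # huge vs. close to 1
```

Centering removes the near-dependence on the intercept, so the centered diagnostics report a well-conditioned problem even though least-squares computations on the raw design are numerically fragile.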
133.
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density, and using a continuous prior restricted to the order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality/inequality of treatment medians are estimated using Bayes factors computed via Savage–Dickey density ratios. The performance of the proposed median-based model is investigated on simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when the data come from nonnormal distributions.
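For the median (the symmetric, τ = 0.5 case), the scale-mixture representation underlying such a Gibbs sampler can be checked directly by simulation. This is a minimal sketch of the identity only, not the article's hierarchical model; the moment check E|X| = b for a Laplace(0, b) variable is a standard fact used here as a sanity test.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
b = 1.0  # Laplace scale parameter

# Scale mixture of normals with exponential mixing (symmetric case):
# if W ~ Exp(1) and X | W ~ N(0, 2 * b**2 * W), then X ~ Laplace(0, b).
w = rng.exponential(1.0, size=n)
x = rng.normal(0.0, np.sqrt(2.0 * b**2 * w))

# For Laplace(0, b), E|X| = b; the empirical mean should sit near 1.0.
print(np.abs(x).mean())
```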
134.
The studied topic is motivated by the problem of interlaboratory comparisons. This paper focuses on confidence interval estimation of the between-group variance in the unbalanced heteroscedastic one-way random effects model. Several interval estimators are proposed and compared by means of a simulation study. The most recommended (safest) is the confidence interval based on Bonferroni's inequality.
135.
In this paper we present a geometric programming approach for determining the inventory policy for multiple items that have a varying order cost, which is a continuous function of the order quantity, and a limit on the total average inventory of all items. Our model generalizes the unrestricted single-item order quantity model of Gupta and Gupta with varying order cost, and assumes the same order cost function. This cost function relates well to real-life situations, since it increases as the order quantity increases; at the same time it is easy to handle when recovering previous work as special cases of our model, since it is easily reducible to a constant. An example is solved to illustrate the method.
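When the order cost reduces to a constant and the inventory constraint is inactive, the model collapses to the classical economic order quantity, the kind of special case mentioned above. A minimal sketch of that baseline (the numbers are illustrative assumptions):

```python
import math

def eoq(demand_rate: float, order_cost: float, holding_cost: float) -> float:
    """Classical EOQ: the order quantity minimizing ordering plus holding
    cost per unit time when the order cost is a constant."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

# Illustrative data: demand 1000 units/year, cost 50 per order, holding
# cost 4 per unit per year.
q = eoq(1000.0, 50.0, 4.0)
print(q)  # sqrt(25000), about 158.1 units per order
```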
136.
Recently, the field of multiple hypothesis testing has expanded greatly, largely because of new methods developed in genomics that allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses testing error measures that allow the Type I error rate to be controlled at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for more detailed posterior analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist false discovery rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.
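The abstract does not specify which frequentist FDR procedure serves as the benchmark; the canonical choice is the Benjamini–Hochberg step-up rule, sketched below as an assumed baseline.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: find the largest i (1-based) with
    p_(i) <= alpha * i / m and reject the i smallest p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest index meeting its threshold
        reject[order[:k + 1]] = True    # step-up: reject everything below it
    return reject

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pv).sum())  # 2 rejections at alpha = 0.05
```

Note the step-up character: a p-value may exceed its own threshold and still be rejected if some larger p-value meets its threshold further down the sorted list.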
137.
Summary. We consider the problem of estimating the proportion of true null hypotheses, π0, in a multiple-hypothesis set-up. The tests are based on observed p-values. We first review published estimators based on the estimator suggested by Schweder and Spjøtvoll. We then derive new estimators based on nonparametric maximum likelihood estimation of the p-value density, restricting attention to decreasing and convex decreasing densities. The estimators of π0 are all derived under the assumption of independent test statistics. Their performance under dependence is investigated in a simulation study. We find that the estimators are relatively robust with respect to the independence assumption and also work well for test statistics with moderate dependence.
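The simplest estimator in the Schweder–Spjøtvoll family thresholds the p-values at some λ: null p-values are Uniform(0, 1), so the fraction above λ, rescaled by 1/(1 − λ), estimates π0. A minimal sketch (λ = 0.5 and the toy mixture are assumptions of the illustration):

```python
import numpy as np

def pi0_estimate(pvals, lam=0.5):
    """Tail-based estimate of the proportion of true nulls: under the null,
    p-values are Uniform(0, 1), so about pi0 * (1 - lam) * m of them should
    exceed lam."""
    p = np.asarray(pvals, dtype=float)
    return np.mean(p > lam) / (1.0 - lam)

# Toy mixture: 80 uniform "null" p-values, 20 near-zero "alternative" ones,
# so the true pi0 is 0.8.
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(size=80), rng.uniform(0.0, 1e-3, size=20)])
print(pi0_estimate(p))
```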
138.
Summary. Top coding of extreme values of variables like income is a common method of statistical disclosure control, but it creates problems for the data analyst. The paper proposes two alternatives to top coding for statistical disclosure control that are based on multiple imputation. We show in simulation studies that the multiple-imputation methods provide better inferences from the publicly released data than top coding, using straightforward multiple-imputation methods of analysis, while maintaining good statistical disclosure control properties. We illustrate the methods on data from the 1995 Chinese household income project.
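The analyst's problem with top coding is easy to exhibit: censoring the tail attenuates any tail-sensitive statistic. A minimal sketch with synthetic lognormal "incomes" (the cutoff at the 97th percentile is an assumption of the illustration, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic heavy-tailed "incomes".
income = rng.lognormal(mean=10.0, sigma=1.0, size=10_000)
cutoff = np.quantile(income, 0.97)  # top-code the top 3%

top_coded = np.minimum(income, cutoff)

# Top coding biases the mean (and any tail-sensitive statistic) downward,
# which is the analyst's problem a multiple-imputation release would address.
print(income.mean(), top_coded.mean())
```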
139.
This paper considers the problem of identifying which treatments are strictly worse than the best treatment or treatments in a one-way layout, which has many important applications in screening trials for new product development. A procedure is proposed that selects a subset of the treatments containing only treatments known to be strictly worse than the best treatment or treatments. In addition, simultaneous confidence intervals are obtained that provide upper bounds on how inferior the treatments are compared with these best treatments. In this way, the new procedure shares the characteristics of both subset selection procedures and multiple comparison procedures. Tables of critical points are provided for implementing the new procedure, and some examples of its use are given.