Full text (subscription): 563   Free: 8
By subject: Management 61; Ethnology 3; Demography 8; Collected works 15; Theory and methodology 8; General 61; Sociology 42; Statistics 373
By year: 2024: 1; 2023: 2; 2022: 3; 2021: 6; 2020: 6; 2019: 15; 2018: 22; 2017: 34; 2016: 12; 2015: 14; 2014: 24; 2013: 167; 2012: 31; 2011: 16; 2010: 14; 2009: 33; 2008: 21; 2007: 24; 2006: 31; 2005: 12; 2004: 14; 2003: 11; 2002: 13; 2001: 4; 2000: 6; 1999: 2; 1998: 6; 1997: 2; 1994: 1; 1993: 1; 1992: 4; 1991: 2; 1990: 2; 1988: 3; 1987: 2; 1985: 2; 1984: 1; 1983: 2; 1982: 2; 1981: 1; 1979: 2
571 results found (search time: 218 ms)
431.
As the Gibbs sampler has become one of the standard tools of statistical computing, the practice of burn-in is almost the default option. Because it takes a certain number of iterations for the initial distribution to reach stationarity, supporters of burn-in throw away an initial segment of the samples and argue that doing so ensures unbiasedness. Running-time analysis studies the question of how many samples to throw away; essentially, it equates the number of iterations needed to reach stationarity with the number of initial samples to be discarded. However, many practitioners have found that burn-in wastes potentially useful samples and that the practice is inefficient, and thus unnecessary. For the example considered here, a single chain without burn-in is superior in both efficiency and accuracy to multiple chains with burn-in. We show that the Gibbs sampler uses odds to generate samples. Because the correct odds are used from the onset of the iterative process, the observations generated by the Gibbs sampler are distributed according to the target distribution from the start; throwing away these valid samples is therefore wasteful. When the chain of distributions and the trajectory (sample path) of the chain are judged on their separate merits, the disagreement can be settled. We advocate choosing the initial state carefully, but running the chain without burn-in, to quicken the formation of the stationary distribution.
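As a concrete illustration (not the authors' own example), the Python sketch below runs a single Gibbs chain for a bivariate normal target, starts from a carefully chosen initial state, and retains every draw, i.e., no burn-in. All names and settings are illustrative.

import numpy as np

# Minimal sketch: Gibbs sampler for a bivariate normal target with
# correlation rho, run as a single chain with no burn-in.
rng = np.random.default_rng(0)
rho, n_iter = 0.8, 10_000
cond_sd = np.sqrt(1.0 - rho**2)          # sd of x|y and of y|x

x, y = 0.0, 0.0                          # initial state chosen near the mode
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    x = rng.normal(rho * y, cond_sd)     # draw x | y
    y = rng.normal(rho * x, cond_sd)     # draw y | x
    samples[t] = (x, y)

# All draws are retained; the empirical correlation should be close to rho.
print(samples.shape, np.corrcoef(samples.T)[0, 1])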
432.
In this article, a design-oriented two-stage multiple three-decision procedure is proposed for classifying a set of normal populations with respect to a control under heteroscedasticity. Statistical tables of percentage points and of the power-related design constants needed to implement the new two-stage procedure are given. For situations in which the second-stage sample is not available, a one-stage data analysis procedure is proposed. Classifying a treatment as better than the control when it is actually worse (and vice versa) is known as a type III error. Both the two-stage and one-stage procedures control the type III error rate at a specified level. The relationship between the two-stage and one-stage procedures is discussed. Finally, the application of the proposed procedures is illustrated with an example.
433.
Missing data are commonly encountered in self-reported measurements and questionnaires. It is crucial to treat missing values with an appropriate method to avoid bias and loss of power. Various types of imputation methods exist, but it is not always clear which method is preferred for imputing data with non-normal variables. In this paper, we compare four imputation methods: mean imputation, quantile imputation, multiple imputation, and quantile regression multiple imputation (QRMI), using both simulated and real data investigating factors affecting self-efficacy in breast cancer survivors. The results show an advantage of multiple imputation, especially QRMI, when the data are not normal.
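A minimal sketch of the contrast between single mean imputation and multiple imputation, using scikit-learn on hypothetical data with a skewed (non-normal) variable; QRMI itself is not available in standard libraries and is not shown here.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

# Hypothetical data: one normal column, one skewed column with ~20% missing.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(size=200), rng.lognormal(size=200)])
X[rng.random(200) < 0.2, 1] = np.nan

# Single mean imputation: every missing value gets the same column mean.
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# Multiple imputation: m completed datasets drawn from a predictive model;
# analyses are run on each and the results pooled (e.g., via Rubin's rules).
m = 5
completed = [
    IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
    for i in range(m)
]
pooled_mean = np.mean([d[:, 1].mean() for d in completed])
print(X_mean[:, 1].mean(), pooled_mean)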
434.
A set of tables is presented that enables one to design multiple (group sequential) sampling plans when various entry parameters are given. A table yielding item-by-item sequential sampling plans indexed by (AQL, AOQL) is also presented.
435.
The Benjamini–Hochberg procedure is widely used in multiple comparisons. Previous power results for this procedure have been based on simulations. This article produces theoretical expressions for expected power. To derive them, we make assumptions about the number of hypotheses being tested, which null hypotheses are true, which are false, and the distributions of the test statistics under each null and alternative. We use these assumptions to derive bounds for the multidimensional rejection regions. With these bounds and a permanent-based representation of the joint density function of the largest p-values, we use the law of total probability to derive the distribution of the total number of rejections. We also derive the joint distribution of the total number of rejections and the number of rejections made when the null hypothesis is true. We give an analytic expression for the expected power of a false discovery rate procedure under the assumption that the hypotheses are independent.
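For reference, the standard Benjamini–Hochberg step-up rule studied in the article can be sketched as follows; the article's analytic power expressions are not reproduced here, and the toy p-values are made up.

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejections at FDR level q (BH step-up rule)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m     # i*q/m for the sorted p-values
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()           # largest i with p_(i) <= i*q/m
        reject[order[: k + 1]] = True
    return reject

# Toy example: 8 true nulls (uniform p-values) and 2 false nulls (small p-values).
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(size=8), [1e-4, 3e-3]])
print(benjamini_hochberg(pvals, q=0.05))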
436.
In an earlier article (Bai, Z. D., Rao, C. R., Wu, Y. H., Zen, M. M., Zhao, L. C. (1999). The simultaneous estimation of the number of signals and frequencies of multiple sinusoids when some observations are missing: I. Asymptotics. Proc. Natl. Acad. Sci. 96: 11,106–11,110), the problem of simultaneously estimating the number of signals and the frequencies of multiple sinusoids was considered in the case where some observations are missing. The number of signals was estimated with an information-theoretic criterion and the frequencies with eigenvariation linear prediction. Asymptotic properties of the procedure were investigated, but no Monte Carlo simulation was performed. In this article, a slightly different but scale-invariant criterion for detection is proposed, while the estimation of frequencies remains the same. Asymptotic properties of the new procedure are provided, and Monte Carlo simulations for both procedures are carried out. Furthermore, a comparison on real signals is also given.
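A generic Python sketch of frequency estimation from data with missing observations, using a Lomb–Scargle periodogram with simple peak picking rather than the information-theoretic detection criterion and eigenvariation linear prediction studied in the article; the signal and all settings are made up for illustration.

import numpy as np
from scipy.signal import lombscargle, find_peaks

# Two sinusoids at angular frequencies 0.9 and 1.7, observed on an integer
# grid with roughly 30% of the observations missing at random.
rng = np.random.default_rng(5)
t_full = np.arange(500, dtype=float)
keep = rng.random(t_full.size) < 0.7
t = t_full[keep]
y = (np.cos(0.9 * t) + 0.5 * np.cos(1.7 * t)
     + rng.normal(scale=0.3, size=t.size))

freqs = np.linspace(0.05, np.pi, 2000)        # angular frequencies to scan
power = lombscargle(t, y - y.mean(), freqs)   # periodogram on irregular times
peaks, props = find_peaks(power, height=0)
top_two = peaks[np.argsort(props["peak_heights"])[-2:]]
print(np.sort(freqs[top_two]))                # expected to be near 0.9 and 1.7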
437.
The area under the Receiver Operating Characteristic (ROC) curve (AUC) and related summary indices are widely used for assessment of accuracy of an individual and comparison of performances of several diagnostic systems in many areas including studies of human perception, decision making, and the regulatory approval process for new diagnostic technologies. Many investigators have suggested implementing the bootstrap approach to estimate variability of AUC-based indices. Corresponding bootstrap quantities are typically estimated by sampling a bootstrap distribution. Such a process, frequently termed Monte Carlo bootstrap, is often computationally burdensome and imposes an additional sampling error on the resulting estimates. In this article, we demonstrate that the exact or ideal (sampling error free) bootstrap variances of the nonparametric estimator of AUC can be computed directly, i.e., avoiding resampling of the original data, and we develop easy-to-use formulas to compute them. We derive the formulas for the variances of the AUC corresponding to a single given or random reader, and to the average over several given or randomly selected readers. The derived formulas provide an algorithm for computing the ideal bootstrap variances exactly and hence improve many bootstrap methods proposed earlier for analyzing AUCs by eliminating the sampling error and sometimes burdensome computations associated with a Monte Carlo (MC) approximation. In addition, the availability of closed-form solutions provides the potential for an analytical assessment of the properties of bootstrap variance estimators. Applications of the proposed method are shown on two experimentally ascertained datasets that illustrate settings commonly encountered in diagnostic imaging. In the context of the two examples we also demonstrate the magnitude of the effect of the sampling error of the MC estimators on the resulting inferences.
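For context, the nonparametric (Mann–Whitney) AUC estimator and the Monte Carlo bootstrap variance that the article's closed-form "ideal" bootstrap variance replaces can be sketched as follows; the ratings and sample sizes are synthetic and purely illustrative.

import numpy as np

def auc_mw(x_neg, x_pos):
    """Nonparametric (Mann–Whitney) AUC: P(X_pos > X_neg) + 0.5 * P(tie)."""
    diff = x_pos[:, None] - x_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(3)
x_neg = rng.normal(0.0, 1.0, size=60)     # ratings of actually negative cases
x_pos = rng.normal(1.0, 1.0, size=40)     # ratings of actually positive cases

# Monte Carlo bootstrap variance (what the exact/ideal bootstrap variance
# avoids computing by resampling): resample cases within each class.
B = 2000
boot = np.empty(B)
for b in range(B):
    boot[b] = auc_mw(rng.choice(x_neg, x_neg.size, replace=True),
                     rng.choice(x_pos, x_pos.size, replace=True))
print(auc_mw(x_neg, x_pos), boot.var(ddof=1))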
438.
Asymptotic expansions of the joint distributions of functions of sample means and central moments up to an arbitrary order in multiple populations are given by Edgeworth expansions. The asymptotic distributions of the parameter estimators in moment structures under null/fixed alternative hypotheses and the chi-square statistics based on asymptotically distribution-free theory under fixed alternatives are given as applications of the above results. Asymptotic expansions of the null distributions of the chi-square statistics are also derived. For parameter estimators with the chi-square statistic, the linearized estimators are dealt with as well as fully iterated estimators.
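For orientation, the classical first-order Edgeworth expansion for a standardized sample mean in a single population (a much simpler special case than the multi-population joint expansions derived here) reads

P\!\left(\frac{\sqrt{n}\,(\bar{X}_n-\mu)}{\sigma}\le x\right)
  = \Phi(x)-\varphi(x)\,\frac{\kappa_3}{6\sqrt{n}}\,(x^{2}-1)+O(n^{-1}),

where \Phi and \varphi are the standard normal distribution and density functions and \kappa_3 = E[(X-\mu)^3]/\sigma^3 is the standardized third cumulant (skewness); higher-order terms involve higher cumulants and Hermite polynomials.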
439.
The sample coordination problem involves maximizing or minimizing the overlap of sampling units in different or repeated surveys. Several optimal techniques using transportation theory, controlled rounding, and controlled selection have been suggested in the literature to solve the sample coordination problem. In this article, using multiple objective programming, we propose a method for sample coordination that facilitates variance estimation with the Horvitz–Thompson estimator. The proposed procedure can be applied to any two sample surveys having identical universe and stratification. Some examples are discussed to demonstrate the utility of the proposed procedure.
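As a reminder of the estimator the coordination method is designed to support (not of the multiple objective programming algorithm itself), the Horvitz–Thompson estimator of a population total and its usual variance estimator are

\hat{Y}_{HT}=\sum_{i\in s}\frac{y_i}{\pi_i},\qquad
\widehat{\operatorname{Var}}(\hat{Y}_{HT})
  =\sum_{i\in s}\sum_{j\in s}\frac{\pi_{ij}-\pi_i\pi_j}{\pi_{ij}}
   \,\frac{y_i}{\pi_i}\,\frac{y_j}{\pi_j},

where \pi_i and \pi_{ij} are the first- and second-order inclusion probabilities (with \pi_{ii}=\pi_i); the variance estimator requires strictly positive \pi_{ij}, which is why a coordination scheme that preserves a measurable design facilitates variance estimation.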
440.
Robust estimation methods can effectively eliminate the influence of gross errors on parameter estimation. However, the extent of gross errors eliminated (EGEE) by a robust estimation method has far-reaching implications. This article presents a new approach to determining the EGEE of a robust estimation method. Taking multiple linear regressions (2–5) as examples, simulation experiments were conducted to compare the EGEE of 14 frequently used robust estimation methods. The article confirms several additional efficient robust estimation methods for dealing with multiple linear regression, as well as the minimum number of observations needed to completely eliminate gross errors in certain ranges.
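A small illustration of the general idea (ordinary least squares versus a Huber M-estimator on regression data containing a handful of gross errors), using scikit-learn; this is not necessarily one of the 14 methods compared in the article, and all data and settings are made up.

import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

# Multiple linear regression with 3 predictors and gross errors in 5% of responses.
rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.5])
y = X @ beta + rng.normal(scale=0.5, size=n)
y[:10] += 25.0                            # inject gross errors

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)        # M-estimation with Huber loss
print("OLS coefficients:  ", np.round(ols.coef_, 2))
print("Huber coefficients:", np.round(huber.coef_, 2))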