411.
Although the normality assumption has long been regarded as a mathematical convenience for inferential purposes because of its nice distributional properties, there is growing interest in generalized classes of distributions that span a much broader spectrum of symmetry and peakedness. In this respect, Fleishman's power polynomial method has been gaining popularity in statistical theory and practice because of its flexibility and ease of execution. In this article, we conduct multiple imputation for univariate continuous data under Fleishman polynomials to explore the extent to which this procedure works properly. We also make comparisons with normal imputation models via widely accepted accuracy and precision measures, using simulated data that exhibit different distributional features as characterized by competing specifications of the third and fourth moments. Finally, we discuss generalizations to the multivariate case. Multiple imputation under power polynomials that cover most of the feasible area in the skewness-elongation plane appears to have substantial potential for capturing real missing-data trends.
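As a rough illustration of how such non-normal data can be generated (a sketch only; the target moments and starting point below are illustrative choices, and the coefficients are solved numerically with SciPy rather than taken from the article), Fleishman's transform maps a standard normal Z to Y = a + bZ + cZ² + dZ³ with coefficients chosen to match a target skewness and excess kurtosis:

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coeffs(skew, ekurt):
    """Solve Fleishman's moment equations for (b, c, d); a = -c.

    Y = a + b*Z + c*Z**2 + d*Z**3 with Z ~ N(0, 1) then has mean 0,
    variance 1, skewness `skew`, and excess kurtosis `ekurt`.
    """
    def equations(p):
        b, c, d = p
        return (
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                       # unit variance
            2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,                 # third moment
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - ekurt,   # fourth moment
        )
    b, c, d = fsolve(equations, (1.0, 0.15, 0.05))
    return -c, b, c, d

# Illustrative target: right-skewed, heavy-tailed (skewness 1, excess kurtosis 2).
a, b, c, d = fleishman_coeffs(1.0, 2.0)
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
y = a + b*z + c*z**2 + d*z**3
```

The simulated `y` can then serve as the "complete" data into which missingness is injected before imputation.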
412.
Shakir Hussain, Mohammed A. Mohammed, M. Sayeed Haque, Roger Holder, John Macleod, Richard Hobbs 《Communications in Statistics: Simulation and Computation》2013,42(9):1779-1784
Multiple imputation (MI) is an established approach for handling missing values. We show that MI for continuous data under the multivariate normal assumption is susceptible to generating implausible values. Our proposed remedy is to: (1) transform the observed data into quantiles of the standard normal distribution; (2) obtain a functional relationship between the observed data and its corresponding standard normal quantiles; (3) undertake MI using the quantiles produced in step 1; and finally, (4) use the functional relationship to transform the imputations back into their original domain. In conclusion, our approach safeguards MI from imputing implausible values.
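The four steps can be sketched as follows; the rank-based transform and the monotone interpolation here stand in for the article's unspecified functional relationship, so treat this as an illustrative approximation rather than the authors' implementation:

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_normal_quantiles(x):
    """Step 1: map observed values to standard-normal quantiles via their
    ranks (a Blom-type rank-based inverse normal transform)."""
    ranks = rankdata(x)
    return norm.ppf((ranks - 0.375) / (len(x) + 0.25))

def back_transform(q, x_obs, q_obs):
    """Step 4: map (imputed) quantile-scale values back to the original
    domain by monotone interpolation through (quantile, value) pairs."""
    order = np.argsort(q_obs)
    return np.interp(q, q_obs[order], x_obs[order])

rng = np.random.default_rng(1)
x = rng.lognormal(size=500)          # skewed, strictly positive data
q = to_normal_quantiles(x)           # step 1 (and implicitly step 2)
imputed_q = rng.standard_normal(10)  # stand-in for step 3's MI draws
imputed_x = back_transform(imputed_q, x, q)
```

Because `np.interp` interpolates within the observed range, the back-transformed imputations stay inside the support of the observed data, which is the safeguard against implausible values.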
413.
Fractional regression hot deck imputation (FRHDI) imputes multiple values for each instance of a missing dependent variable. The imputed values are equal to the predicted value plus multiple random residuals. Fractional weights enable variance estimation and preserve correlations. In some circumstances, with some starting weight values, existing procedures for computing FRHDI weights can produce negative values. We discuss procedures for constructing non-negative adjusted fractional weights for FRHDI and study the performance of the algorithm using simulation. The algorithm can be used effectively with FRHDI procedures for handling missing data in the context of a complex sample survey.
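One simple way to restore non-negativity (a minimal stand-in, not the article's actual adjustment algorithm) is to clip negative fractional weights to zero and rescale so each recipient's weights still sum to one:

```python
import numpy as np

def nonneg_fractional_weights(w):
    """Clip negative fractional weights to zero and renormalize so the
    weights attached to one recipient's imputed values sum to 1.
    A crude stand-in for the article's adjustment, not its algorithm."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, None)
    return w / w.sum()

# A recipient whose starting weights came out partly negative:
w = nonneg_fractional_weights([0.5, 0.7, -0.2])
```

A real adjustment would also need to preserve the calibration properties that make fractional weighting attractive, which this naive rescaling does not attempt.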
414.
As the Gibbs sampler has become one of the standard tools in statistical computing, burn-in is almost the default practice. Because it takes a certain number of iterations for the chain to reach stationarity from its initial distribution, supporters of burn-in throw away an initial segment of the samples and argue that the practice ensures unbiasedness. Running-time analysis studies the question of how many samples should be thrown away, essentially equating the number of iterations to stationarity with the number of initial samples to be discarded. However, many practitioners have found that burn-in wastes potentially useful samples and that the practice is inefficient, and thus unnecessary. For the example considered, a single chain without burn-in offers both efficiency and accuracy superior to multiple chains with burn-in. We show that the Gibbs sampler uses odds to generate samples. Because the correct odds are used from the onset of the iterative process, the observations generated by the Gibbs sampler are distributed according to the target distribution from the start; throwing away those valid samples is wasteful. When the chain of distributions and the trajectory (sample path) of the chain are considered on their separate merits, the disagreement can be settled. We advocate carefully choosing the initial state, but without burn-in, to quicken the formation of the stationary distribution.
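A minimal Gibbs sampler illustrating the no-burn-in argument (a bivariate normal toy example of my own, not the example analyzed in the article): with the initial state chosen carefully, here the mode, every draw is kept and the sample moments still match the target:

```python
import numpy as np

def gibbs_bivariate_normal(n_samples, rho=0.8, seed=0):
    """Gibbs sampler for a bivariate normal with unit variances and
    correlation rho. Full conditionals: X | Y=y ~ N(rho*y, 1 - rho**2),
    and symmetrically for Y."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0          # carefully chosen initial state (the mode)
    sd = np.sqrt(1 - rho**2)
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rho * y + sd * rng.standard_normal()
        y = rho * x + sd * rng.standard_normal()
        out[i] = x, y
    return out

draws = gibbs_bivariate_normal(50_000)   # every draw kept: no burn-in
```

With a poor initial state the early draws would be atypical, which is why the article emphasizes choosing the starting point rather than discarding samples.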
415.
In this article, a design-oriented two-stage multiple three-decision procedure is proposed to classify a set of normal populations with respect to a control under heteroscedasticity. Statistical tables of percentage points and the power-related design constants needed to implement the new two-stage procedure are given. For situations in which the second-stage sample is not available, a one-stage data-analysis procedure is proposed. Classifying a treatment as better than the control when it is actually worse (and vice versa) is known as a type III error. Both the two-stage and one-stage procedures control the type III error rate at a specified level. The relationship between the two procedures is discussed. Finally, the application of the proposed procedures is illustrated with an example.
416.
Yunxi Zhang, Ye Lin, George Baum, Karen M. Basen-Engquist, Michael D. Swartz 《Communications in Statistics: Simulation and Computation》2013,42(8):2523-2537
Missing data are commonly encountered in self-reported measurements and questionnaires. It is crucial to treat missing values with an appropriate method to avoid bias and loss of power. Various types of imputation methods exist, but it is not always clear which is preferred for imputing data with non-normal variables. In this paper, we compared four imputation methods: mean imputation, quantile imputation, multiple imputation, and quantile regression multiple imputation (QRMI), using both simulated and real data investigating factors affecting self-efficacy in breast cancer survivors. The results showed an advantage to using multiple imputation, especially QRMI, when data are not normal.
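A toy comparison (illustrative only; the empirical-quantile draw below is a crude stand-in for the paper's quantile-based methods) shows the familiar failure mode of mean imputation on skewed data, variance shrinkage, that the better methods are designed to avoid:

```python
import numpy as np

rng = np.random.default_rng(2)
full = rng.lognormal(size=2000)        # skewed "complete" variable
miss = rng.random(2000) < 0.3          # 30% missing completely at random
obs = full[~miss]

# Mean imputation: every missing value replaced by the observed mean.
mean_imp = full.copy()
mean_imp[miss] = obs.mean()

# Quantile-style imputation: draw imputations from the observed empirical
# distribution (a simple proxy for the paper's quantile-based methods).
quant_imp = full.copy()
quant_imp[miss] = np.quantile(obs, rng.random(miss.sum()))
```

Mean imputation leaves a spike of identical values at the mean and shrinks the variance, while the quantile-based draws roughly preserve the shape of the distribution.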
417.
A set of tables is presented that enables one to design multiple (group sequential) sampling plans when various entry parameters are given. A table yielding item-by-item sequential sampling plans indexed by (AQL, AOQL) is also presented.
418.
The Benjamini–Hochberg procedure is widely used in multiple comparisons. Previous power results for this procedure have been based on simulations. This article derives theoretical expressions for its expected power. To derive them, we make assumptions about the number of hypotheses being tested, which null hypotheses are true, which are false, and the distributions of the test statistics under each null and alternative. We use these assumptions to derive bounds for multidimensional rejection regions. With these bounds and a permanent-based representation of the joint density function of the largest p-values, we use the law of total probability to derive the distribution of the total number of rejections. We also derive the joint distribution of the total number of rejections and the number of rejections when the null hypothesis is true. Finally, we give an analytic expression for the expected power of a false discovery rate procedure that assumes the hypotheses are independent.
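For reference, the Benjamini–Hochberg step-up rule whose power is being analyzed is short to state: reject the k smallest p-values, where k is the largest i with p_(i) ≤ iq/m. A minimal sketch (the example p-values are made up):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure at FDR level q.
    Rejects the k smallest p-values, where k is the largest index i
    (1-based) with p_(i) <= i*q/m. Returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rejects = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.9], q=0.05)
```

Here the thresholds are iq/m = 0.0083, 0.0167, 0.025, ..., so only the two smallest p-values are rejected; the expected number of such rejections is the quantity the article characterizes analytically.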
419.
In an earlier article (Bai et al., 1999), the problem of simultaneously estimating the number of signals and the frequencies of multiple sinusoids was considered for the case in which some observations are missing. The number of signals is estimated with an information-theoretic criterion, and the frequencies are estimated with eigenvariation linear prediction. Asymptotic properties of the procedure were investigated, but no Monte Carlo simulation was performed. In this article, a slightly different but scale-invariant criterion for detection is proposed, while the estimation of frequencies remains the same. Asymptotic properties of the new procedure are provided, Monte Carlo simulations for both procedures are carried out, and a comparison on real signals is also given.
420.
The area under the receiver operating characteristic (ROC) curve (AUC) and related summary indices are widely used for assessing the accuracy of an individual and comparing the performance of several diagnostic systems in many areas, including studies of human perception, decision making, and the regulatory approval process for new diagnostic technologies. Many investigators have suggested the bootstrap approach for estimating the variability of AUC-based indices. The corresponding bootstrap quantities are typically estimated by sampling a bootstrap distribution. Such a process, frequently termed Monte Carlo bootstrap, is often computationally burdensome and imposes additional sampling error on the resulting estimates. In this article, we demonstrate that the exact, or ideal (sampling-error-free), bootstrap variances of the nonparametric estimator of the AUC can be computed directly, i.e., avoiding resampling of the original data, and we develop easy-to-use formulas to compute them. We derive formulas for the variances of the AUC corresponding to a single given or random reader, and to the average over several given or randomly selected readers. The derived formulas provide an algorithm for computing the ideal bootstrap variances exactly and hence improve many bootstrap methods proposed earlier for analyzing AUCs by eliminating the sampling error, and sometimes burdensome computations, associated with a Monte Carlo (MC) approximation. In addition, the availability of closed-form solutions makes possible an analytical assessment of the properties of bootstrap variance estimators. Applications of the proposed method are shown on two experimentally ascertained datasets that illustrate settings commonly encountered in diagnostic imaging. In the context of the two examples, we also demonstrate the magnitude of the effect of the MC estimators' sampling error on the resulting inferences.
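A brief sketch of the baseline being improved upon (the dataset and within-class resampling scheme below are my own assumptions for illustration): the nonparametric Mann–Whitney AUC estimator and a Monte Carlo bootstrap estimate of its variance, which the article's exact formulas replace without resampling:

```python
import numpy as np

def auc_mann_whitney(neg, pos):
    """Nonparametric AUC: the proportion of (negative, positive) score
    pairs that are correctly ordered, counting ties as one half."""
    diff = np.subtract.outer(pos, neg)
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def mc_bootstrap_var(neg, pos, n_boot=2000, seed=0):
    """Monte Carlo bootstrap variance of the AUC, resampling cases within
    each class. This is the resampling-based estimator that carries the
    extra sampling error the exact formulas eliminate."""
    rng = np.random.default_rng(seed)
    stats = [auc_mann_whitney(rng.choice(neg, len(neg)),
                              rng.choice(pos, len(pos)))
             for _ in range(n_boot)]
    return float(np.var(stats, ddof=1))

# Small synthetic rating data (assumed, not from the article's datasets):
neg = np.array([0.1, 0.3, 0.35, 0.5, 0.6])
pos = np.array([0.4, 0.55, 0.7, 0.8, 0.9])
auc = auc_mann_whitney(neg, pos)
```

Two different choices of `n_boot` or `seed` give two different variance estimates here; the point of the exact (ideal) bootstrap is that this Monte Carlo variability disappears.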