901.
《Journal of Statistical Computation and Simulation》2012,82(11):2298-2315
Principal component analysis (PCA) is a widely used statistical technique for determining subscales in questionnaire data. As in any other statistical technique, missing data may complicate both its execution and its interpretation. In this study, six methods for dealing with missing data in the context of PCA are reviewed and compared: listwise deletion (LD), pairwise deletion, the missing data passive approach, regularized PCA, the expectation-maximization algorithm, and multiple imputation. Simulations show that, except for LD, all methods give about equally good results for realistic percentages of missing data. The choice of a procedure can therefore be based on ease of application or simply on the availability of a technique.
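As a minimal illustration of the two simplest strategies above, the sketch below compares listwise deletion with naive mean imputation before PCA on simulated questionnaire-style data. This is an assumed setup for illustration, not the study's simulation design; the stronger methods (regularized PCA, EM, multiple imputation) need specialised tooling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6
# Two underlying "subscales" plus item noise (hypothetical data).
scores = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, p))
X = scores @ loadings + 0.1 * rng.normal(size=(n, p))

# Punch out ~10% of entries completely at random.
mask = rng.random(X.shape) < 0.10
X_miss = np.where(mask, np.nan, X)

def pca_first_component(X):
    """First principal component via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Listwise deletion: keep only fully observed rows.
complete = X_miss[~np.isnan(X_miss).any(axis=1)]
pc_ld = pca_first_component(complete)

# Mean imputation: fill each column with its observed mean.
col_means = np.nanmean(X_miss, axis=0)
X_imp = np.where(np.isnan(X_miss), col_means, X_miss)
pc_mi = pca_first_component(X_imp)

# Both should roughly recover the first PC of the complete data
# (absolute dot products close to 1).
pc_full = pca_first_component(X)
print(abs(pc_ld @ pc_full), abs(pc_mi @ pc_full))
```

With modest missingness both strategies recover a similar leading component; the paper's point is that the more principled methods do at least as well while using all of the data.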
902.
《Journal of Statistical Computation and Simulation》2012,82(11):2123-2139
Convergence problems often arise when complex linear mixed-effects models are fitted. Previous simulation studies (see, e.g. [Buyse M, Molenberghs G, Burzykowski T, Renard D, Geys H. The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics. 2000;1:49–67, Renard D, Geys H, Molenberghs G, Burzykowski T, Buyse M. Validation of surrogate endpoints in multiple randomized clinical trials with discrete outcomes. Biom J. 2002;44:921–935]) have shown that model convergence rates were higher (i) when the number of available clusters in the data increased, and (ii) when the between-cluster variability was large relative to the residual variability. The aim of the present simulation study is to extend these findings by examining an additional factor that is hypothesized to affect model convergence: imbalance in cluster size. The results showed that divergence rates were substantially higher for data sets with unbalanced cluster sizes, in particular when the model at hand had a complex hierarchical structure. Furthermore, using multiple imputation to restore 'balance' in unbalanced data sets reduces model convergence problems.
903.
《Journal of Statistical Computation and Simulation》2012,82(4):103-117
A new method to calculate the multivariate t-distribution is introduced. We provide a series of substitutions which transform the starting q-variate integral into one over the (q-1)-dimensional hypercube. In this situation, standard numerical integration methods can be applied. Three algorithms are discussed in detail. As an application, we derive an expression to calculate the power of multiple contrast tests assuming normally distributed data.
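The dimension-reduction idea can be illustrated in a simpler special case. The sketch below is an assumption-laden stand-in, not the paper's algorithm: it reduces an equicorrelated multivariate *normal* probability, of the kind arising in multiple contrast tests, to a one-dimensional integral over the shared latent factor, which standard quadrature then handles.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def equicorr_normal_cdf(c, q, rho, grid=4001, lim=8.0):
    """P(max_j Z_j <= c) for q equicorrelated N(0,1) variables,
    written as a single integral over the common factor z:
    integral of phi(z) * Phi((c - sqrt(rho) z) / sqrt(1 - rho))**q dz."""
    z = np.linspace(-lim, lim, grid)
    dz = z[1] - z[0]
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    inner = np.array([Phi((c - sqrt(rho) * zi) / sqrt(1.0 - rho)) for zi in z])
    return float(np.sum(phi * inner ** q) * dz)

# Sanity check: with rho = 0 the variables are independent,
# so the probability factorises as Phi(c)**q.
print(abs(equicorr_normal_cdf(2.0, 3, 0.0) - Phi(2.0) ** 3))
```

The paper's contribution is the analogous (but harder) reduction for the multivariate t, where the shared chi-square mixing variable prevents this simple factorisation.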
904.
This paper describes a method for estimating the unknown parameters of an interdependent simultaneous equations model with latent variables. For each latent variable there may be single or multiple indicators. Estimation proceeds in three stages: first, estimates of the latent variables are constructed from the associated manifest indicators; second, treating these estimates as directly observed, fix-point estimates of the structural form parameters are obtained; third, the location parameters are estimated. The method involves only repeated application of ordinary least squares, and no distributional assumptions are needed. The paper concludes with an empirical application of the method.
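A hypothetical toy version of the three stages might look as follows (the model, noise levels, and variable names are invented for illustration; this is not the paper's empirical application, and the fix-point step is trivial here because there is a single equation):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
xi = rng.normal(size=n)                                # latent variable
indicators = xi[:, None] + 0.1 * rng.normal(size=(n, 3))  # 3 manifest indicators
y = 1.5 + 2.0 * xi + 0.5 * rng.normal(size=n)          # structural equation

# Stage 1: construct an estimate of the latent variable from its indicators
# (here simply their mean; the indicators share a unit loading by assumption).
xi_hat = indicators.mean(axis=1)

# Stages 2-3: treating xi_hat as observed, estimate the slope and the
# location parameter (intercept) by ordinary least squares.
X = np.column_stack([np.ones(n), xi_hat])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

print(b0, b1)  # near the true intercept 1.5 and slope 2.0 when indicator noise is small
```

With low measurement error the proxy-based OLS slope is only slightly attenuated, which is why the stage-1 construction matters.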
905.
Bechhofer and Tamhane (1981) proposed a new class of incomplete block designs called BTIB designs for comparing p ≥ 2 test treatments with a control treatment in blocks of equal size k < p + 1. All BTIB designs for given (p,k) can be constructed by forming unions of replications of a set of elementary BTIB designs called generator designs for that (p,k). In general, there are many generator designs for given (p,k) but only a small subset (called the minimal complete set) of these suffices to obtain all admissible BTIB designs (except possibly any equivalent ones). Determination of the minimal complete set of generator designs for given (p,k) was stated as an open problem in Bechhofer and Tamhane (1981). In this paper we solve this problem for k = 3. More specifically, we give the minimal complete sets of generator designs for k = 3, p = 3(1)10; the relevant proofs are given only for the cases p = 3(1)6. Some additional combinatorial results concerning BTIB designs are also given.
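The defining balance conditions of a BTIB design can be checked mechanically. The snippet below is an illustrative checker with a made-up design for p = 3, k = 3 (not taken from the paper): it verifies that every test treatment appears with the control the same number of times (λ0) and that every pair of test treatments appears together the same number of times (λ1).

```python
from itertools import combinations

def is_btib(blocks, p):
    """Check the BTIB balance conditions, with treatment 0 as the control:
    (i) every test treatment 1..p occurs with the control equally often,
    (ii) every pair of test treatments occurs together equally often."""
    lam0 = [sum(1 for b in blocks if 0 in b and t in b) for t in range(1, p + 1)]
    lam1 = [sum(1 for b in blocks if i in b and j in b)
            for i, j in combinations(range(1, p + 1), 2)]
    return len(set(lam0)) == 1 and len(set(lam1)) == 1

# A balanced example (lambda0 = 2, lambda1 = 1) and an unbalanced one.
good = [{0, 1, 2}, {0, 2, 3}, {0, 1, 3}]
bad = [{0, 1, 2}, {0, 1, 3}]
print(is_btib(good, 3), is_btib(bad, 3))
```

Forming unions of replications of generator designs, as the paper describes, preserves these conditions because the λ counts simply add.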
906.
Attention is focussed on a truncated version of the extended two-piece skew-normal distribution, studied by Arnold et al. [On multiple constraint skewed models. Statistics. 2009;3(3):279–293]. When the truncation point is set equal to zero, the resulting model can be viewed as a flexible extension of the half-normal model. Properties of the truncated distribution are investigated and the corresponding likelihood inference is considered. The methodology is applied to a data set involving non-negative observations, and it is verified that the fit of the truncated model compares favourably with that of the half-normal model.
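As a baseline for such comparisons, the half-normal maximum-likelihood fit has a simple closed form, sketched below with simulated data (illustrative only; fitting the truncated extended two-piece skew-normal itself requires numerical optimisation, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated non-negative data: |N(0, sigma^2)| with sigma = 2.
x = np.abs(rng.normal(scale=2.0, size=5000))

# Half-normal MLE: maximising the log-likelihood
#   -n log(sigma) - sum(x_i^2) / (2 sigma^2) + const
# gives the closed form sigma_hat^2 = mean(x_i^2).
sigma_hat = np.sqrt(np.mean(x ** 2))
print(sigma_hat)  # should be near 2.0
```

The paper's point is that the extra shape parameters of the truncated model can improve on this baseline fit when the data are not well described by a half-normal.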
907.
Multiple graphical models represent the dependence relations among the same set of random variables across different classes: nodes represent random variables, edges represent direct links between variables, and the graphical model of each class reflects both the dependence structure specific to that class and the information shared across classes. Using a joint estimation method for multiple graphical models, data from different individuals are grouped into classes by their characteristics; assuming that the dependence structure among the variables within each class follows a common Gaussian graphical model, the group-Lasso and graphical-Lasso methods are applied to jointly estimate the graphical model structure of each class. Numerical simulations verify the effectiveness of the joint estimation method. Applying multiple graphical models and the joint estimation method to 13 macroeconomic indicators for 15 Chinese provinces, the results show that common dependencies exist among the macroeconomic variables of provinces at different levels of economic development, reflecting features of China's current stage of economic development, while the dependence structure of each class reflects features specific to the economic development of that group of provinces.
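A rough illustration of per-class graph estimation is sketched below. This is a crude stand-in using thresholded partial correlations, not the group-Lasso/graphical-Lasso joint estimator of the abstract, and the two simulated "classes" are invented: both share a chain of edges, while one class has an extra class-specific edge.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_class(n, extra_edge):
    """Gaussian sample whose precision matrix is a 4-node chain,
    optionally with one extra class-specific edge between nodes 0 and 3."""
    Omega = np.eye(4)
    for i in range(3):
        Omega[i, i + 1] = Omega[i + 1, i] = 0.4
    if extra_edge:
        Omega[0, 3] = Omega[3, 0] = 0.3
    Sigma = np.linalg.inv(Omega)
    L = np.linalg.cholesky(Sigma)
    return rng.normal(size=(n, 4)) @ L.T

def estimated_edges(X, thresh=0.15):
    """Edges whose estimated partial correlation exceeds the threshold."""
    Omega_hat = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(Omega_hat))
    pcor = -Omega_hat / np.outer(d, d)
    return {(i, j) for i in range(4) for j in range(i + 1, 4)
            if abs(pcor[i, j]) > thresh}

edges_a = estimated_edges(simulate_class(4000, extra_edge=False))
edges_b = estimated_edges(simulate_class(4000, extra_edge=True))
print(sorted(edges_a & edges_b))  # the chain edges common to both classes
```

The joint estimator in the abstract improves on such separate fits by borrowing strength across classes for exactly these shared edges.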
908.
Andrew Atkinson, Michael G. Kenward, Tim Clayton, James R. Carpenter 《Pharmaceutical Statistics》2019,18(6):645-658
The analysis of time-to-event data typically makes the censoring-at-random assumption, i.e., that, conditional on covariates in the model, the distribution of event times is the same whether they are observed or unobserved (i.e., right censored). When patients who remain in follow-up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or "while on treatment strategy," estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or "treatment policy strategy," assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (i.e., reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (i.e., reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting, and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
909.
Multiple comparison methods are widely implemented in statistical packages and heavily used. To obtain the critical value of a multiple comparison method for a given confidence level, a double integral equation must be solved. Current computer implementations evaluate one double integral for each candidate critical value using Gaussian quadrature. Consequently, iterative refinement of the critical value can slow the response time enough to hamper interactive data analysis. However, for balanced designs, to obtain the critical value for multiple comparisons with the best, subset selection, and one-sided multiple comparisons with a control, if one regards the inner integral as a function of the outer integration variable, then this function can be obtained by discrete convolution using the Fast Fourier Transform (FFT). Exploiting the fact that this function need not be re-evaluated during iterative refinement of the critical value, it is shown that the FFT method obtains critical values at least four times as accurate and two to five times as fast as the Gaussian quadrature method.
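The computational core of the speed-up is the convolution identity itself: a discrete convolution can be computed in O(n log n) time via the FFT rather than O(n²) directly. The sketch below demonstrates only that identity on arbitrary tabulated sequences; it is not the paper's full critical-value algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=128)   # e.g. tabulated density values
g = rng.normal(size=128)   # e.g. tabulated inner-integrand values

def fft_convolve(f, g):
    """Linear convolution of two sequences via zero-padded real FFTs."""
    n = len(f) + len(g) - 1
    return np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

direct = np.convolve(f, g)   # O(n^2) reference implementation
fast = fft_convolve(f, g)    # O(n log n) via the FFT

# The two agree to rounding error.
print(np.max(np.abs(direct - fast)))
```

Because the convolution yields the inner-integral function on a whole grid of outer-variable values at once, it need not be recomputed as the critical value is iteratively refined, which is where the reported speed advantage comes from.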
910.
J.F. Lawless 《Communications in Statistics - Theory and Methods》2013,42(2):139-164
For the classical linear regression problem, a number of estimators alternative to least squares have been proposed for situations in which multicollinearity is a problem. Relatively little is known, however, about how these estimators behave in practice. This paper investigates mean square error properties for a number of biased regression estimators and discusses some practical implications of the use of such estimators. One conclusion is that certain types of ridge estimators appear to have good mean square error properties, which may be useful in situations in which mean square error is important.
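A small simulation in the spirit of the paper (the design, sample size, and shrinkage parameter below are illustrative choices, not the paper's) shows a ridge estimator beating least squares in mean square error when two predictors are nearly collinear:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, lam = 50, 200, 1.0
beta = np.array([1.0, 1.0])   # true coefficients

mse_ols = mse_ridge = 0.0
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.normal(size=n)
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)                     # least squares
    b_ridge = np.linalg.solve(XtX + lam * np.eye(2), X.T @ y)  # ridge
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_ridge - beta) ** 2) / reps

print(mse_ols, mse_ridge)  # ridge is far smaller under this collinear design
```

The intuition matches the paper's conclusion: collinearity inflates the variance of least squares along the poorly identified direction, and ridge shrinkage trades a small bias for a large variance reduction there.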