91.
In robust parameter design, variance effects and mean effects in a factorial experiment are modelled simultaneously. If variance effects are present in a model, correlations are induced among the naive estimators of the mean effects. A simple normal quantile plot of the mean effects may be misleading because the mean effects are no longer iid under the null hypothesis that they are zero. Adjusted quantiles are computed for the case when one variance effect is significant, and examples of 8-run and 16-run fractional factorial designs are examined in detail. We find that the usual normal quantiles are similar to adjusted quantiles for all but the largest and smallest ordered effects, for which they are conservative. Graphically, the qualitative difference between the two sets of quantiles is negligible (even in the presence of large variance effects) and we conclude that normal probability plots are robust in the presence of variance effects.
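As a minimal sketch of the plot being discussed (not the paper's adjusted quantiles), the usual normal probability plot pairs the sorted effect estimates with standard normal quantiles at the plotting positions (i - 0.5)/n; under the null of no effects the points fall near a straight line. The simulated effects here are hypothetical.

```python
import random
import statistics

random.seed(1)

n_effects = 15  # e.g. a 16-run fractional factorial has 15 effect estimates
effects = sorted(random.gauss(0.0, 1.0) for _ in range(n_effects))

# Standard normal quantiles at plotting positions (i - 0.5)/n; these are
# the x-coordinates of the usual (unadjusted) normal probability plot.
nd = statistics.NormalDist()
quantiles = [nd.inv_cdf((i - 0.5) / n_effects) for i in range(1, n_effects + 1)]

pairs = list(zip(quantiles, effects))  # plot these to inspect linearity
```

The paper's point is that when a variance effect is active, these x-coordinates should in principle be replaced by adjusted quantiles, though the difference is graphically negligible.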
92.
Andrada E. Ivanescu, Communications in Statistics - Simulation and Computation, 2013, 42(9): 2656-2669
Abstract: We present methods for modeling and estimation of a concurrent functional regression when the predictors and responses are two-dimensional functional datasets. The implementations use spline basis functions, and model fitting is based on smoothing penalties and mixed-model estimation. The proposed methods are implemented in available statistical software, allow the construction of confidence intervals for the bivariate model parameters, and can be applied to completely or sparsely sampled responses. The methods are tested in simulations and show favorable results in practice. Their usefulness is illustrated in an application to environmental data.
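A stripped-down sketch of the concurrent model (without the paper's spline basis and smoothing penalty): y_i(t) = beta(t) x_i(t) + noise, with beta(t) estimated pointwise by least squares across the replicate curves. All data here are simulated for illustration.

```python
import random

random.seed(2)

n_curves, n_points = 50, 20
t_grid = [j / (n_points - 1) for j in range(n_points)]
beta_true = [1.0 + t for t in t_grid]  # assumed smooth coefficient function

# Replicate predictor and response curves observed on a common grid
x = [[random.gauss(0, 1) for _ in range(n_points)] for _ in range(n_curves)]
y = [[beta_true[j] * x[i][j] + random.gauss(0, 0.1) for j in range(n_points)]
     for i in range(n_curves)]

# Pointwise OLS at each grid point: beta_hat(t_j) = sum_i x_ij y_ij / sum_i x_ij^2
beta_hat = [
    sum(x[i][j] * y[i][j] for i in range(n_curves))
    / sum(x[i][j] ** 2 for i in range(n_curves))
    for j in range(n_points)
]
```

The spline-plus-penalty machinery in the paper replaces this raw pointwise estimate with a smooth version and supplies confidence bands.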
93.
Abstract: Imputation methods for missing data on a time-dependent variable within time-dependent Cox models are investigated in a simulation study. Quality of life (QoL) assessments were removed from the complete simulated datasets, in which QoL and delayed chemotherapy are each positively related to disease-free survival (DFS), by missing at random (MAR) and missing not at random (MNAR) mechanisms. Standard imputation methods were applied before analysis. Method performance was influenced by the missing data mechanism, with one exception for simple imputation. The greatest bias occurred under MNAR and large effect sizes. It is therefore important to investigate the missing data mechanism carefully.
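A hypothetical toy illustration of why the mechanism matters (not the paper's Cox-model simulation): a QoL-like score is deleted either completely at random or preferentially at low values, and simple mean imputation then recovers the true mean only in the first case.

```python
import random
import statistics

random.seed(3)

n = 3000
qol = [random.gauss(0.0, 1.0) for _ in range(n)]  # true mean is 0

# MCAR: every value has the same 40% chance of being missing
mcar = [q for q in qol if random.random() < 0.6]

# MNAR: low scores are much more likely to be missing
mnar = [q for q in qol if random.random() < (0.9 if q > 0 else 0.3)]

mcar_mean = statistics.fmean(mcar)  # imputing this mean is roughly unbiased
mnar_mean = statistics.fmean(mnar)  # imputing this mean overstates QoL
```

Under MNAR the observed-data mean is pulled upward, so any completed dataset built from it inherits that bias, mirroring the abstract's finding that the greatest bias occurs under MNAR.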
94.
A common statistical problem encountered in biomedical research is to test the hypothesis that the parameters of k binomial populations are all equal. An exact test of significance of this hypothesis is possible in principle, the appropriate null distribution being a normalized product of k binomial coefficients. However, the problem of computing the tail area of this distribution can be formidable since it requires the enumeration of all sets of k binomial coefficients whose product is less than a given constant. Existing algorithms, all of which rely on explicit enumeration to generate feasible binomial coefficients
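For small problems the test admits a direct brute-force sketch: conditional on the total number of successes s, the null probability of an outcome (x_1, ..., x_k) is proportional to the product of binomial coefficients C(n_i, x_i), and the p-value sums the probabilities of all outcomes no more probable than the observed one. The function name and the example counts below are hypothetical.

```python
import itertools
import math

def exact_test(n, x):
    """Exact conditional test that k binomial proportions are equal.

    n: list of group sizes, x: list of observed success counts.
    """
    s, N = sum(x), sum(n)
    denom = math.comb(N, s)  # normalizing constant of the null distribution
    obs = math.prod(math.comb(ni, xi) for ni, xi in zip(n, x))
    p_value = 0.0
    # Enumerate every feasible outcome with the same total number of successes
    for y in itertools.product(*(range(ni + 1) for ni in n)):
        if sum(y) != s:
            continue
        w = math.prod(math.comb(ni, yi) for ni, yi in zip(n, y))
        if w <= obs:  # "as or more extreme" = no more probable than observed
            p_value += w / denom
    return p_value

p = exact_test([5, 5, 5], [4, 1, 1])
```

This enumeration is exactly what becomes formidable as k and the n_i grow, which is the computational problem the abstract is addressing.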
95.
Effects of censoring on the robustness of exponential-based confidence intervals for median lifetime
John D. Emerson, Communications in Statistics - Simulation and Computation, 2013, 42(6): 617-627
Statistical procedures for constructing confidence intervals for median lifetime often rest on a distributional assumption for failure times. This paper explores the interplay between censoring levels and robustness for two construction procedures based on exponential lifetimes, subject to general right-censoring. Data are simulated from nearby Weibull distributions. As expected, the simulations indicate that when the exponential assumption is not satisfied, observed coverage by the confidence intervals may differ substantially from the specified coverage level. The marked improvement in the robustness properties of the intervals as the level of censoring increases suggests questions for future research.
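One common exponential-based construction (a sketch; the paper's two procedures may differ in detail): with right-censored data the MLE of the rate is (number of events) / (total time at risk), the median is ln(2)/rate, and an approximate 95% interval comes from a normal approximation on the log-rate scale. The data below are simulated.

```python
import math
import random
import statistics

random.seed(5)

rate = 0.5  # true exponential rate; true median = ln(2) / 0.5
times = [random.expovariate(rate) for _ in range(200)]
cens = [random.expovariate(0.25) for _ in range(200)]  # censoring times

obs = [min(t, c) for t, c in zip(times, cens)]
event = [t <= c for t, c in zip(times, cens)]

d = sum(event)      # number of observed failures
total = sum(obs)    # total time at risk
rate_hat = d / total

# 95% CI for the rate via normal approximation on the log scale
z = statistics.NormalDist().inv_cdf(0.975)
lo_rate = rate_hat * math.exp(-z / math.sqrt(d))
hi_rate = rate_hat * math.exp(z / math.sqrt(d))

# Invert to an interval for the median lifetime
median_hat = math.log(2) / rate_hat
ci = (math.log(2) / hi_rate, math.log(2) / lo_rate)
```

Replacing `random.expovariate` with draws from a nearby Weibull distribution is exactly the robustness experiment the abstract describes: the interval's coverage then depends on how badly the exponential assumption fails and on the censoring level.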
96.
An algorithm is presented for computing the finite population parameters and the approximate probability values associated with a recently developed class of statistical inference techniques termed multi-response randomized block permutation (MRBP) procedures.
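A loose, univariate analogue of the idea (not the MRBP algorithm itself): the test statistic measures within-treatment homogeneity, and the reference distribution permutes treatment labels independently within each block. All quantities below are hypothetical.

```python
import itertools
import random

random.seed(6)

g, b = 3, 4  # treatments x blocks
# data[block][treatment]; treatment 2 is shifted upward
data = [[random.gauss(2.0 if t == 2 else 0.0, 1.0) for t in range(g)]
        for _ in range(b)]

def delta(d):
    # Sum of within-treatment pairwise distances; small = homogeneous groups
    tot = 0.0
    for t in range(g):
        vals = [d[blk][t] for blk in range(b)]
        tot += sum(abs(u - v) for u, v in itertools.combinations(vals, 2))
    return tot

obs = delta(data)
count, n_perm = 0, 2000
for _ in range(n_perm):
    shuffled = []
    for row in data:
        r = row[:]
        random.shuffle(r)  # permute treatment labels within the block
        shuffled.append(r)
    if delta(shuffled) <= obs:
        count += 1
p_value = count / n_perm
```

The paper's contribution concerns computing such permutation probabilities (and moments of the permutation distribution) efficiently rather than by raw resampling as above.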
97.
An imputation procedure is a procedure by which each missing value in a data set is replaced (imputed) by an observed value using a predetermined resampling procedure. The distribution of a statistic computed from a data set consisting of observed and imputed values, called a completed data set, is affected by the imputation procedure used. In a Monte Carlo experiment, three imputation procedures are compared with respect to the empirical behavior of the goodness-of-fit chi-square statistic computed from a completed data set. The results show that each imputation procedure affects the distribution of the goodness-of-fit chi-square statistic in a different manner. However, when the empirical behavior of the statistic is compared to its appropriate asymptotic distribution, there are no substantial differences between these imputation procedures.
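A sketch of the experiment's ingredients (not its full design, and the two procedures here are illustrative choices, not necessarily the paper's three): categorical data with values missing completely at random are completed either by modal imputation or by a random hot-deck draw, and the chi-square goodness-of-fit statistic is computed from each completed data set.

```python
import random

random.seed(7)

k, n = 4, 400
truth = [random.randrange(k) for _ in range(n)]          # uniform over k cells
observed = [v for v in truth if random.random() < 0.8]   # ~20% MCAR missing
n_missing = n - len(observed)

counts = [observed.count(c) for c in range(k)]
mode = counts.index(max(counts))

def chisq(cell_counts):
    # Goodness-of-fit statistic against the uniform null
    total = sum(cell_counts)
    exp = total / len(cell_counts)
    return sum((o - exp) ** 2 / exp for o in cell_counts)

# (a) modal imputation piles every missing value into one cell
modal = counts[:]
modal[mode] += n_missing
# (b) hot-deck: each missing value is a random draw from the observed data
hotdeck = counts[:]
for _ in range(n_missing):
    hotdeck[random.choice(observed)] += 1

stat_modal, stat_hotdeck = chisq(modal), chisq(hotdeck)
```

Modal imputation visibly inflates the statistic by concentrating imputed values in one cell, which is the kind of procedure-dependent distortion the Monte Carlo experiment quantifies.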
98.
The performance of the usual Shewhart control charts for monitoring process means and variation can be greatly affected by nonnormal data or subgroups that are correlated. Define the αk-risk for a Shewhart chart to be the probability that at least one “out-of-control” subgroup occurs in k subgroups when the control limits are calculated from the k subgroups. Simulation results show that the αk-risks can be quite large even for a process with normally distributed, independent subgroups. When the data are nonnormal, it is shown that the αk-risk increases dramatically. A method is also developed for simulating an “in-control” process with correlated subgroups from an autoregressive model. Simulations with this model indicate marked changes in the αk-risks for the Shewhart charts utilizing this type of correlated process data. Therefore, in practice a process should be investigated thoroughly regarding whether or not it is generating normal, independent data before out-of-control points on the control charts are interpreted to be due to some real assignable cause.
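A Monte Carlo sketch of the αk-risk in the benign case (normal, independent subgroups; constants simplified, see the comment): limits for the X-bar chart are estimated from the same k subgroups being judged, and we count how often at least one subgroup mean falls outside them.

```python
import math
import random
import statistics

random.seed(8)

k, m = 25, 5                 # k subgroups of size m, all in control
A3_like = 3 / math.sqrt(m)   # simplified 3-sigma factor; the standard A3
                             # constant also includes a c4 bias correction

def one_chart():
    groups = [[random.gauss(0, 1) for _ in range(m)] for _ in range(k)]
    means = [statistics.fmean(g) for g in groups]
    sbar = statistics.fmean(statistics.stdev(g) for g in groups)
    center = statistics.fmean(means)
    half_width = A3_like * sbar
    # True iff at least one subgroup mean plots outside its own limits
    return any(abs(x - center) > half_width for x in means)

n_rep = 500
alpha_k = sum(one_chart() for _ in range(n_rep)) / n_rep
```

Even here alpha_k is noticeably larger than the per-point false-alarm rate, because k chances are taken at once; the abstract's point is that nonnormality and within-subgroup correlation inflate it much further.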
99.
This paper is concerned with the estimation of a general class of nonlinear panel data models in which the conditional distribution of the dependent variable and the distribution of the heterogeneity factors are arbitrary. In general, exact analytical results for this problem do not exist. Here, Laplace and small-sigma approximations for the marginal likelihood are presented. The computation of the MLE from both approximations is straightforward. It is shown that the accuracy of the Laplace approximation depends on both the sample size and the variance of the individual effects, whereas the accuracy of the small-sigma approximation is O(1) with respect to the sample size. The results are applied to count, duration, and probit panel data models. The accuracy of the approximations is evaluated through a Monte Carlo simulation experiment. The approximations are also applied in an analysis of youth unemployment in Australia.
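The Laplace idea can be sketched on one panel unit of a Poisson model with a normal random intercept b: the marginal likelihood is the integral of the Poisson likelihood times the normal density, approximated by the integrand's value at its mode times a Gaussian curvature correction. The counts and parameter values below are hypothetical.

```python
import math

y = [2, 3, 1, 4]      # hypothetical counts for one individual
mu, s = 1.0, 0.7      # fixed effect and random-intercept SD (assumed known)

def log_integrand(b):
    # log [ prod_t Pois(y_t | exp(mu + b)) * N(b; 0, s^2) ]
    lam = math.exp(mu + b)
    ll = sum(yt * (mu + b) - lam - math.lgamma(yt + 1) for yt in y)
    return ll - 0.5 * (b / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

# Newton's method for the mode of the (concave) log-integrand
b = 0.0
for _ in range(50):
    lam = math.exp(mu + b)
    grad = sum(y) - len(y) * lam - b / s ** 2
    hess = -len(y) * lam - 1 / s ** 2
    b -= grad / hess

# Laplace approximation: exp(h(b_hat)) * sqrt(2*pi / -h''(b_hat))
laplace = math.exp(log_integrand(b)) * math.sqrt(2 * math.pi / -hess)

# Reference value by brute-force trapezoidal integration over b
width, n = 8.0, 4000
grid = [-width / 2 + width * i / n for i in range(n + 1)]
vals = [math.exp(log_integrand(g)) for g in grid]
numeric = (width / n) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

The paper's accuracy result is visible in this setup: with more observations per individual the log-integrand becomes more sharply peaked and the Laplace value tracks the numeric integral more closely.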
100.
In this article the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order autoregressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial. A design with equally spaced time points is the second-best choice.
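A simplified sketch of the comparison (iid errors and a straight-line model only; the paper's models add the random intercept and AR(1) correlation, which changes the information matrix): D-optimality maximizes det(X'X), and the D-efficiency of the equally spaced design is measured against the best q-point design found on a grid.

```python
import itertools

def det_info(times):
    # For the straight-line model X = [1, t]:
    # det(X'X) = n * sum(t^2) - (sum t)^2
    n = len(times)
    return n * sum(t * t for t in times) - sum(times) ** 2

q = 4  # number of repeated measures, time scaled to [0, 1]
equal = [i / (q - 1) for i in range(q)]

# Brute-force search over a grid of candidate time points
grid = [i / 10 for i in range(11)]
best = max(itertools.combinations_with_replacement(grid, q), key=det_info)

# D-efficiency = (det ratio)^(1/p) with p = 2 model parameters
eff = (det_info(equal) / det_info(best)) ** (1 / 2)
```

For the straight line the optimum splits the points between the two endpoints, so the equally spaced design is noticeably less efficient here; the paper's finding is that under serial correlation this gap largely closes.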