Full-text access type | Articles
Paid full text | 5,234
Free | 127
Free (domestic) | 46
Subject classification | Articles
Management science | 336
Labour science | 2
Ethnology | 21
Demography | 196
Collected works | 208
Theory and methodology | 120
General | 2,023
Sociology | 84
Statistics | 2,417
Publication year | Articles
2024 | 4
2023 | 20
2022 | 33
2021 | 44
2020 | 100
2019 | 110
2018 | 159
2017 | 232
2016 | 130
2015 | 114
2014 | 227
2013 | 1005
2012 | 368
2011 | 250
2010 | 241
2009 | 222
2008 | 258
2007 | 270
2006 | 250
2005 | 243
2004 | 184
2003 | 174
2002 | 147
2001 | 128
2000 | 96
1999 | 70
1998 | 45
1997 | 54
1996 | 44
1995 | 41
1994 | 19
1993 | 24
1992 | 18
1991 | 18
1990 | 12
1989 | 10
1988 | 11
1987 | 3
1986 | 4
1985 | 4
1984 | 4
1983 | 2
1982 | 5
1981 | 1
1980 | 3
1979 | 4
1978 | 2
A total of 5,407 results were returned.
61.
Statistics practitioners often ignore the underlying assumptions when analyzing real data and employ the nonlinear least squares (NLLS) method to estimate the parameters of a nonlinear model. Reliable inferences about the parameters of a model require that the underlying assumptions, especially the assumption that the errors are independent, are satisfied. In practice, however, we may encounter dependent error terms, which tend to produce autocorrelated errors. A two-stage estimator (CTS) has been developed to remedy this problem. Nevertheless, it is now evident that the presence of outliers has an undue effect on least squares estimates. We expect the CTS to be easily affected by outliers as well, since it is based on the least squares estimator, which is not robust. In this article, we propose a Robust Two-Stage (RTS) procedure for estimating the nonlinear regression parameters when autocorrelated errors occur together with outliers. A numerical example and a simulation study show that the RTS is more efficient than the NLLS and CTS methods.
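A rough sketch of the two-stage idea follows. It is an illustration only, not the authors' RTS estimator: the exponential-decay model, the simulated AR(1) data with injected outliers, and the use of scipy's soft_l1 loss as the robust criterion in the second stage are all assumptions made for the example.

```python
# A minimal sketch of a two-stage fit for a nonlinear model with AR(1) errors,
# made resistant to outliers by a robust loss in the second stage. Not the RTS
# estimator of the article; model, data, and soft_l1 loss are assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def model(theta, x):
    # assumed nonlinear mean function: theta0 * exp(-theta1 * x)
    return theta[0] * np.exp(-theta[1] * x)

# simulate AR(1) errors with a few gross outliers
n, rho = 100, 0.6
x = np.linspace(0, 5, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal(scale=0.2)
e[[20, 55, 80]] += 3.0
y = model([5.0, 0.8], x) + e

# Stage 1: ordinary nonlinear least squares (NLLS)
nlls = least_squares(lambda th: y - model(th, x), x0=[1.0, 1.0])

# estimate the AR(1) coefficient from the stage-1 residuals
r = y - model(nlls.x, x)
rho_hat = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)

# Stage 2: Cochrane-Orcutt-type transformation of the residual function,
# refitted with a robust (soft_l1) loss to downweight the outliers
def transformed_resid(th):
    res = y - model(th, x)
    return res[1:] - rho_hat * res[:-1]

rts = least_squares(transformed_resid, x0=nlls.x, loss="soft_l1", f_scale=0.5)
print("NLLS:", nlls.x, " robust two-stage:", rts.x)
```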
62.
Dinghai Xu 《统计学通讯:模拟与计算》2013,42(7):1403-1421
This article investigates an efficient estimation method for a class of switching regressions based on the characteristic function (CF). We show that, with the exponential weighting function, the CF-based estimator can be obtained by minimizing a closed-form distance measure. Because the analytical structure of the asymptotic covariance is available, an iterative estimation procedure is developed that minimizes a precision measure of the asymptotic covariance matrix. The implementation, finite-sample properties, and efficiency of the proposed estimator are examined through a set of Monte Carlo experiments.
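The sketch below illustrates the general CF-matching idea on a two-component Gaussian mixture, i.e. a switching model without regressors. The exponential weight exp(-t^2), the grid approximation of the integral, and the direct numerical minimisation are assumptions for illustration; the article's closed-form distance and iterative efficiency step are not reproduced here.

```python
# Characteristic-function (CF) based estimation of a two-component Gaussian
# mixture by minimising a weighted distance between the empirical CF and the
# model CF. Illustrative stand-in, not the article's closed-form estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = np.where(rng.random(500) < 0.4, rng.normal(-2, 1, 500), rng.normal(2, 0.5, 500))

t = np.linspace(-5, 5, 201)                       # grid for the CF argument
w = np.exp(-t ** 2)                               # exponential weighting function
ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)    # empirical CF

def mixture_cf(par):
    p, m1, s1, m2, s2 = par
    c1 = np.exp(1j * t * m1 - 0.5 * (t * s1) ** 2)
    c2 = np.exp(1j * t * m2 - 0.5 * (t * s2) ** 2)
    return p * c1 + (1 - p) * c2

def cf_distance(par):
    if not (0 < par[0] < 1) or par[2] <= 0 or par[4] <= 0:
        return np.inf
    return np.sum(w * np.abs(ecf - mixture_cf(par)) ** 2)

fit = minimize(cf_distance, x0=[0.5, -1.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
print("CF-based estimates (p, mu1, sigma1, mu2, sigma2):", fit.x)
```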
63.
We consider a number of estimators of regression coefficients, all of generalized ridge, or "shrinkage", type. Results of a simulation study indicate that, with respect to two commonly used mean squared error criteria, two ordinary ridge estimators, one proposed by Hoerl, Kennard and Baldwin and the other introduced here, perform substantially better than both least squares and the other estimators discussed here.
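The Hoerl-Kennard-Baldwin (HKB) choice of the ridge parameter has the well-known closed form k = p * sigma_hat^2 / (b_ols' b_ols). The sketch below shows it on a simulated collinear design; the standardisation step and the simulated data are assumptions, and the second estimator introduced in the article is not reproduced.

```python
# Ordinary ridge regression with the Hoerl-Kennard-Baldwin (HKB) shrinkage
# parameter, compared with OLS on a collinear design (illustration only).
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
z = rng.normal(size=(n, 1))
X = z + 0.1 * rng.normal(size=(n, p))        # highly collinear columns
beta = np.array([1.0, 0.5, -0.5, 1.0])
y = X @ beta + rng.normal(scale=1.0, size=n)

# centre y and standardise X, as is usual before ridge shrinkage
Xs = (X - X.mean(0)) / X.std(0, ddof=1)
yc = y - y.mean()

b_ols = np.linalg.solve(Xs.T @ Xs, Xs.T @ yc)
sigma2 = np.sum((yc - Xs @ b_ols) ** 2) / (n - p - 1)

k_hkb = p * sigma2 / (b_ols @ b_ols)         # HKB shrinkage parameter
b_ridge = np.linalg.solve(Xs.T @ Xs + k_hkb * np.eye(p), Xs.T @ yc)

print("k_HKB:", k_hkb)
print("OLS:  ", b_ols)
print("ridge:", b_ridge)
```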
64.
Sean Collins 《商业与经济统计学杂志》2013,31(3):267-277
This article reviews several techniques useful for forming point and interval predictions in regression models with Box–Cox transformed variables. The techniques reviewed (plug-in, mean squared error analysis, predictive likelihood, and stochastic simulation) take account of nonnormality and parameter uncertainty in varying degrees. A Monte Carlo study examining their small-sample accuracy indicates that uncertainty about the Box–Cox transformation parameter may be relatively unimportant. For certain parameters, deterministic point predictions are biased, and plug-in prediction intervals are also biased. Stochastic simulation, as usually carried out, leads to badly biased predictions. A modification of the usual approach renders stochastic simulation predictions largely unbiased.
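The sketch below contrasts a plug-in back-transformed prediction with a generic simulation-based prediction after a Box–Cox fit. The data-generating model, the grid search for lambda, and the use of simple quantiles for the interval are illustrative assumptions; it does not reproduce the article's predictive-likelihood or MSE-analysis methods, nor the specific modification of stochastic simulation it proposes.

```python
# Plug-in vs simulation-based prediction in a regression with a Box-Cox
# transformed response (illustration under assumed data and model).
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(1, 10, n)
y = np.exp(0.5 + 0.3 * x + rng.normal(scale=0.3, size=n))   # log-normal response
X = np.column_stack([np.ones(n), x])

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if abs(lam) < 1e-8 else (lam * z + 1) ** (1 / lam)

# profile the Box-Cox log-likelihood over a grid of lambda values
def profile_loglik(lam):
    z = boxcox(y, lam)
    b = np.linalg.lstsq(X, z, rcond=None)[0]
    rss = np.sum((z - X @ b) ** 2)
    return -0.5 * n * np.log(rss / n) + (lam - 1) * np.sum(np.log(y))

grid = np.linspace(-1, 1, 81)
lam = grid[np.argmax([profile_loglik(g) for g in grid])]

z = boxcox(y, lam)
b = np.linalg.lstsq(X, z, rcond=None)[0]
s = np.sqrt(np.sum((z - X @ b) ** 2) / (n - 2))

x_new = np.array([1.0, 5.0])
m = x_new @ b

# plug-in point prediction: back-transform the fitted value on the z-scale
plug_in = inv_boxcox(m, lam)

# simulation-based prediction: add simulated errors on the z-scale, back-transform
draws = inv_boxcox(m + rng.normal(scale=s, size=20000), lam)
print("plug-in:", plug_in, " simulated mean:", draws.mean(),
      " 95% interval:", np.quantile(draws, [0.025, 0.975]))
```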
65.
S. Kalke 《Journal of Statistical Computation and Simulation》2013,83(4):641-667
In this paper, we introduce the p-generalized polar methods for simulating the p-generalized Gaussian distribution. On the basis of geometric measure representations, the well-known Box–Muller method and the Marsaglia–Bray rejecting polar method for simulating the Gaussian distribution are generalized to the p-generalized Gaussian distribution, which fits data much more flexibly than the Gaussian distribution and has already been applied in various fields of modern science. To prove the correctness of the p-generalized polar methods, we give stochastic representations, and to demonstrate their adequacy, we compare six simulation techniques with respect to goodness of fit and complexity. The competing methods include adapted general methods and another special method. Furthermore, we prove stochastic representations for all the adapted methods.
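For reference, the sketch below samples from the p-generalized Gaussian density f_p(x) proportional to exp(-|x|^p / p) via its gamma representation |X| = (p*G)^(1/p), G ~ Gamma(1/p), with a random sign. This is one of the general competing approaches, not the p-generalized polar methods proposed in the paper; the goodness-of-fit check via scipy's gennorm rescaling is an added illustration.

```python
# Sampling the p-generalized Gaussian via its gamma representation, plus a
# quick goodness-of-fit check (not the paper's polar methods).
import numpy as np
from scipy import stats

def rpgen_gaussian(p, size, rng):
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * (p * g) ** (1.0 / p)

rng = np.random.default_rng(4)
p = 3.0
x = rpgen_gaussian(p, 100_000, rng)

# sanity checks: E|X|^p = 1 under this parametrisation, and the sample should
# match scipy's gennorm(p) after rescaling by p**(1/p)
print("mean |x|^p:", np.mean(np.abs(x) ** p))
print(stats.kstest(x, stats.gennorm(p, scale=p ** (1.0 / p)).cdf))
```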
66.
A Monte Carlo simulation was conducted to compare the type I error rate and power of the analysis of means (ANOM) test with those of the one-way analysis of variance F-test (ANOVA-F). The simulation results showed that, as long as the homogeneity-of-variance assumption was satisfied, both tests displayed similar type I error rates regardless of the shape of the distribution, the number of groups, and the combination of sample sizes. However, both tests were negatively affected by heterogeneity of variances, and this became more pronounced as the variance ratios increased. The power of both tests varied with the effect size (Δ), the variance ratio, and the sample size combination. As long as the variances are homogeneous, ANOVA-F and the ANOM test have similar power except in unbalanced cases; under unbalanced conditions, ANOVA-F was observed to be more powerful than the ANOM test. On the other hand, increasing the total number of observations caused the power values of the two tests to approach each other. The relation between the effect size (Δ) and the variance ratios affected power, especially when the sample sizes were unequal. ANOVA-F was superior under some of the experimental conditions considered, and the ANOM test under others. In general, when the populations with larger means also had larger variances, the ANOM test was superior; when the populations with larger means had smaller variances, ANOVA-F was generally superior. This pattern became clearer when the number of groups was 4 or 5.
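A minimal version of such a size comparison is sketched below. The ANOM decision limits here use a Bonferroni approximation t_{1-alpha/(2k)} in place of the exact ANOM critical value h(alpha; k, df), so they are a conservative stand-in for the procedure studied in the article; the group sizes and normal data are assumptions.

```python
# Monte Carlo comparison of empirical type I error rates of the one-way
# ANOVA F-test and a Bonferroni-approximated ANOM test under H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
k, n, alpha, n_sim = 4, 10, 0.05, 5000
df = k * (n - 1)
h_bonf = stats.t.ppf(1 - alpha / (2 * k), df)      # Bonferroni critical value

rej_f = rej_anom = 0
for _ in range(n_sim):
    y = rng.normal(size=(k, n))                    # all means equal: H0 true
    # ANOVA F-test
    if stats.f_oneway(*y)[1] < alpha:
        rej_f += 1
    # ANOM: compare each group mean with the grand mean
    grand = y.mean()
    s = np.sqrt(y.var(axis=1, ddof=1).mean())      # pooled SD (balanced case)
    halfwidth = h_bonf * s * np.sqrt((k - 1) / (k * n))
    if np.any(np.abs(y.mean(axis=1) - grand) > halfwidth):
        rej_anom += 1

print("type I error, ANOVA-F:", rej_f / n_sim, " ANOM (Bonferroni):", rej_anom / n_sim)
```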
67.
Ghazi Shukur 《统计学通讯:模拟与计算》2013,42(2):419-448
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch-Godfrey test for autocorrelated errors are studied in situations where the error terms follow either normal or non-normal distributions and are generated by either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy. Performance improves (approaching the case where the errors are generated by the normal distribution) when the errors are less heavy-tailed.
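The single-equation Breusch-Godfrey LM test is sketched below with a small size check under normal and heavy-tailed errors, using statsmodels' acorr_breusch_godfrey. The systemwise generalisation studied in the article is not reproduced; the design and the t(3) error choice are assumptions.

```python
# Size of the single-equation Breusch-Godfrey LM test under normal and
# heavy-tailed errors (illustration; not the systemwise version).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(6)
n, n_sim, alpha = 100, 2000, 0.05
X = sm.add_constant(rng.normal(size=(n, 2)))

def empirical_size(error_draw):
    rej = 0
    for _ in range(n_sim):
        y = X @ np.array([1.0, 0.5, -0.5]) + error_draw()
        res = sm.OLS(y, X).fit()
        lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=1)
        rej += lm_pval < alpha
    return rej / n_sim

print("size, normal errors:", empirical_size(lambda: rng.normal(size=n)))
print("size, t(3) errors:  ", empirical_size(lambda: rng.standard_t(3, size=n)))
```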
68.
In this paper we consider the issue of constructing retrospective T² control chart limits so as to control the overall probability of a false alarm at a specified value. We describe an exact method for constructing the control limits for retrospective examination. We then consider Bonferroni adjustments to Alt's control limit and to the standard χ² control limit as alternatives to the exact limit, since the exact limit is computationally cumbersome to find. We present the results of simulation experiments carried out to compare the performance of these control limits. The results indicate that the Bonferroni-adjusted Alt's control limit performs better than the Bonferroni-adjusted χ² control limit. Furthermore, the Bonferroni-adjusted Alt's control limit appears to be more than adequate for controlling the overall false alarm probability at a specified value.
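The sketch below computes a retrospective (Phase I) T² chart for m individual p-variate observations and compares a Bonferroni-adjusted limit based on the exact marginal Beta distribution of T² with a Bonferroni-adjusted chi-square limit. Treating the Beta-based limit as the analogue of Alt's limit is an assumption for illustration; the exact joint limit of the article is not computed here.

```python
# Retrospective T^2 statistics with Bonferroni-adjusted Beta-based and
# chi-square control limits (illustration under simulated in-control data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
m, p, alpha = 50, 3, 0.05
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=m)   # historical data

xbar = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = X - xbar
t2 = np.einsum("ij,jk,ik->i", d, S_inv, d)                    # T^2_i statistics

# marginal Phase I distribution for individual observations:
#   T^2_i ~ ((m-1)^2 / m) * Beta(p/2, (m-p-1)/2)
a = alpha / m                                                 # Bonferroni adjustment
beta_limit = (m - 1) ** 2 / m * stats.beta.ppf(1 - a, p / 2, (m - p - 1) / 2)
chi2_limit = stats.chi2.ppf(1 - a, p)

print("Beta-based limit:", beta_limit, " chi-square limit:", chi2_limit)
print("signals (Beta):", np.sum(t2 > beta_limit), " signals (chi2):", np.sum(t2 > chi2_limit))
```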
69.
Selecting predictors to optimize outcome prediction is an important statistical problem, but the selection usually ignores false positives among the selected predictors. In this article, we advocate a conventional stepwise forward variable selection method based on the predicted residual sum of squares, and develop a positive false discovery rate (pFDR) estimate for the selected predictor subset, together with a local pFDR estimate to prioritize the selected predictors. The pFDR estimate takes account of the existence of non-null predictors and is proved to be asymptotically conservative. In addition, we propose two views of a variable selection process: an overall test and an individual test. An interesting feature of the overall test is that its power to select non-null predictors increases with the proportion of non-null predictors among all candidate predictors. The data analysis is illustrated with an example in which genetic and clinical predictors were selected to predict the cholesterol level change after four months of tamoxifen treatment, and the pFDR was estimated. The method's performance is evaluated through statistical simulations.
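The forward-selection component is sketched below, driven by the predicted residual sum of squares (PRESS) computed with the leave-one-out shortcut PRESS = sum((e_i / (1 - h_ii))^2). The pFDR and local pFDR estimates that the article attaches to the selected subset are not reproduced; the simulated data with a few non-null predictors are an assumption.

```python
# Forward stepwise selection that minimises PRESS at each step (illustration).
import numpy as np

rng = np.random.default_rng(8)
n, p = 120, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(size=n)

def press(cols):
    # leave-one-out prediction error via the hat-matrix shortcut
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    e = y - H @ y
    return np.sum((e / (1 - np.diag(H))) ** 2)

selected, remaining = [], list(range(p))
current = press(selected)
while remaining:
    scores = {j: press(selected + [j]) for j in remaining}
    best = min(scores, key=scores.get)
    if scores[best] >= current:          # stop when PRESS no longer improves
        break
    selected.append(best)
    remaining.remove(best)
    current = scores[best]

print("selected predictors:", selected, " final PRESS:", current)
```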
70.
When incomplete repeated failure times are collected from a large number of independent individuals, interest focuses primarily on consistent and efficient estimation of the effects of the associated covariates on the failure times. Since repeated failure times are likely to be correlated, it is important to exploit the correlation structure of the failure data in order to obtain such estimates. However, it may be difficult to specify an appropriate correlation structure for a real-life data set. We propose a robust correlation structure that can be used irrespective of the true correlation structure. This structure is used to construct an estimating equation for the hazard ratio parameter, under the assumption that the number of repeated failure times for an individual is random. The consistency and efficiency of the estimates are examined through a simulation study in which the failure times marginally follow an exponential distribution and the random number of repeated failure times is assumed to follow a Poisson distribution. We conclude by using the proposed method to analyze a bladder cancer dataset.
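A rough marginal analysis of this kind is sketched below: repeated exponential failure times correlated through a shared gamma frailty are fitted with a GEE using an exchangeable working correlation, a Gamma family, and a log link. These choices are stand-ins for illustration; the robust correlation structure and the estimating equation for the hazard ratio proposed in the article, and the bladder cancer data, are not reproduced.

```python
# GEE-type marginal analysis of correlated repeated exponential failure times
# (illustration with simulated frailty data, not the article's method or data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_subj, max_rep, beta = 200, 4, 0.7

rows = []
for i in range(n_subj):
    k = rng.integers(1, max_rep + 1)            # random number of repeated times
    frailty = rng.gamma(shape=2.0, scale=0.5)   # shared frailty -> correlation
    for _ in range(k):
        x = rng.binomial(1, 0.5)                # binary covariate (e.g. treatment)
        rate = frailty * np.exp(beta * x)
        rows.append({"id": i, "x": x, "time": rng.exponential(1.0 / rate)})
data = pd.DataFrame(rows)

model = sm.GEE.from_formula(
    "time ~ x", groups="id", data=data,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    cov_struct=sm.cov_struct.Exchangeable(),
)
fit = model.fit()
# under the exponential model E[T | x] is proportional to exp(-beta * x), so the
# coefficient of x estimates minus the log hazard ratio
print(fit.params)
print("estimated hazard ratio for x:", np.exp(-fit.params["x"]))
```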