721.
We develop a variance reduction method for the seemingly unrelated (SUR) kernel estimator of Wang (2003). We show that the quadratic interpolation method introduced in Cheng et al. (2007) works for the SUR kernel estimator. For a given point of estimation, Cheng et al. (2007) define a variance-reduced local linear estimate as a linear combination of classical estimates at three nearby points. We develop an analogous variance reduction method for SUR kernel estimators in clustered/longitudinal models and perform simulation studies that demonstrate the efficacy of our variance reduction method in finite-sample settings.
722.
Calibration techniques in survey sampling, such as generalized regression estimation (GREG), were formalized in the 1990s to produce efficient estimators of linear combinations of study variables, such as totals or means. They implicitly rely on the assumption of a linear regression model between the variable of interest and some auxiliary variables, yielding estimates with lower variance when the model holds while remaining approximately design-unbiased even when it does not. We propose a new class of model-assisted estimators obtained by releasing a few calibration constraints and replacing them with a penalty term added to the distance criterion being minimized. By introducing the concept of penalized calibration, which combines usual calibration with this 'relaxed' calibration, we are able to adjust the weight given to the available auxiliary information. We obtain a more flexible estimation procedure that gives better estimates, particularly when the auxiliary information is overly abundant or not entirely appropriate for full use. Such an approach can also be seen as a design-based alternative to estimation procedures based on the more general class of mixed models, opening new prospects in some areas of application such as inference on small domains.
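As a rough illustration of the relaxation idea (not the paper's exact estimator, which mixes exact and penalized constraints), one can replace all calibration constraints X'w = t by a quadratic penalty on the calibration residual, governed by a tuning parameter `lam`; `lam = 0` returns the design weights and `lam → ∞` recovers exact calibration:

```python
import numpy as np

def penalized_calibration_weights(d, X, totals, lam):
    """Weights minimizing the chi-square distance to the design weights d
    plus lam * ||X'w - totals||^2 (all constraints penalized).

    Setting the gradient to zero gives the closed form
        w = (D^{-1} + lam X X')^{-1} (1_n + lam X totals),
    where D = diag(d).  Illustrative sketch only.
    """
    n = len(d)
    A = np.diag(1.0 / d) + lam * X @ X.T
    b = np.ones(n) + lam * X @ totals
    return np.linalg.solve(A, b)
```

With a large `lam` the weighted auxiliary totals X'w come arbitrarily close to the known population totals, while a small `lam` keeps the weights near the design weights d.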
723.
The necessary and sufficient conditions for the inadmissibility of the ridge regression estimator are discussed under two different criteria, namely average loss and Pitman nearness. Although the two criteria are very different, the same conclusions are obtained under both. The loss functions considered in this article are the likelihood loss function and the Mahalanobis loss function. The two loss functions are motivated from the point of view of classification of two normal populations. Under the Mahalanobis loss it is demonstrated that the ridge regression estimator is always inadmissible as long as the errors are assumed to be symmetrically distributed about the origin.
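For reference, the estimator under discussion is the standard ridge shrinkage estimator β̂(k) = (X'X + kI)⁻¹X'y; a minimal sketch (the admissibility analysis itself is not reproduced here):

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ridge regression: beta_hat(k) = (X'X + k I)^{-1} X'y.

    k = 0 recovers ordinary least squares; k > 0 shrinks the
    coefficient vector toward zero.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```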
724.
马琼 《榆林高等专科学校学报》2007,17(5):56-59,63
Starting from the notions of product, product liability and its defenses, the burden-of-proof rules in product liability litigation, and compensation in product liability cases, this article compares the laws of the United States, the European Union, and China, analyzes the defects and shortcomings of China's product liability legislation, and offers several suggestions on how to improve it.
725.
726.
Mette Langaas Bo Henry Lindqvist Egil Ferkingstad 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2005,67(4):555-572
Summary. We consider the problem of estimating the proportion of true null hypotheses, π0, in a multiple-hypothesis set-up. The tests are based on observed p-values. We first review published estimators based on the estimator that was suggested by Schweder and Spjøtvoll. Then we derive new estimators based on nonparametric maximum likelihood estimation of the p-value density, restricting to decreasing and convex decreasing densities. The estimators of π0 are all derived under the assumption of independent test statistics. Their performance under dependence is investigated in a simulation study. We find that the estimators are relatively robust with respect to the assumption of independence and work well also for test statistics with moderate dependence.
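The Schweder–Spjøtvoll estimator that these refinements start from has a one-line form: since null p-values are Uniform(0, 1), the fraction of p-values exceeding a threshold λ, rescaled by 1/(1 − λ), estimates π0. A minimal sketch (λ = 0.5 is an arbitrary illustrative choice):

```python
import numpy as np

def pi0_schweder_spjotvoll(pvalues, lam=0.5):
    """Estimate the proportion of true null hypotheses pi0.

    Null p-values are Uniform(0, 1), so about pi0 * n * (1 - lam)
    p-values should exceed lam; rescaling the observed fraction
    above lam by 1 / (1 - lam) estimates pi0.
    """
    pvalues = np.asarray(pvalues)
    return np.mean(pvalues > lam) / (1.0 - lam)
```

On a mixture of 80% uniform null p-values and 20% small p-values from alternatives, the estimate should land close to the true π0 = 0.8.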
727.
We implement profile empirical likelihood-based inference for censored median regression models. Inference for any specified subvector is carried out by profiling out the nuisance parameters from the “plug-in” empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi-squared distributions. Unlike the full empirical likelihood, however, the derived asymptotic distribution has an intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul’s minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set.
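The bootstrap step can be sketched generically: resample the data with replacement, recompute the statistic, and take the (1 − α) quantile of the resampled values as the critical value. This is a plain percentile-bootstrap sketch with a placeholder statistic, not the paper's profile empirical likelihood resampling scheme:

```python
import numpy as np

def bootstrap_critical_value(statistic, data, alpha=0.05, B=2000, seed=0):
    """Approximate the (1 - alpha) critical value of `statistic` by the
    nonparametric bootstrap, for use when the limiting distribution is
    intractable.  `statistic` maps a 1-D resample to a scalar.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    boot = np.array([statistic(data[rng.integers(0, n, size=n)])
                     for _ in range(B)])
    return np.quantile(boot, 1.0 - alpha)
```

In the paper's setting, `statistic` would be replaced by the profile empirical likelihood ratio evaluated on the resampled observations.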
728.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non‐parametric covariate effects is investigated, when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method based on the Stute estimate (Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45, 89–103]) can achieve a sparse representation of the functional decomposition, by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample size performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data is analyzed for demonstration.
729.
Liqun Wang 《Revue canadienne de statistique》2007,35(2):233-248
Mixed effects models and Berkson measurement error models are widely used. They share features which the author uses to develop a unified estimation framework. He deals with models in which the random effects (or measurement errors) have a general parametric distribution, whereas the random regression coefficients (or unobserved predictor variables) and error terms have nonparametric distributions. He proposes a second-order least squares estimator and a simulation-based estimator based on the first two moments of the conditional response variable given the observed covariates. He shows that both estimators are consistent and asymptotically normally distributed under fairly general conditions. The author also reports Monte Carlo simulation studies showing that the proposed estimators perform satisfactorily for relatively small sample sizes. Compared to the likelihood approach, the proposed methods are computationally feasible and do not rely on the normality assumption for random effects or other variables in the model.
730.