991.
Bayesian estimates of the parameters of a finite mixture of the Burr type XII distribution and its reciprocal are obtained based on type-I censored data. The Bayes estimators are computed under squared-error and LINEX loss functions using a Markov chain Monte Carlo algorithm. Based on Monte Carlo simulation, the Bayes estimators are compared with the corresponding maximum-likelihood estimators.
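Once posterior draws are available, both estimators mentioned above have simple forms: the posterior mean under squared-error loss, and a closed-form transform under LINEX loss. A minimal sketch, assuming a generic set of MCMC draws (a gamma sample stands in for the posterior here; the Burr XII mixture model itself is not implemented):

```python
import numpy as np

def bayes_estimates(draws, a=1.0):
    """Bayes estimators from a set of posterior draws.

    Squared-error loss -> posterior mean.
    LINEX loss L(d) = exp(a*d) - a*d - 1
                   -> -(1/a) * log E[exp(-a*theta)].
    """
    draws = np.asarray(draws, dtype=float)
    se_est = draws.mean()
    linex_est = -np.log(np.mean(np.exp(-a * draws))) / a
    return se_est, linex_est

# A gamma sample stands in for MCMC output from the actual posterior.
rng = np.random.default_rng(0)
draws = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # true mean = 3.0
se_est, linex_est = bayes_estimates(draws, a=0.5)
```

For a > 0 the LINEX loss penalizes overestimation more heavily, so by Jensen's inequality the LINEX estimate always falls below the posterior mean.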
992.
ABSTRACT

When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussion both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and other model assumptions. Although this is conceptually different from the probability of the null hypothesis being true given the sample, p-values can nonetheless provide evidential information toward making an inference about a parameter. Applying a 10,000-case simulation described in this article, the authors found that p-values' inferential signals to either reject or not reject a null hypothesis about the mean (α = 0.05) were consistent with the parameter's true location in the sampled-from population for almost 70% of the cases. Success increases if a hybrid decision criterion, minimum effect size plus p-value (MESP), is used. Here, rejecting the null also requires the difference of the observed statistic from the exact null to be meaningfully large or practically significant in the researcher's judgment and experience. The simulation compares the performance of several methods, from p-value and/or effect-size based to confidence-interval based, under various conditions of true location of the mean, test power, and comparative sizes of the meaningful distance and population variability. For any inference procedure that outputs a binary indicator, like flagging whether a p-value is significant, the output of a single experiment is not sufficient evidence for a definitive conclusion. Yet if a tool like MESP generates a relatively reliable signal and is used knowledgeably as part of a research process, it can provide useful information.
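The MESP rule combines a significance threshold with a minimum practically meaningful effect. A small sketch, assuming a one-sample t test and an illustrative minimum effect of 0.5 (both choices are ours, not the authors'):

```python
import numpy as np
from scipy import stats

def mesp_reject(sample, mu0, alpha=0.05, min_effect=0.5):
    """MESP rule: reject H0 only if the p-value is significant AND the
    observed difference from mu0 is at least the minimum meaningful size."""
    _, p = stats.ttest_1samp(sample, mu0)
    return (p < alpha) and (abs(sample.mean() - mu0) >= min_effect)

rng = np.random.default_rng(1)
# The true mean sits exactly at the null, so every rejection is a wrong signal.
false_signals = sum(
    mesp_reject(rng.normal(loc=0.0, scale=1.0, size=30), mu0=0.0)
    for _ in range(2000)
)
rate = false_signals / 2000
```

Because MESP also demands a practically large observed difference, its false-signal rate under a true null stays below the plain α of the t test alone.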
993.
This paper examines the factors determining variation in spatial rates of overeducation. A quantile regression model is estimated on a sample of region-year data drawn from the EU Survey on Income and Living Conditions (EU-SILC), together with several institutional and macroeconomic features taken from other data-sets. Potential determinants of overeducation rates include labour market risk, financial aid to university students, excess labour demand, and institutional factors. We find significant effects both for labour market structural imbalances and for institutional factors. The research supports the findings of micro-based studies showing that overeducation is consistent with an assignment interpretation of the labour market.
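Quantile regression, the method used above, minimizes the pinball (check) loss rather than squared error. A toy sketch on synthetic data (the EU-SILC variables are not reproduced; the linear model, quantile, and coefficients are illustrative):

```python
import numpy as np
from scipy import optimize

def pinball_loss(beta, X, y, q):
    """Quantile-regression (pinball) loss at quantile q."""
    r = y - X @ beta
    return np.sum(np.maximum(q * r, (q - 1) * r))

rng = np.random.default_rng(6)
n = 400
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)       # true intercept 1.0, slope 0.5
X = np.column_stack([np.ones(n), x])

# Median (q = 0.5) regression via direct minimization of the pinball loss.
res = optimize.minimize(pinball_loss, x0=np.zeros(2), args=(X, y, 0.5),
                        method="Nelder-Mead",
                        options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 2000})
intercept, slope = res.x
```

With symmetric noise the q = 0.5 fit recovers the same line as least squares; other quantiles trace out the conditional distribution, which is what makes the method useful for rate-of-overeducation tails.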
994.
This article introduces a novel nonparametric penalized likelihood hazard estimator for the case where the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and asymptotic properties of the proposed hazard estimator are derived. Simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring when the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.
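Dependence between failure and censoring times can be induced with a copula. A sketch using a Clayton copula (sampled by conditional inversion) with exponential margins; the copula family, parameter, and margins are illustrative choices, not the paper's specification:

```python
import numpy as np
from scipy import stats

def clayton_sample(n, theta, rng):
    """Draw (u, v) pairs from a Clayton copula via conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

rng = np.random.default_rng(4)
theta = 2.0                     # Kendall's tau = theta / (theta + 2) = 0.5
u, v = clayton_sample(5000, theta, rng)

# Dependent failure and censoring times via unit-exponential margins.
t_fail = -np.log(1 - u)
t_cens = -np.log(1 - v)
observed = np.minimum(t_fail, t_cens)   # what the analyst actually sees
tau, _ = stats.kendalltau(u, v)
```

The empirical Kendall's tau of the pairs recovers the theoretical value θ/(θ+2), which is how a sensitivity analysis over dependence strength can be parameterized.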
995.
The inverse Gaussian distribution has been widely used as a model for analysing lifetime data. In this regard, estimation of the parameters of the two-parameter (IG2) and three-parameter (IG3) inverse Gaussian distributions based on complete and censored samples has been discussed in the literature. In this paper, we develop estimation methods based on progressively Type-II censored samples from the IG3 distribution. In particular, we use the EM algorithm, as well as some other numerical methods, to determine the maximum-likelihood estimates (MLEs) of the parameters. The asymptotic variances and covariances of the MLEs from the EM algorithm are derived using the missing-information principle. We also consider some simplified alternative estimators. The inferential methods developed are then illustrated with numerical examples. We also discuss interval estimation of the parameters based on large-sample theory and examine the true coverage probabilities of these confidence intervals for small samples by means of Monte Carlo simulations.
996.
Most multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are often assumed to be negligible when variance–covariance matrices and sample sizes are equal; in practice, therefore, investigators usually do not attempt to assess multivariate normality. In this simulation study, the effects of skewed and leptokurtic multivariate data on the Type I error and power of Hotelling's T² were examined by manipulating distribution, sample size, and variance–covariance matrix. The empirical Type I error rate and power of Hotelling's T² were calculated before and after the application of a generalized Box–Cox transformation. The findings demonstrate that even when variance–covariance matrices and sample sizes are equal, small to moderate changes in power can still be observed.
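The one-sample Hotelling's T² statistic has a simple closed form and an exact F reference distribution under multivariate normality. A minimal sketch (the Box–Cox transformation step and the non-normal data conditions are omitted; the bivariate normal example is illustrative):

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """One-sample Hotelling's T^2 test of H0: mean vector == mu0.
    Returns T^2, the equivalent F statistic, and the p-value."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    diff = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)              # sample covariance (n-1 denominator)
    t2 = n * diff @ np.linalg.solve(S, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2    # T^2 -> F(p, n - p) under H0
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, f_stat, p_value

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mean=[0, 0], cov=np.eye(2), size=40)
t2, f_stat, p_value = hotelling_t2(X, mu0=np.zeros(2))
```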
997.
The independence assumption in statistical significance testing becomes increasingly crucial and unforgiving as sample size increases. Seemingly inconsequential violations of this assumption can substantially increase the probability of a Type I error if sample sizes are large. In the case of Student's t test, correlations within samples in the range 0.01 to 0.05 can lead to rejection of a true null hypothesis with high probability when N is 50, 100, or larger.
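The inflation described above is easy to reproduce: give each sample an equicorrelation ρ through a shared common factor and track how often the one-sample t test rejects a true null. A sketch (sample size and ρ chosen to mirror the ranges quoted above; the trial count is ours):

```python
import numpy as np
from scipy import stats

def rejection_rate(n, rho, trials=4000, alpha=0.05, seed=3):
    """Type I error rate of the one-sample t test when observations
    within each sample share an equicorrelation rho (true mean is 0)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        common = rng.standard_normal()       # component shared by the sample
        x = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n)
        _, p = stats.ttest_1samp(x, 0.0)
        rejections += p < alpha
    return rejections / trials

r_indep = rejection_rate(100, 0.0)    # near the nominal 0.05
r_corr = rejection_rate(100, 0.05)    # substantially inflated
```

With ρ = 0.05 and N = 100 the sample mean's true variance is roughly six times what the t test assumes, so the empirical Type I error rate climbs far above the nominal level.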
998.
In this article, an extensive Monte Carlo simulation study is conducted to evaluate and compare nonparametric multiple comparison tests under violations of the classical analysis-of-variance assumptions. The simulation space of the Monte Carlo study comprises 288 combinations of balanced and unbalanced sample sizes, number of groups, treatment effects, levels of heterogeneity of variances, dependence between subgroup levels, and skewed error distributions under a single-factor experimental design. Across this large simulation space, we present a detailed analysis of the effects of assumption violations on the performance of nonparametric multiple comparison tests in terms of three error and four power measures. The observations of this study help in choosing the optimal nonparametric test for the requirements and conditions of an experiment. When some assumptions of analysis of variance are violated and the number of groups is small, the stepwise Steel–Dwass procedure with Holm's approach is appropriate for controlling Type I error at a desired level. Dunn's method should be employed for a greater number of groups. When subgroups are unbalanced and the number of groups is small, Nemenyi's procedure with Duncan's approach produces high power values. Conover's procedure provides high power with a small number of unbalanced groups or with a greater number of balanced or unbalanced groups, but it is unable to control Type I error rates.
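Dunn's method, recommended above for larger numbers of groups, compares mean pooled ranks pairwise with a normal approximation. A bare-bones sketch without tie correction or multiplicity adjustment (both would be needed in practice; the example data are illustrative):

```python
import numpy as np
from scipy import stats

def dunn_pairwise(groups):
    """Dunn's pairwise z tests on pooled ranks (no tie correction).
    Returns a dict mapping group-index pairs to two-sided p-values."""
    data = np.concatenate(groups)
    ranks = stats.rankdata(data)
    n_total = len(data)
    idx, means, sizes = 0, [], []
    for g in groups:                              # mean pooled rank per group
        means.append(ranks[idx:idx + len(g)].mean())
        sizes.append(len(g))
        idx += len(g)
    var_term = n_total * (n_total + 1) / 12.0
    pvals = {}
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            z = (means[i] - means[j]) / np.sqrt(
                var_term * (1 / sizes[i] + 1 / sizes[j]))
            pvals[(i, j)] = 2 * stats.norm.sf(abs(z))
    return pvals

rng = np.random.default_rng(5)
groups = [rng.normal(0, 1, 25), rng.normal(0, 1, 25), rng.normal(3, 1, 25)]
pvals = dunn_pairwise(groups)
```

In a real analysis the pairwise p-values would then be fed through a step-down adjustment such as Holm's, as the abstract discusses.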
999.
Ákos Farkas, Slavonica, 2017, 22(1–2): 39–53
Hungary's leading women writers, from Margit Kaffka to Piroska Szenes, took passionate sides when compelled to choose between the ‘maternal’ protection of life and the patriot's loyalty to the ‘fatherland’. ‘If only … this whole wartime world was turned upside down!’ and ‘No peace on earth before the last bit of soil … is regained!’ were characteristic wishes made by women writers on either side of the pacifist-versus-patriot divide. The article seeks to answer what, beyond background and temperament, motivated the emblematic female figures of Hungary's interwar literature to come out in favour of peace, country, or both.
1000.
We point out and comment on the confusions, deficiencies and errors of Wang [Life prediction under random censorship, J. Stat. Comput. Simul. 78 (2008), pp. 1033–1044].