991.
Many exploratory experiments, such as DNA microarray or brain imaging studies, require the simultaneous comparison of hundreds or thousands of hypotheses. In such settings, using the false discovery rate (FDR) as the overall Type I error rate is recommended (Benjamini and Hochberg in J. R. Stat. Soc. B 57:289–300, 1995). Many FDR-controlling procedures have been proposed. When evaluating their performance, however, researchers often focus only on the ability to control the FDR and to achieve high power. Under multiple hypotheses one may also commit a false non-discovery, that is, fail to reject a false null hypothesis. In addition, experimental parameters such as the number of hypotheses, the proportion of true null hypotheses, the sample size, and the correlation structure may affect the performance of FDR-controlling procedures. The purpose of this paper is to illustrate the performance of some existing FDR-controlling procedures in terms of four indices: the FDR, the false non-discovery rate, the sensitivity, and the specificity. Analytical results for these indices are derived, and simulations evaluate the procedures under various experimental parameters. The results can serve as guidance for practitioners in choosing an FDR-controlling procedure.
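As a concrete illustration of the four indices, here is a minimal simulation sketch (not the paper's own code): the Benjamini-Hochberg step-up procedure is applied to simulated one-sided z-tests, and the empirical FDR, false non-discovery rate, sensitivity, and specificity of a single run are computed. All parameter values (m, pi0, delta, q) are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: m hypotheses, a fraction pi0 of true nulls,
# alternatives with mean shift delta, BH level q.
m, pi0, delta, q = 2000, 0.8, 3.0, 0.10
is_null = rng.random(m) < pi0
z = rng.normal(size=m) + np.where(is_null, 0.0, delta)
pvals = np.array([0.5 * math.erfc(zi / math.sqrt(2.0)) for zi in z])  # one-sided

def benjamini_hochberg(p, q):
    """Boolean rejection mask of the BH step-up procedure at level q."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

reject = benjamini_hochberg(pvals, q)

# Empirical versions of the four indices, for this single run.
fdr = np.sum(reject & is_null) / max(np.sum(reject), 1)
fnr = np.sum(~reject & ~is_null) / max(np.sum(~reject), 1)
sensitivity = np.sum(reject & ~is_null) / np.sum(~is_null)
specificity = np.sum(~reject & is_null) / np.sum(is_null)
print(fdr, fnr, sensitivity, specificity)
```

Averaging these quantities over many replications would approximate the population-level indices the paper studies.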
992.
We introduce a general Monte Carlo method based on Nested Sampling (NS) for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying ~e^{-1} times the enclosed prior mass of the previous one. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. We illustrate the new method on a test problem and find that it can achieve four times the accuracy of classic MCMC-based Nested Sampling for the same computational effort, equivalent to a factor-of-16 speedup. An additional benefit is that more samples, and a more accurate evidence value, can be obtained simply by continuing the run for longer, as in standard MCMC.
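A toy sketch of the NS evidence computation (assumptions: a Uniform(0,1) prior, a Gaussian likelihood with known analytic evidence, deterministic prior-mass shrinkage X_i = e^{-i/N}, and plain rejection sampling standing in for the MCMC exploration described above):

```python
import math
import random

random.seed(1)

S = 0.05                                  # likelihood width; evidence is analytic
def loglike(x):
    return -(x - 0.5) ** 2 / (2.0 * S * S)

N_LIVE, N_ITER = 50, 300
live = [random.random() for _ in range(N_LIVE)]   # draws from the Uniform(0,1) prior
logl = [loglike(x) for x in live]

Z, x_prev = 0.0, 1.0
for i in range(1, N_ITER + 1):
    worst = min(range(N_LIVE), key=lambda j: logl[j])
    l_star = logl[worst]
    x_mass = math.exp(-i / N_LIVE)        # enclosed prior mass shrinks by ~e^{-1/N} per step
    Z += math.exp(l_star) * (x_prev - x_mass)
    x_prev = x_mass
    # Replace the worst particle with a fresh prior draw satisfying L > L*.
    while True:
        cand = random.random()
        if loglike(cand) > l_star:
            live[worst], logl[worst] = cand, loglike(cand)
            break

Z += x_prev * sum(math.exp(l) for l in logl) / N_LIVE   # remainder from live points
Z_true = S * math.sqrt(2.0 * math.pi)                   # analytic evidence, ~0.1253
print(Z, Z_true)
```

Rejection sampling is only viable in this one-dimensional toy; for realistic problems the constrained draw is exactly where the MCMC exploration of the paper comes in.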
993.
Suppose θ̂ is an estimator of θ ∈ ℝ that satisfies the central limit theorem. In general, inferences on θ are based on the central limit approximation, which has error O(n^{-1/2}), where n is the sample size. Many unsuccessful attempts have been made to find transformations that reduce this error to O(n^{-1}); the variance-stabilizing transformation fails to achieve this. We give alternative transformations that have bias O(n^{-2}) and skewness O(n^{-3}). Examples include the binomial, Poisson, chi-square and hypergeometric distributions.
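To make the role of the variance-stabilizing transformation concrete, a small simulation (illustrative, not from the paper) checks that for X ~ Poisson(λ) the delta method gives Var(√X) ≈ 1/4 for every λ, so the square root stabilizes the variance even though, as noted above, it does not reduce the error of the normal approximation to O(n^{-1}):

```python
import numpy as np

rng = np.random.default_rng(2)

# Var(X) = lam for X ~ Poisson(lam), but the delta method gives
# Var(sqrt(X)) ~ (1 / (2*sqrt(lam)))**2 * lam = 1/4, independent of lam.
variances = {}
for lam in (5.0, 20.0, 100.0):
    x = rng.poisson(lam, size=200_000)
    variances[lam] = float(np.sqrt(x).var())
print(variances)   # values approach 0.25 as lam grows
```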
994.
Two-sample comparison problems are often encountered in practice and have been widely studied in the literature. Owing to practical demands, research on this topic under special settings, such as a semiparametric framework, has also attracted considerable attention. Zhou and Liang (Biometrika 92:271–282, 2005) proposed an empirical likelihood-based semiparametric inference for the comparison of treatment effects in a two-sample problem with censored data. However, their approach is actually a pseudo-empirical likelihood, and the method may not be fully efficient. In this study, we develop a new empirical likelihood-based inference under a more general framework, using the hazard formulation of censored data for two-sample semiparametric hybrid models. We demonstrate that our empirical likelihood statistic converges to a standard chi-squared distribution under the null hypothesis. We further illustrate the use of the proposed test by testing the ROC curve with censored data, among other applications. The numerical performance of the proposed method is also examined.
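The censored-data empirical likelihood above is too involved for a short sketch, but the Wilks-type chi-squared limit it relies on can be illustrated in the simplest setting: Owen's empirical likelihood for a mean, where -2 log R(μ) converges to χ²₁ under the null. Everything below (data, the tested means) is illustrative:

```python
import math
import random

random.seed(3)

def neg2_log_el_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's EL).
    The Lagrange multiplier solves sum d_i / (1 + lam*d_i) = 0; it is found
    by bisection on the interval where all EL weights stay positive."""
    d = [xi - mu for xi in x]
    dmax, dmin = max(d), min(d)
    if dmax <= 0.0 or dmin >= 0.0:
        return float("inf")                  # mu outside the convex hull of the data
    lo, hi = -1.0 / dmax + 1e-10, -1.0 / dmin - 1e-10
    def g(lam):
        return sum(di / (1.0 + lam * di) for di in d)
    for _ in range(200):                     # g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

x = [random.gauss(0.0, 1.0) for _ in range(100)]
stat_true = neg2_log_el_ratio(x, 0.0)        # ~ chi-square(1) under the null
stat_off = neg2_log_el_ratio(x, 0.5)         # large for a misspecified mean
print(stat_true, stat_off)
```

Comparing stat_true to the χ²₁ quantile gives the usual EL test; the paper's contribution is establishing the same limit for a hazard-based statistic with censoring.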
995.
We construct and investigate robust nonparametric tests for the two-sample location problem. A test based on a suitable scaling of the median of the set of differences between the two samples (the Hodges-Lehmann shift estimator corresponding to the Wilcoxon two-sample rank test) leads to higher robustness against outliers than the Wilcoxon test itself, while preserving its efficiency under a broad range of distributions. The performance of the constructed test is investigated under different distributions and outlier configurations and compared to alternatives such as the two-sample t test, the Wilcoxon test, the median test, and tests based on the difference of the sample medians or of the one-sample Hodges-Lehmann estimators.
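A minimal sketch of the two-sample Hodges-Lehmann shift estimator named above (the scaling that turns it into a test statistic is omitted; the data and outlier values are illustrative):

```python
import random
import statistics

random.seed(3)

def hodges_lehmann_shift(x, y):
    """Median of all pairwise differences y_j - x_i: the two-sample
    Hodges-Lehmann shift estimator tied to the Wilcoxon rank test."""
    return statistics.median(yj - xi for yj in y for xi in x)

# Normal samples shifted by 1, with gross outliers injected into y.
x = [random.gauss(0.0, 1.0) for _ in range(60)]
y = [random.gauss(1.0, 1.0) for _ in range(60)] + [50.0, -40.0, 60.0]

shift_hl = hodges_lehmann_shift(x, y)                  # stays near the true shift 1
shift_mean = statistics.mean(y) - statistics.mean(x)   # badly hit by the outliers
print(shift_hl, shift_mean)
```

The estimator's robustness comes from taking a median over the n·m pairwise differences, so a few extreme pairs barely move it.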
996.
The prevalence of interval-censored data is increasing in medical studies due to the growing use of biomarkers to define disease progression endpoints. Interval censoring results from periodic monitoring of progression status: for example, disease progression is established in the interval between the clinic visit at which progression is recorded and the prior clinic visit at which there was no evidence of progression. A methodology is proposed for estimation and inference on the regression coefficients in the Cox proportional hazards model with interval-censored data. The methodology is based on estimating equations and uses an inverse probability weight to select event-time pairs whose ordering is unambiguous. Simulations examine the finite-sample properties of the estimate, and a colon cancer data set demonstrates its performance relative to the conventional partial likelihood estimate that ignores the interval censoring.
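The visit-based censoring mechanism described above can be sketched as follows (a data-generation illustration only, not the paper's estimating-equation method; the visit schedule and event rate are arbitrary):

```python
import random

random.seed(4)

def interval_censor(event_time, visits):
    """Map an exact progression time to the recorded interval
    (last progression-free visit, first visit with progression]."""
    prev = 0.0
    for v in visits:
        if v >= event_time:
            return prev, v           # progression first seen at visit v
        prev = v
    return prev, float("inf")        # right-censored at the last visit

records = []
for _ in range(200):
    t = random.expovariate(0.3)                                      # true progression time
    visits = [k + random.uniform(-0.3, 0.3) for k in range(1, 11)]   # ~annual clinic visits
    left, right = interval_censor(t, visits)
    records.append((t, left, right))
```

By construction every finite interval brackets its (unobserved) event time, which is exactly the information the estimating equations must work from.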
997.
In many randomized clinical trials, the primary response variable, for example the survival time, is not observed directly when patients enroll in the study but only after some period of time (lag time). Such a response is often missing for some patients due to censoring, which occurs when the study ends before the patient's response is observed or when patients drop out of the study. It is often assumed that censoring occurs at random, referred to as noninformative censoring; in many cases, however, this assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators of the treatment effect parameter that are applicable when the response variable is right censored. Baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than estimators that ignore the auxiliary covariates.
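The paper's semiparametric estimators are involved; as a simpler stand-in, the following sketch shows the basic inverse-probability-of-censoring idea for a censored mean under random (noninformative) censoring, using the known censoring distribution for clarity (in practice it would be estimated, e.g. by Kaplan-Meier). All rates are illustrative:

```python
import math
import random

random.seed(8)

# Event times T ~ Exp(1) (true mean 1), censoring C ~ Exp(0.5); we observe
# X = min(T, C) and delta = 1{T <= C}.  Weighting each observed event by
# 1/G(X), where G(t) = P(C > t), recovers E[T]; the naive mean of X does not.
n = 5000
est_naive, est_ipcw = 0.0, 0.0
for _ in range(n):
    t = random.expovariate(1.0)
    c = random.expovariate(0.5)
    x, delta = min(t, c), t <= c
    est_naive += x
    if delta:
        est_ipcw += x / math.exp(-0.5 * x)   # true G(x); Kaplan-Meier in practice
est_naive /= n
est_ipcw /= n
print(est_naive, est_ipcw)   # naive is biased low; IPCW is close to 1
```

When censoring is informative the weight must instead condition on covariates, which is the harder problem the auxiliary-covariate estimators above address.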
998.
In sample surveys and many other areas of application, the ratio of variables is often of great importance, for example when one variable is available at the population level while another variable of interest is available only for sample data. In this case, the sample ratio often provides valuable information on the variable of interest for the unsampled observations. In many other studies the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller's interval. This is done through data analysis and a small simulation experiment. The Fieller method has often been proposed as superior for small sample sizes. We show through a data example and simulation experiments that Fieller's method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not suffer similarly and can thus be a more reliable method for both small and large samples.
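A sketch of two of the three intervals compared above, on illustrative data: the large-sample (delta-method) interval and Fieller's interval for the ratio of two independent means. The log-based interval is omitted, and the normal quantile 1.96 stands in for the t quantile in both:

```python
import math
import random

random.seed(5)

Z = 1.96

def mean_var(xs):
    """Sample mean and estimated variance of the mean."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v / n

def delta_ci(x, y):
    """Large-sample (delta-method) interval for the ratio of means E[y]/E[x]."""
    mx, vx = mean_var(x)
    my, vy = mean_var(y)
    r = my / mx
    se = math.sqrt((vy + r * r * vx) / (mx * mx))
    return r - Z * se, r + Z * se

def fieller_ci(x, y):
    """Fieller interval: invert (my - t*mx)^2 <= Z^2 (vy + t^2 vx) in t.
    Returns None when the solution set is unbounded (the 'nonsensical' case
    that arises when the denominator mean is noisy relative to its size)."""
    mx, vx = mean_var(x)
    my, vy = mean_var(y)
    a = mx * mx - Z * Z * vx
    b = -2.0 * mx * my
    c = my * my - Z * Z * vy
    disc = b * b - 4.0 * a * c
    if a <= 0.0 or disc < 0.0:
        return None
    return (-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)

x = [random.gauss(10.0, 1.0) for _ in range(40)]
y = [random.gauss(5.0, 1.0) for _ in range(40)]   # true ratio 0.5
print(delta_ci(x, y), fieller_ci(x, y))
```

With a well-separated denominator, as here, the two intervals nearly coincide; the divergence the note documents appears as vx grows relative to mx².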
999.
A prevalence of heavy-tailed, peaked and skewed uncertainty phenomena has been cited in the literature dealing with economic, physics and engineering data. This fact has invigorated the search for continuous distributions of this nature. In this paper we generalize the two-sided framework presented in Kotz and van Dorp (Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications. World Scientific Press, Singapore, 2004) for the construction of families of distributions with bounded support, via a mixture technique utilizing two generating densities instead of one. The family of Elevated Two-Sided Power (ETSP) distributions is studied as an instance of this generalized framework. Through a moment-ratio diagram comparison, we demonstrate that the ETSP family allows remarkable flexibility when modeling heavy-tailed and peaked, yet skewed, uncertainty phenomena. We demonstrate its applicability via an illustrative example utilizing 2008 US income data.
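The ETSP family itself is the paper's construction, but the machinery it generalizes can be sketched: inverse-CDF sampling from the standard Two-Sided Power (TSP) distribution of Kotz and van Dorp, a peaked, possibly skewed density on [0, 1]. The mode theta and power n below are illustrative choices:

```python
import random

random.seed(6)

def tsp_sample(theta, n, rng=random):
    """Inverse-CDF draw from the standard TSP density on [0, 1] with mode
    theta and power n: F(x) = theta*(x/theta)^n for x <= theta, and
    F(x) = 1 - (1-theta)*((1-x)/(1-theta))^n otherwise."""
    u = rng.random()
    if u <= theta:
        return theta * (u / theta) ** (1.0 / n)
    return 1.0 - (1.0 - theta) * ((1.0 - u) / (1.0 - theta)) ** (1.0 / n)

draws = [tsp_sample(0.3, 5.0) for _ in range(50_000)]
sample_mean = sum(draws) / len(draws)
exact_mean = (0.3 * (5.0 - 1.0) + 1.0) / (5.0 + 1.0)   # (theta*(n-1)+1)/(n+1)
print(sample_mean, exact_mean)
```

n = 1 recovers the uniform and n = 2 the triangular distribution; the paper's elevation/mixture step adds a second generating density on top of this base family.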
1000.
This paper extends the comparison of scatter among several groups of observations, usually confined to the homoscedastic and heteroscedastic cases, to data sets lying in an intermediate situation. As is well known, homoscedasticity corresponds to equality in orientation, shape and size of the group scatters. Here our attention focuses on two weaker requirements: scatters with the same orientation but different shape and size, or scatters with the same shape and size but different orientation. We introduce a multiple testing procedure that takes each of these conditions into account. This approach discloses richer information on the underlying data structure than the classical method based only on homo/heteroscedasticity, while allowing a more parsimonious parametrization whenever the patterned model is appropriate for the real data. The new inferential methodology is applied to some well-known data sets from the multivariate literature to show the real gain from this more informative approach. Finally, a wide simulation study illustrates and compares the performance of the proposal on data sets with gradual departures from homoscedasticity.
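The orientation/shape/size decomposition of a group scatter referred to above can be sketched with an eigendecomposition (this illustrates the decomposition only, not the paper's multiple testing procedure; the two simulated groups share orientation but differ in shape and size):

```python
import numpy as np

rng = np.random.default_rng(7)

def scatter_decomposition(X):
    """Split a group's scatter matrix S into orientation (eigenvectors),
    shape (size-normalized eigenvalues), and size (|S|^(1/p) scale)."""
    S = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)           # ascending eigenvalues
    size = np.prod(eigvals) ** (1.0 / len(eigvals))
    shape = eigvals / size                         # unit-"volume" eigenvalues
    return eigvecs, shape, size

# Axis-aligned groups: same orientation, different shape and size.
g1 = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])   # variances ~ (9, 1)
g2 = rng.normal(size=(500, 2)) * np.array([2.0, 1.0])   # variances ~ (4, 1)

vec1, shape1, size1 = scatter_decomposition(g1)
vec2, shape2, size2 = scatter_decomposition(g2)
print(shape1, size1, shape2, size2)
```

Testing equality of each component separately, rather than equality of the full scatter matrices, is what yields the intermediate hypotheses between homo- and heteroscedasticity.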