101.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals of the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies as well as two real examples to assess the performance of our new procedure, and find that the proposed method produces satisfactory results.
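The M-estimation framework underlying this abstract can be sketched with the classic Huber location estimate, fitted by iteratively reweighted least squares. This is only an illustrative sketch of plain M-estimation, not the paper's focused information criterion or model-averaging procedure; the function name `huber_location` and the tuning constant `c = 1.345` are conventional choices, not taken from the paper.

```python
from statistics import median

def huber_location(xs, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location, fitted by iteratively
    reweighted least squares (assumes the data are not all equal)."""
    mu = median(xs)                                    # robust starting value
    scale = median(abs(v - mu) for v in xs) / 0.6745   # MAD scale estimate
    for _ in range(max_iter):
        # Huber weights: 1 inside [-c, c], c / |r| outside
        w = [1.0 if abs((v - mu) / scale) <= c else c / abs((v - mu) / scale)
             for v in xs]
        mu_new = sum(wi * v for wi, v in zip(w, xs)) / sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```

Downweighting large residuals is what limits the influence of heavy-tailed errors or outliers in the response, which is the motivation the abstract starts from.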
102.
We deal with experimental designs minimizing the mean square error of the linear Bayes estimator for the parameter vector of a multiple linear regression model in which the experimental region is the k-dimensional unit sphere. After computing the uniquely determined optimum information matrix, we construct, separately for the homogeneous and the inhomogeneous model, both approximate and exact designs having such an information matrix.
103.
Egmar Rödel, Statistics, 2013, 47(4): 573-585
Normed bivariate density functions were introduced by Hoeffding (1940/41). In the present paper, estimators for normed bivariate density functions are presented, based on ranks and on a Fourier series expansion in Legendre polynomials. The estimation of normed bivariate density functions under positive dependence is also described.
104.
Ronald D. Armstrong, Communications in Statistics - Simulation and Computation, 2013, 42(7): 1057-1073
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed by an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
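The risk-adjusted CUSUM idea this abstract builds on can be sketched for Bernoulli trials with trial-specific null probabilities. This is a generic illustration, not the article's examination-specific scheme (no IRT model, ability estimate, or Markov-chain significance computation); the function name and the odds-multiplier parameterisation of the alternative are assumptions.

```python
import math

def risk_adjusted_cusum(outcomes, p_null, odds_multiplier=2.0):
    """CUSUM over Bernoulli trials with trial-specific null success
    probabilities p_null[t]; the aberrant alternative inflates the
    odds of success by a fixed multiplier."""
    w = 0.0
    path = []
    for y, p0 in zip(outcomes, p_null):
        # alternative success probability under the inflated odds
        p1 = odds_multiplier * p0 / (1.0 - p0 + odds_multiplier * p0)
        # log-likelihood-ratio increment for this trial's outcome
        llr = math.log(p1 / p0) if y == 1 else math.log((1 - p1) / (1 - p0))
        w = max(0.0, w + llr)   # CUSUM resets at zero
        path.append(w)
    return path
```

The statistic accumulates log-likelihood-ratio evidence for the inflated-odds alternative and resets at zero, so sustained runs of unexpectedly frequent successes push it upward while expected behavior keeps it near zero.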
105.
Supersaturated designs can reduce the cost and duration of many industrial experiments, since they allow screening out the important factors from a large set of potentially active variables. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied extensively, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure known as symmetrical uncertainty. This measure comes from information theory and underlies variable selection algorithms developed in data mining. In this work, symmetrical uncertainty is used from another viewpoint, to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data from generalized linear models with a Bernoulli response. We evaluate our method using some existing supersaturated designs, obtained according to methods proposed by Tang and Wu (1997) as well as by Koukouvinos et al. (2008). The comparison is performed through simulation experiments, and the Type I and Type II error rates are calculated. Additionally, receiver operating characteristic (ROC) curve methodology is applied as an additional statistical tool for performance evaluation.
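Symmetrical uncertainty itself has a compact definition: twice the mutual information between two variables, normalised by the sum of their entropies. A minimal plug-in estimate from paired discrete samples might look as follows; the function names are illustrative and the estimator is the naive empirical one, not necessarily the exact variant the article uses.

```python
import math
from collections import Counter

def entropy(values):
    """Empirical Shannon entropy (bits) of a discrete sample."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), from paired samples."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mi = hx + hy - hxy               # mutual information I(X; Y)
    denom = hx + hy
    return 2.0 * mi / denom if denom > 0 else 0.0
```

SU ranges from 0 (empirically independent) to 1 (each variable fully determines the other), which makes it convenient for ranking candidate factors against a Bernoulli response.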
106.
We demonstrate a multidimensional approach for combining several indicators of well-being, including the traditional money-income indicators. This methodology avoids the difficult and much criticized task of computing imputed incomes for such indicators as net worth and schooling. Inequality in the proposed composite measures is computed using relative inequality indexes that permit simple analysis of both the contribution of each welfare indicator (and its factor components) and within and between components of total inequality when the population is grouped by income levels, age, gender, or any other criteria. The analysis is performed on U.S. data using the Michigan Survey of Income Dynamics.
107.
We propose a semiparametric approach for the analysis of case–control genome-wide association studies. Parametric components are used to model both the conditional distribution of the case status given the covariates and the distribution of genotype counts, whereas the distribution of the covariates is modelled nonparametrically. This yields a direct and joint modelling of the case status, covariates and genotype counts, gives a better understanding of the disease mechanism and results in more reliable conclusions. Side information, such as the disease prevalence, can be conveniently incorporated into the model by an empirical likelihood approach, leading to more efficient estimates and a powerful test for detecting disease-associated SNPs. Profiling is used to eliminate a nuisance nonparametric component, and the resulting profile empirical likelihood estimates are shown to be consistent and asymptotically normal. For the hypothesis test on disease association, we apply the approximate Bayes factor (ABF), which is computationally simple and most desirable in genome-wide association studies, where hundreds of thousands to a million genetic markers are tested. We treat the approximate Bayes factor as a hybrid Bayes factor, which replaces the full data by the maximum likelihood estimates of the parameters of interest in the full model, and derive it under a general setting. The deviation from Hardy–Weinberg equilibrium (HWE) is also taken into account, and the ABF for HWE using cases is shown to provide evidence of association between a disease and a genetic marker. Simulation studies and an application are provided to illustrate the utility of the proposed methodology.
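For intuition about approximate Bayes factors in a GWAS setting, a common closed form is Wakefield's ABF, computed from a SNP's estimated effect and standard error under a normal prior on the effect size. The paper derives its hybrid ABF under a more general setting, so the sketch below is a standard illustration rather than the authors' exact statistic; the function name and the `prior_sd` value are assumptions.

```python
import math

def abf_assoc(beta_hat, se, prior_sd=0.2):
    """Wakefield-style approximate Bayes factor for H1 (association)
    versus H0, from a SNP's estimated log odds ratio and its
    standard error, with a N(0, prior_sd^2) prior on the effect."""
    v = se ** 2          # sampling variance of the estimate
    w = prior_sd ** 2    # prior variance of the effect under H1
    z = beta_hat / se
    r = w / (v + w)      # shrinkage factor
    return math.sqrt(1.0 - r) * math.exp(0.5 * z * z * r)
```

Here the factor is oriented in favour of association, so values above 1 support H1; with `beta_hat = 0` the factor falls below 1, reflecting support for the null. Only the point estimate and standard error are needed, which is what makes the approach cheap enough for hundreds of thousands of markers.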
108.
Hadi Alizadeh Noughabi & Naser Reza Arghami, Journal of Statistical Computation and Simulation, 2013, 83(8): 1556-1569
This paper introduces a general goodness-of-fit test based on the estimated Kullback–Leibler information. The test uses the Vasicek entropy estimate. Two special cases of the test for location–scale and shape families are discussed. The results are used to introduce goodness-of-fit tests for the uniform, Laplace, Weibull and beta distributions. The critical values and powers for some alternatives are obtained by simulation.
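The building blocks of such a test, the Vasicek spacing-based entropy estimate and an estimated Kullback–Leibler distance to a fitted parametric family, can be sketched for the normal case. This is an illustrative reconstruction, not the paper's exact statistic or critical values; the window size `m` and the function names are conventional choices.

```python
import math

def vasicek_entropy(x, m=2):
    """Vasicek spacing-based entropy estimate (assumes no tied values)."""
    xs = sorted(x)
    n = len(xs)
    total = 0.0
    for i in range(n):
        lo = xs[max(i - m, 0)]          # truncate the window at the boundaries
        hi = xs[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

def kl_normality_stat(x, m=2):
    """Estimated KL information between the sample and a fitted normal:
    small values support normality, large values count against it."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n   # MLE variance
    loglik = sum(-0.5 * math.log(2 * math.pi * var)
                 - (v - mu) ** 2 / (2 * var) for v in x) / n
    return -vasicek_entropy(x, m) - loglik
```

In a full test, the statistic would be compared against critical values obtained by simulation, as the abstract describes; the estimated KL information stays small when the fitted family describes the sample well.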
109.
Thomas A. Louis, The American Statistician, 2013, 67(3)
The easily computed, one-sided confidence interval for the binomial parameter provides the basis for an interesting classroom example of scientific thinking and its relationship to confidence intervals. The upper limit can be represented as the sample proportion from a number of “successes” in a future experiment of the same sample size. The upper limit reported by most people corresponds closely to that producing a 95 percent classical confidence interval and has a Bayesian interpretation.
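The exact one-sided upper limit behind this classroom example can be computed directly from the binomial CDF by bisection, using only the standard library. A short sketch, with illustrative function names:

```python
import math

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(x + 1))

def upper_limit(x, n, alpha=0.05):
    """Exact one-sided upper confidence limit for a binomial proportion:
    the p solving P(X <= x; n, p) = alpha, found by bisection."""
    if x == n:
        return 1.0
    lo, hi = x / n, 1.0
    for _ in range(60):                 # the CDF is decreasing in p
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For zero successes in n = 10 trials the 95% upper limit is 1 − 0.05^(1/10) ≈ 0.259, consistent with the familiar "rule of three" approximation 3/n as n grows.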
110.
Arnold Zellner, The American Statistician, 2013, 67(4): 278-280
In this article, statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is shown to be 100% efficient in the sense that its use leads to the output information being exactly equal to the given input information. The analysis also links Bayes's theorem to maximum-entropy considerations.