Full-text access type
Paid full text | 6665 |
Free | 39 |
Free (domestic) | 2 |
Subject classification
Management | 1086 |
Ethnology | 54 |
Talent studies | 9 |
Demography | 500 |
Collected works | 47 |
Theory and methodology | 781 |
General | 51 |
Sociology | 3308 |
Statistics | 870 |
Publication year
2024 | 45 |
2023 | 54 |
2022 | 29 |
2021 | 51 |
2020 | 139 |
2019 | 185 |
2018 | 198 |
2017 | 203 |
2016 | 208 |
2015 | 144 |
2014 | 174 |
2013 | 1007 |
2012 | 234 |
2011 | 247 |
2010 | 195 |
2009 | 165 |
2008 | 193 |
2007 | 218 |
2006 | 200 |
2005 | 226 |
2004 | 199 |
2003 | 171 |
2002 | 169 |
2001 | 114 |
2000 | 151 |
1999 | 124 |
1998 | 111 |
1997 | 104 |
1996 | 94 |
1995 | 87 |
1994 | 107 |
1993 | 93 |
1992 | 98 |
1991 | 64 |
1990 | 56 |
1989 | 57 |
1988 | 68 |
1987 | 52 |
1986 | 48 |
1985 | 57 |
1984 | 68 |
1983 | 55 |
1982 | 58 |
1981 | 50 |
1980 | 51 |
1979 | 46 |
1978 | 32 |
1977 | 31 |
1976 | 46 |
1974 | 35 |
Sort order: 6706 results found; search took 15 ms
51.
The Lomax (Pareto II) distribution has found wide application in a variety of fields. We analyze the second-order bias of the maximum likelihood estimators of its parameters for finite sample sizes, and show that this bias is positive. We derive an analytic bias correction which reduces the percentage bias of these estimators by one or two orders of magnitude, while simultaneously reducing relative mean squared error. Our simulations show that this performance is very similar to that of a parametric bootstrap correction based on a linear bias function. Three examples with actual data illustrate the application of our bias correction.
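The abstract does not reproduce the analytic correction, but the parametric bootstrap correction it is benchmarked against can be sketched generically. A minimal sketch, assuming scipy's `lomax` parameterization (shape `c`, scale) with location fixed at 0; the true parameters, sample size, and number of replicates are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative true Lomax (Pareto II) parameters: shape alpha, scale lam.
alpha, lam = 3.0, 2.0
n = 80
data = stats.lomax.rvs(alpha, scale=lam, size=n, random_state=rng)

def mle(sample):
    # Numerical MLE with the location fixed at 0 (standard Lomax form).
    a_hat, _, s_hat = stats.lomax.fit(sample, floc=0)
    return np.array([a_hat, s_hat])

theta_hat = mle(data)

# Parametric bootstrap bias correction:
#   bias ~= mean(bootstrap estimates) - theta_hat
#   corrected estimate = theta_hat - bias = 2*theta_hat - mean(bootstrap)
B = 200
boot = np.array([
    mle(stats.lomax.rvs(theta_hat[0], scale=theta_hat[1], size=n,
                        random_state=rng))
    for _ in range(B)
])
theta_bc = 2 * theta_hat - boot.mean(axis=0)
print(theta_hat, theta_bc)
```

Note that the Lomax MLE can be numerically unstable in small samples, which is part of what motivates bias correction in the first place.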
52.
Experiments in which very few units are measured many times sometimes present particular difficulties. Interest often centers on simple location shifts between two treatment groups, but appropriate modeling of the error distribution can be challenging. For example, normality may be difficult to verify, or a single transformation stabilizing variance or improving normality for all units and all measurements may not exist. We propose an analysis of two-sample repeated measures data based on the permutation distribution of units. This provides a distribution-free alternative to standard analyses. The analysis includes testing, estimation and confidence intervals. By assuming a certain structure in the location shift model, the dimension of the problem is reduced by analyzing linear combinations of the marginal statistics. Recently proposed algorithms for computing two-sample permutation distributions require only a few seconds for experiments having as many as 100 units and any number of repeated measures. The test has high asymptotic efficiency and good power with respect to tests based on the normal distribution. Since the computational burden is minimal, approximation of the permutation distribution is unnecessary.
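A minimal sketch of this kind of unit-level permutation analysis, reducing each unit to its mean (one admissible linear combination of the marginal statistics) and enumerating the exact permutation distribution over relabelings of units; the group sizes, effect size, and statistic are illustrative assumptions, not the paper's exact setup:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# 5 units per group, 8 repeated measures per unit; group B shifted by delta.
T, delta = 8, 1.5
group_a = rng.normal(0.0, 1.0, size=(5, T))
group_b = rng.normal(delta, 1.0, size=(5, T))

# Reduce each unit to its mean across repeated measures.
units = np.concatenate([group_a.mean(axis=1), group_b.mean(axis=1)])
n_a = 5
observed = units[n_a:].mean() - units[:n_a].mean()

# Exact permutation distribution over all C(10, 5) = 252 relabelings of units.
idx = set(range(len(units)))
perm_stats = []
for a_idx in itertools.combinations(sorted(idx), n_a):
    b_idx = sorted(idx - set(a_idx))
    perm_stats.append(units[b_idx].mean() - units[list(a_idx)].mean())
perm_stats = np.array(perm_stats)

# Two-sided p-value (the observed labeling is one of the 252).
p_value = np.mean(np.abs(perm_stats) >= abs(observed) - 1e-12)
print(observed, p_value)
```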
53.
Iliyan Georgiev, David I. Harvey, Stephen J. Leybourne, A. M. Robert Taylor. Journal of Business & Economic Statistics, 2013, 31(3): 528-541
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
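The fixed regressor wild bootstrap idea can be illustrated with a simplified KPSS-type stationarity statistic on regression residuals. This is a generic sketch, not the authors' exact test: the data-generating process, the statistic, and the number of bootstrap replicates are all assumptions. The key mechanics are that the regressor matrix is held fixed while the dependent variable is rebuilt from sign-flipped residuals:

```python
import numpy as np

rng = np.random.default_rng(2)
n, B = 200, 499

# Strongly persistent putative predictor (AR(1), rho = 0.98), held fixed below.
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.98 * x[t - 1] + rng.normal()

# Variable of interest: predictability comes only from lagged x (null model).
y = np.empty(n)
y[0] = rng.normal()
y[1:] = 0.05 * x[:-1] + rng.normal(size=n - 1)

def kpss_stat(resid):
    # KPSS-type statistic: large values signal nonstationary residuals.
    s = np.cumsum(resid - resid.mean())
    return (s @ s) / (len(resid) ** 2 * resid.var())

X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
u = y[1:] - X @ beta
stat = kpss_stat(u)

# Fixed regressor wild bootstrap: keep X fixed, rebuild y* from residuals
# multiplied by Rademacher draws, and recompute the statistic each time.
boot = np.empty(B)
for b in range(B):
    y_star = u * rng.choice([-1.0, 1.0], size=n - 1)
    u_star = y_star - X @ np.linalg.lstsq(X, y_star, rcond=None)[0]
    boot[b] = kpss_stat(u_star)

p_value = np.mean(boot >= stat)
print(stat, p_value)
```

The Rademacher multipliers preserve heteroscedasticity in the residuals, which is why a wild (rather than i.i.d.) bootstrap is used.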
54.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature in the context of powerful nonparametric statistical methods for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) can usually provide an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and the Student's t-test have an asymptotically similar structure. Thus, we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the Chen (1995) t-test modification to the EL ratio test. We show that the Chen approach leads to a location change of the observed data, whereas the classical Bartlett method is known to be a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study. A real-data example illustrates the proposed approach in practice.
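The classical EL ratio test for a mean, with the Wilks chi-square calibration discussed above, can be sketched as follows. This is the uncorrected baseline procedure (neither the Chen nor the Bartlett correction is applied), and the data and hypothesized means are illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_stat(x, mu):
    # Empirical likelihood ratio statistic for H0: E[X] = mu.
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    # Solve sum z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam,
    # on the bracket where all weights 1 + lam * z_i stay positive.
    lo = -1.0 / z.max() + 1e-8
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=60)  # skewed data, true mean 2

stat = el_ratio_stat(x, mu=2.0)       # hypothesis at the true mean
p_value = chi2.sf(stat, df=1)         # Wilks: asymptotically chi-square(1)
stat_far = el_ratio_stat(x, mu=4.0)   # hypothesis far from the true mean
print(stat, p_value, stat_far)
```

The finite-sample Type I error of this chi-square calibration degrades for skewed data such as these, which is exactly the gap the Chen and Bartlett corrections aim to close.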
55.
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.
56.
Göran Kauermann, Christian Schellhase, David Ruppert. Scandinavian Journal of Statistics, 2013, 40(4): 685-705
The paper introduces a new method for flexible spline fitting for copula density estimation. Spline coefficients are penalized to achieve a smooth fit. To weaken the curse of dimensionality, instead of a full tensor spline basis, a reduced tensor product based on so-called sparse grids (Notes Numer. Fluid Mech. Multidiscip. Des., 31, 1991, 241-251) is used. To achieve uniform margins of the copula density, linear constraints are placed on the spline coefficients, and quadratic programming is used to fit the model. Simulations and practical examples accompany the presentation.
57.
Scan statistics are used in spatial statistics and image analysis to detect regions of unusual or anomalous activity. A scan statistic is a maximum (or minimum) of a local statistic—one computed on a local region of the data. This is sometimes called 'moving window analysis' in the engineering literature. The idea is to 'slide' a window around the image (or map or whatever spatial structure the data have), compute a statistic within each window, and look for outliers—anomalously high (or low) statistics. We discuss extending this idea to graphs, in which case the local region is defined in terms of the connectivity of the graph—the neighborhoods of vertices. WIREs Comput Stat 2012 doi: 10.1002/wics.1217 This article is categorized under:
- Data: Types and Structure > Graph and Network Data
- Data: Types and Structure > Social Networks
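The graph version of the sliding window can be sketched directly: the "window" at each vertex is its closed neighborhood, and the scan statistic is the maximum local statistic over all vertices. The random graph, vertex values, and choice of local statistic (a neighborhood mean) below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random graph on n vertices with one value attached to each vertex.
n = 40
adj = {v: set() for v in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.1:
            adj[i].add(j)
            adj[j].add(i)

values = rng.normal(0.0, 1.0, size=n)
hot = [0] + sorted(adj[0])
values[hot] += 3.0  # plant anomalous activity on vertex 0's closed neighborhood

def local_stat(v):
    # Local statistic: mean value over the closed neighborhood of v.
    region = [v] + sorted(adj[v])
    return np.mean(values[region])

# Scan statistic: the maximum local statistic over all vertices.
scan = max(range(n), key=local_stat)
scan_value = local_stat(scan)
print(scan, scan_value)
```

In practice the maximum would be compared against its null distribution (e.g., by Monte Carlo under a no-anomaly model) rather than inspected directly.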
58.
Jian Zou, Alan F. Karr, David Banks, Matthew J. Heaton, Gauri Datta, James Lynch, Francisco Vera. Statistical Analysis and Data Mining, 2012, 5(3): 194-204
Early and accurate detection of outbreaks is one of the most important objectives of syndromic surveillance systems. We propose a general Bayesian framework for syndromic surveillance systems. The methodology incorporates Gaussian Markov random field (GMRF) and spatio-temporal conditional autoregressive (CAR) modeling. By contrast, most previous approaches have been based on only spatial or time series models. The model has appealing probabilistic representations as well as attractive statistical properties. Based on extensive simulation studies, the model is capable of capturing outbreaks rapidly, while still limiting false positives.
59.
Mehmet Sahinoglu, Luis Cueva-Parra, David Ang. Wiley Interdisciplinary Reviews: Computational Statistics, 2012, 4(3): 227-248
Risk analysis, comprising risk assessment and risk management stages, is one of the most popular and challenging topics of our times, because the security, privacy, availability, and usability that together determine the trustworthiness of cybersystems and cyber information are at stake. The precautionary need derives from the existence of defenders versus adversaries, in an everlasting Darwinian scenario dating back to the earliest human history of warriors fighting for their survival. Fast forwarding to today's information warfare, whether in networks, healthcare, or national security, the dire situation necessitates more than a hand calculator to optimize (maximize gains or minimize losses) risk under prevailing scarce economic resources. This article reviews previous work on this specialized topic of game-theoretic computing, its methods and applications, toward the purpose of quantitative risk assessment and cost-optimal management in many diverse disciplines, including the entire range of informatics-related topics. Additionally, this review considers certain game-theoretic topics in historical depth, and those that are computationally resourceful, such as von Neumann's two-way zero-sum pure equilibrium and optimal mixed strategy solutions versus Nash equilibria with pure and mixed strategies. Computational examples are provided to highlight the significance of game-theoretic solutions used in risk assessment and management, particularly in reference to cybersystems and information security. WIREs Comput Stat 2012, 4:227-248. doi: 10.1002/wics.1205 This article is categorized under:
- Algorithms and Computational Methods > Linear Programming
- Algorithms and Computational Methods > Networks and Security
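The von Neumann optimal mixed strategy for a two-player zero-sum game, one of the computational tools the review discusses, reduces to a linear program: maximize the game value v subject to the row player's mixture guaranteeing at least v against every column strategy. A sketch using rock-paper-scissors (whose value is 0 with a uniform optimal strategy); the game matrix is an illustrative choice:

```python
import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)
m, n = A.shape

# Variables (p_1..p_m, v): maximize v s.t. (A^T p)_j >= v for every column j,
# sum(p) = 1, p >= 0. linprog minimizes, so the objective is -v.
c = np.zeros(m + 1)
c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (A^T p)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p_star, game_value = res.x[:m], res.x[-1]
print(p_star, game_value)
```

In a security setting, rows would be defender controls, columns attacker actions, and the payoffs losses or gains, with the same LP yielding the defender's optimal randomized policy.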
60.
Carey E. Priebe, Jeffrey L. Solka, David J. Marchette, Avory C. Bryant. Statistical Analysis and Data Mining, 2012, 5(3): 178-186
'The identification of potential breakthroughs before they happen' is a vague data analysis problem, and 'the scientific literature' is a massive, complex dataset. Hence QHS for MTS might seem to be prototypical of the data miner's lament: 'Here's some data we have… can you find something interesting?' Nonetheless, the problem is real and important, and we develop an innovative statistical approach to it—not a final etched-in-stone approach, but perhaps the first complete quantitative methodology explicitly addressing QHS for MTS. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 5: 178-186, 2012