21.
Neil A. Butler, Roger Mead, Kent M. Eskridge & Steven G. Gilmour 《Journal of the Royal Statistical Society, Series B (Statistical Methodology)》2001, 63(3): 621-632
There has been much recent interest in supersaturated designs and their application in factor screening experiments. Supersaturated designs have mainly been constructed using the E(s²)-optimality criterion originally proposed by Booth and Cox in 1962. However, until now E(s²)-optimal designs have been established with certainty for n experimental runs only when the number of factors m is a multiple of n − 1, and in the adjacent cases m = q(n − 1) + r (|r| ≤ 2, q an integer). A method of constructing E(s²)-optimal designs is presented which allows a reasonably complete solution to be found for various numbers of runs n, including n = 8, 12, 16, 20, 24, 32, 40, 48, and 64.
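For readers unfamiliar with the criterion, the sketch below computes E(s²) for a candidate two-level design: with X the n × m ±1 design matrix and s_ij the off-diagonal elements of X'X, E(s²) = Σ_{i<j} s_ij² / (m(m−1)/2). This illustrates the criterion only, not the authors' construction method; the random design is a placeholder.

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion: mean of the squared off-diagonal
    elements of X'X over all m*(m-1)/2 column pairs."""
    S = X.T @ X                       # m x m information matrix
    m = X.shape[1]
    off = S[np.triu_indices(m, k=1)]  # s_ij for i < j
    return np.sum(off ** 2) / (m * (m - 1) / 2)

# Toy supersaturated design: n = 6 runs, m = 10 two-level factors,
# columns coded +/-1 (random here, purely for illustration).
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(6, 10))
print(e_s2(X))
```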
22.
Steven P. Ellis 《Communications in Statistics - Simulation and Computation》2013, 42(7): 1006-1029
Estimators are often defined as the solutions to data-dependent optimization problems. A common form of objective function (the function to be optimized) arising in statistical estimation is the sum of a convex function V and a quadratic complexity penalty. A standard paradigm for creating kernel-based estimators leads to exactly this kind of optimization problem. This article describes an optimization algorithm designed for unconstrained problems in which the objective function is the sum of a nonnegative convex function and a known quadratic penalty. The algorithm is described and compared with BFGS on some penalized logistic regression and penalized L3/2 regression problems.
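As a concrete instance of this objective class, a ridge-penalized logistic regression pairs a nonnegative convex loss V with a known quadratic penalty. The sketch below minimizes such an objective with SciPy's BFGS; it illustrates the problem form only, not the article's proposed algorithm, and all names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def objective(beta, X, y, lam):
    """Convex, nonnegative V (logistic loss) + quadratic penalty."""
    margins = y * (X @ beta)
    V = np.sum(np.logaddexp(0.0, -margins))  # sum of log(1 + e^{-y x'b})
    return V + lam * beta @ beta             # lam * ||beta||^2

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0])
            + rng.standard_normal(100))

res = minimize(objective, np.zeros(5), args=(X, y, 0.1), method="BFGS")
print(res.x)
```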
23.
Xiaofeng Steven Liu 《Communications in Statistics - Simulation and Computation》2013, 42(6): 1433-1436
This short article presents a unified approach to representing and computing the cumulative distribution functions of the noncentral t, F, and χ² distributions. Unlike existing algorithms, which involve different expansions and/or recurrences, the new approach consistently represents all three noncentral cumulative distribution functions as an integral of the normal cumulative distribution function against the χ² density function.
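To make the representation concrete for the noncentral t: writing T = (Z + δ)/√(X/ν) with Z ~ N(0, 1) independent of X ~ χ²_ν gives P(T ≤ t) = ∫₀^∞ Φ(t√(x/ν) − δ) f_{χ²ν}(x) dx. A minimal numerical sketch of this identity (an illustration, not Liu's algorithm):

```python
import numpy as np
from scipy import integrate, stats

def noncentral_t_cdf(t, df, delta):
    """P(T <= t) as the normal CDF integrated against the chi-square
    density, using T = (Z + delta) / sqrt(X / df)."""
    integrand = lambda x: (stats.norm.cdf(t * np.sqrt(x / df) - delta)
                           * stats.chi2.pdf(x, df))
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return val

# Check against SciPy's built-in noncentral t distribution.
print(noncentral_t_cdf(2.0, df=5, delta=1.0))
print(stats.nct.cdf(2.0, 5, 1.0))   # reference value
```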
24.
Steven N. Braun 《Journal of Business & Economic Statistics》2013, 31(3): 293-304
Two methods of using labor-market data as indicators of contemporaneous gross national product (GNP) are developed. The establishment-survey data are used by inverting a partial-adjustment equation for hours; a second GNP forecast can be extracted from the household survey by using Okun's law. Using preliminary rather than final data adds about 0.2 to 0.4 percentage points to the expected root mean squared errors and changes the weights that the pooling procedure assigns to the two forecasts, shifting more weight to the Okun's-law forecast.
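Okun's law links changes in the unemployment rate to the gap between output growth and its potential, roughly Δu ≈ −β(g − g*), so it can be inverted to turn household-survey unemployment data into a growth indicator. A minimal sketch, with placeholder coefficients rather than Braun's estimates:

```python
def okuns_law_gnp_growth(delta_unemployment, potential_growth=3.0, beta=0.5):
    """Invert a simple Okun's-law relation
    delta_u = -beta * (g - g_star) to back out output growth g.
    The default potential_growth and beta are illustrative only."""
    return potential_growth - delta_unemployment / beta

# A 0.2-point rise in the unemployment rate implies growth below potential:
print(okuns_law_gnp_growth(0.2))   # 3.0 - 0.2/0.5 = 2.6 (percent)
```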
25.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring in 1929.
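For reference, the Laplace approximation used to obtain analytic posterior densities has the generic textbook form below (stated for a smooth h with an interior maximizer; this is the general device, not the paper's exact expansion):

```latex
% Laplace approximation to a k-dimensional integral, \hat\theta = \arg\max h:
\int_{\mathbb{R}^k} e^{\,n h(\theta)}\, g(\theta)\, d\theta
  \approx \left(\frac{2\pi}{n}\right)^{k/2}
          \bigl| -H(\hat\theta) \bigr|^{-1/2}
          g(\hat\theta)\, e^{\,n h(\hat\theta)},
\qquad H(\theta) = \nabla^2 h(\theta).
```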
26.
Steven B. Cohen 《The American Statistician》2013, 67(4): 337-341
The National Medical Care Expenditure Survey, which has a complex survey design, is further complicated by combining two independently drawn national samples of households, one from the Research Triangle Institute (RTI) and one from the National Opinion Research Center (NORC). It is assumed that, since the structures of these national-area samples are similar, they are compatible and allow the derivation of statistically equivalent, unbiased national estimates of relevant health parameters. In this paper, national estimates of relevant demographic and health measures were produced separately for each of the two survey organizations, and statistical equivalence across samples was assessed. In addition, the data-collection-organization effect was estimated and tested within the framework of a multivariate analysis appropriate for complex survey data. The findings consistently revealed a statistically nonsignificant data-collection-organization effect when testing for differences in domain estimates of the health parameters under consideration.
27.
An unbiased estimator of e is used to motivate a simple simulation exercise that requires only observations from the uniform distribution on (0, 1). Antithetic variables are introduced and applied to the simulation problem to give a second unbiased estimator of e with reduced variance.
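One classic construction behind such an exercise (a common choice, though the article's specific estimator is not reproduced here) is N = min{n : U₁ + ⋯ + U_n > 1}, which satisfies E[N] = e; the antithetic companion reuses 1 − U_i. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def count_to_exceed_one(uniforms):
    """N = min{n : U_1 + ... + U_n > 1}; E[N] = e."""
    total = 0.0
    for n, u in enumerate(uniforms, start=1):
        total += u
        if total > 1.0:
            return n
    raise RuntimeError("ran out of uniforms")

def estimate_e(reps=50_000, antithetic=True):
    draws = []
    for _ in range(reps):
        u = rng.uniform(size=64)                # plenty of uniforms per rep
        n1 = count_to_exceed_one(u)
        if antithetic:
            n2 = count_to_exceed_one(1.0 - u)   # antithetic pair
            draws.append(0.5 * (n1 + n2))       # negatively correlated pair
        else:
            draws.append(n1)
    return np.mean(draws), np.var(draws, ddof=1) / reps

print(estimate_e(antithetic=False))
print(estimate_e(antithetic=True))   # same mean, smaller variance
```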
28.
The delete-a-group (DAG) jackknife is sometimes used when estimating the variances of statistics based on a large sample. We investigate heavily poststratified estimators of a population mean and a simple regression coefficient, where both full-sample and domain estimates are of interest. The DAG jackknife employing 30, 60, or 100 replicates is found to be highly unstable, even for large sample sizes, and the empirical degrees of freedom of these DAG jackknives are usually much less than their nominal degrees of freedom. This analysis calls into question whether coverage intervals derived from replication-based variance estimators can be trusted for highly calibrated estimates.
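For context, a minimal sketch of the DAG jackknife variance estimator in its standard form, applied to a sample mean (the paper's heavily poststratified, calibrated setting is more involved):

```python
import numpy as np

def dag_jackknife_var(y, n_groups=30, seed=0):
    """Delete-a-group jackknife variance of the sample mean:
    randomly partition the sample into R groups, recompute the
    estimate with each group removed, then combine the replicates
    via ((R - 1) / R) * sum((theta_r - theta)^2)."""
    rng = np.random.default_rng(seed)
    groups = rng.permutation(len(y)) % n_groups   # balanced random groups
    theta_full = np.mean(y)
    reps = np.array([np.mean(y[groups != r]) for r in range(n_groups)])
    R = n_groups
    return (R - 1) / R * np.sum((reps - theta_full) ** 2)

y = np.random.default_rng(3).standard_normal(5000)
print(dag_jackknife_var(y, n_groups=30))
print(np.var(y, ddof=1) / len(y))   # direct variance of the mean, for comparison
```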
29.
Simplified proofs are given of a standard result that establishes the positive semi-definiteness of the difference of the inverses of two non-singular matrices, and of the extension of this result by Milliken and Akdeniz (1977) to the difference of the Moore–Penrose inverses of two singular matrices.
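The standard result can be spot-checked numerically: for symmetric positive definite matrices with A − B positive semi-definite, B⁻¹ − A⁻¹ is positive semi-definite. A small sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Build symmetric positive definite B, then A = B + (PSD term),
# so that A - B is positive semi-definite by construction.
M = rng.standard_normal((4, 4))
B = M @ M.T + 4 * np.eye(4)
P = rng.standard_normal((4, 2))
A = B + P @ P.T                 # A - B = P P' is PSD

D = np.linalg.inv(B) - np.linalg.inv(A)
print(np.linalg.eigvalsh(D))    # all eigenvalues >= 0 (up to rounding)
```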
30.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error-rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate, and is user-friendly. The method is applied to test whether the reported long-memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
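A first-order version of such a test can be sketched with the Geweke and Porter-Hudak (GPH) log-periodogram regression, for which d̂ is approximately N(d, π²/(24m)); this is standard asymptotics, not the higher-order Lieberman–Phillips refinement the article develops, and the bandwidth choice is illustrative:

```python
import numpy as np

def gph_d_test(y, d0=0.5, m=None):
    """GPH log-periodogram estimate of d and a first-order Wald
    statistic for H0: d = d0 (standard asymptotics only)."""
    n = len(y)
    m = m or int(n ** 0.5)                        # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n     # Fourier frequencies
    I = np.abs(np.fft.fft(y - y.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    x = -np.log(4 * np.sin(lam / 2) ** 2)         # regressor whose slope is d
    xc = x - x.mean()
    d_hat = np.sum(xc * np.log(I)) / np.sum(xc ** 2)
    se = np.pi / np.sqrt(24 * m)                  # asymptotic standard error
    return d_hat, (d_hat - d0) / se

rng = np.random.default_rng(5)
y = rng.standard_normal(2048)   # white noise, so the true d is 0
d_hat, z = gph_d_test(y)
print(d_hat, z)                 # z should reject H0: d = 0.5 from below
```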