Full-text access type
Paid full text | 5,704 articles |
Free | 175 articles |
Free (domestic) | 36 articles |
Subject classification
Management science | 644 articles |
Labor science | 1 article |
Ethnology | 6 articles |
Demography | 94 articles |
Collected works and series | 99 articles |
Theory and methodology | 283 articles |
General | 774 articles |
Sociology | 288 articles |
Statistics | 3,726 articles |
Publication year
2024 | 3 articles |
2023 | 37 articles |
2022 | 42 articles |
2021 | 57 articles |
2020 | 102 articles |
2019 | 186 articles |
2018 | 232 articles |
2017 | 329 articles |
2016 | 180 articles |
2015 | 172 articles |
2014 | 198 articles |
2013 | 1,218 articles |
2012 | 398 articles |
2011 | 198 articles |
2010 | 200 articles |
2009 | 218 articles |
2008 | 219 articles |
2007 | 226 articles |
2006 | 206 articles |
2005 | 213 articles |
2004 | 165 articles |
2003 | 150 articles |
2002 | 139 articles |
2001 | 134 articles |
2000 | 105 articles |
1999 | 91 articles |
1998 | 73 articles |
1997 | 62 articles |
1996 | 40 articles |
1995 | 34 articles |
1994 | 42 articles |
1993 | 32 articles |
1992 | 36 articles |
1991 | 25 articles |
1990 | 26 articles |
1989 | 24 articles |
1988 | 22 articles |
1987 | 13 articles |
1986 | 8 articles |
1985 | 10 articles |
1984 | 7 articles |
1983 | 15 articles |
1982 | 9 articles |
1981 | 3 articles |
1980 | 4 articles |
1979 | 4 articles |
1978 | 4 articles |
1977 | 3 articles |
1976 | 1 article |
5,915 results found (search time: 843 ms)
71.
This paper analyzes the errors inherent in the modeling process of the traditional FAGM(1,1) model and proposes an improved FAGM(1,1) model based on Simpson's formula. First, a fractional-order FAGM(1,1) model is built from the fractional-order accumulated generating operator and the fractional-order inverse accumulated generating operator. Second, Simpson's integration formula is used to improve the background value of the FAGM(1,1) model, yielding the SFAGM(1,1) model. Further, a genetic algorithm is applied to determine the optimal order of the SFAGM(1,1) model so as to improve its forecasting accuracy. Finally, taking China's per capita GDP as an example, the simulation results of the GM(1,1) model, the Simpson-improved GM(1,1) model (SGM(1,1)), the FAGM(1,1) model, and the SFAGM(1,1) model are compared, and per capita GDP over the 13th Five-Year Plan period is forecast. The results show that SFAGM(1,1) is more accurate than GM(1,1), SGM(1,1), and FAGM(1,1) in forecasting per capita GDP: over the 13th Five-Year Plan period the average annual growth rate of per capita GDP is 10.64%, reaching 83,146.97 yuan by 2020, which is 2.69 times the 2010 level, so the goal of doubling per capita GDP from its 2010 baseline by 2020 will be achievable.
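The baseline GM(1,1) grey model that the abstract improves upon can be sketched as follows. This is a minimal illustration of the classical pipeline only (first-order accumulation, trapezoidal background value, least-squares fit); the paper's refinements, namely fractional-order accumulation, the Simpson-based background value, and genetic-algorithm order selection, are not implemented, and the function name and interface are my own.

```python
import numpy as np

def gm11_fit_predict(x0, n_forecast=0):
    """Fit a classical GM(1,1) grey model to a positive series x0 and
    return fitted values plus n_forecast out-of-sample forecasts.

    Sketch of the baseline model only; the SFAGM(1,1) improvements from
    the abstract (fractional accumulation, Simpson background value,
    GA-selected order) are deliberately omitted."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # first-order accumulated series
    # Conventional background value: trapezoidal mean of consecutive terms.
    z1 = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack([-z1, np.ones_like(z1)])
    y = x0[1:]
    # Least-squares estimates of the development coefficient a and grey input b.
    a, b = np.linalg.lstsq(B, y, rcond=None)[0]
    k = np.arange(len(x0) + n_forecast)
    # Time-response function of the whitened equation dx1/dt + a*x1 = b.
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse accumulation recovers the original-scale series.
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

On a short geometric series the fitted values track the data closely, which is the regime where GM(1,1) is known to work well.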
72.
Robert F. Nau, Journal of Risk and Uncertainty, 1995, 10(1): 71-91
This article explores the extent to which a decision maker's probabilities can be measured separately from his/her utilities by observing his/her acceptance of small monetary gambles. Only a partial separation is achieved: the acceptable gambles are partitioned into a set of belief gambles, which reveals probabilities distorted by marginal utilities for money, and a set of preference gambles, which reveals utilities reciprocally distorted by marginal utilities for money. However, the information in these gambles still enables us to solve the decision maker's problem: his/her utility-maximizing decision is the one that avoids arbitrage (i.e., incoherence or Dutch books).
73.
This paper deals with techniques for obtaining random point samples from spatial databases. We seek random points from a continuous domain (usually ℝ²) which satisfy a spatial predicate that is represented in the database as a collection of polygons. Several applications of spatial sampling (e.g. environmental monitoring, agronomy, and forestry) are described. Sampling problems are characterized in terms of two key parameters: coverage (selectivity) and expected stabbing number (overlap). We discuss two fundamental approaches to sampling with spatial predicates, depending on whether we sample first or evaluate the predicate first. The approaches are described in the context of both quadtrees and R-trees, detailing the sample-first, acceptance/rejection tree, and partial area tree algorithms. A sequential algorithm, the one-pass spatial reservoir algorithm, is also described. The relative performance of the various sampling algorithms is compared and a choice of preferred algorithms is suggested. We conclude with a short discussion of possible extensions.
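The generic acceptance/rejection idea named in the abstract can be illustrated without any spatial index: draw uniform points from a bounding region and keep those that satisfy the polygon predicate. This is a minimal sketch with an assumed interface; the paper's quadtree/R-tree and reservoir variants are not reproduced here.

```python
import random

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                  # edge crosses the scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sample_points(poly, k, bbox, rng=None):
    """Acceptance/rejection sampling: draw uniform points from the bounding
    box (xmin, ymin, xmax, ymax) and accept those inside the polygon."""
    rng = rng or random.Random(0)
    xmin, ymin, xmax, ymax = bbox
    out = []
    while len(out) < k:
        p = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        if point_in_polygon(p, poly):
            out.append(p)
    return out
```

The acceptance rate is the coverage (selectivity) from the abstract: the polygon's area divided by the bounding box's area, which is why low-coverage predicates make naive rejection sampling expensive.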
74.
75.
Inge S. Helland, Scandinavian Journal of Statistics, 1998, 25(1): 3-15
Several authors have contributed to what can now be considered a rather complete theory for analysis of variance in cases with orthogonal factors. By using this theory on an assumed basic reference population, the orthogonality concept gives a natural definition of independence between factors in the population. By looking upon the treated units in designed experiments as a formal sample from a future population about which we want to make inference, a natural parametrization of expectations and variances connected to such experiments arises. This approach seems to throw light upon several controversial questions in the theory of mixed models. Also, it gives a framework for discussing the choice of conditioning in models.
76.
Generalized Leverage and its Applications (cited by 2: 0 self-citations, 2 by others)
The generalized leverage of an estimator is defined in regression models as a measure of the importance of individual observations. We derive a simple but powerful result, developing an explicit expression for leverage in a general M-estimation problem, of which the maximum likelihood problems are special cases. A variety of applications are considered, most notably to the exponential family non-linear models. The relationship between leverage and local influence is also discussed. Numerical examples are given to illustrate our results.
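For ordinary least squares, the simplest special case of the M-estimation setting, leverage reduces to the diagonal of the hat matrix H = X(XᵀX)⁻¹Xᵀ. A brief sketch under that assumption (the function name is mine; the diagonal is computed stably via a QR factorization rather than by forming H):

```python
import numpy as np

def leverage(X):
    """Leverage values h_ii = diag(H) with H = X (X'X)^{-1} X'.

    Sketch of the OLS special case of generalized leverage d(yhat_i)/d(y_i);
    the general M-estimation expression from the paper is not implemented.
    Since H = Q Q' for a reduced QR factorization X = QR, the diagonal is
    just the row-wise squared norms of Q."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q**2, axis=1)
```

The usual sanity checks hold: the leverages sum to the number of parameters, lie in [0, 1], and are largest for observations far from the design's center.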
77.
A Multivariate Model for Repeated Failure Time Measurements (cited by 1: 1 self-citation, 0 by others)
Martin Crowder, Scandinavian Journal of Statistics, 1998, 25(1): 53-67
A parametric multivariate failure time distribution is derived from a frailty-type model with a particular frailty distribution. It covers as special cases certain distributions which have been used for multivariate survival data in recent years. Some properties of the distribution are derived: its marginal and conditional distributions lie within the parametric family, and association between the component variates can be positive or, to a limited extent, negative. The simple closed form of the survivor function is useful for right-censored data, as occur commonly in survival analysis, and for calculating uniform residuals. Also featured is the distribution of ratios of paired failure times. The model is applied to data from the literature.
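The frailty-type construction can be illustrated generically: a shared positive frailty multiplies each component's hazard, which induces positive association between the paired failure times. The sketch below uses a shared gamma frailty with exponential baselines purely as an assumed example; it is not Crowder's specific parametric family, which admits negative association as well.

```python
import random

def sample_frailty_pair(shape, rate1, rate2, rng):
    """Draw one pair of failure times from a shared gamma-frailty model.

    Illustrative assumption: Z ~ Gamma(shape, scale=1/shape) so E[Z] = 1,
    and conditional on Z each time is exponential with hazard Z * rate_j.
    The common Z makes the two times positively dependent."""
    z = rng.gammavariate(shape, 1.0 / shape)
    t1 = rng.expovariate(z * rate1)
    t2 = rng.expovariate(z * rate2)
    return t1, t2
```

With a large frailty shape the frailty variance shrinks and the two times become nearly independent; with a moderate shape the empirical correlation of simulated pairs is clearly positive.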
78.
J. E. Kelsall & P. J. Diggle, Journal of the Royal Statistical Society, Series C (Applied Statistics), 1998, 47(4): 559-573
A common problem in environmental epidemiology is the estimation and mapping of spatial variation in disease risk. In this paper we analyse data from the Walsall District Health Authority, UK, concerning the spatial distributions of cancer cases compared with controls sampled from the population register. We formulate the risk estimation problem as a nonparametric binary regression problem and consider two different methods of estimation. The first uses a standard kernel method with a cross-validation criterion for choosing the associated bandwidth parameter. The second uses the framework of the generalized additive model (GAM) which has the advantage that it can allow for additional explanatory variables, but is computationally more demanding. For the Walsall data, we obtain similar results using either the kernel method with controls stratified by age and sex to match the age–sex distribution of the cases or the GAM method with random controls but incorporating age and sex as additional explanatory variables. For cancers of the lung or stomach, the analysis shows highly statistically significant spatial variation in risk. For the less common cancers of the pancreas, the spatial variation in risk is not statistically significant.
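The first estimation method, kernel binary regression, amounts to a Nadaraya-Watson weighted average of the 0/1 case labels around each query location. Below is a one-dimensional toy sketch under that reading; the paper works with two-dimensional case-control locations and chooses the bandwidth by cross-validation, and all names here are assumptions.

```python
import math

def kernel_risk(points, labels, query, bandwidth):
    """Nadaraya-Watson kernel estimate of p(case | location).

    points: sample locations (1-D here for simplicity; the paper is 2-D)
    labels: 1 for a case, 0 for a control
    Returns a Gaussian-kernel weighted average of the labels at `query`."""
    num = den = 0.0
    for x, y in zip(points, labels):
        w = math.exp(-0.5 * ((x - query) / bandwidth) ** 2)
        num += w * y
        den += w
    return num / den
```

Near a cluster of cases the estimate approaches 1, near a cluster of controls it approaches 0, and halfway between two symmetric clusters it sits at 0.5, which is the flat "no spatial variation in risk" reference level.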
79.
This article characterizes a family of preference relations over uncertain prospects that (a) are dynamically consistent in the Machina sense and, moreover, for which the updated preferences are also members of this family, and (b) can simultaneously accommodate Ellsberg- and Allais-type paradoxes. Replacing the "mixture independence" axiom by the "mixture symmetry" axiom proposed by Chew, Epstein, and Segal (1991) for decision making under objective risk, and requiring that for some partition of the state space the agent perceives ambiguity and so prefers a randomization over outcomes across that partition (proper uncertainty aversion), preferences can be represented by a (proper) quadratic functional. This representation may be further refined to allow a separation between the quantification of beliefs and risk preferences that is closed under dynamically consistent updating.
80.
Louis Marinoff, Theory and Decision, 1993, 35(1): 55-73
In quantum domains, the measurement (or observation) of one of a pair of complementary variables introduces an unavoidable uncertainty in the value of that variable's complement. Such uncertainties are negligible in Newtonian worlds, where observations can be made without appreciably disturbing the observed system. Hence, one would not expect that an observation of a non-quantum probabilistic outcome could affect a probability distribution over subsequently possible states, in a way that would conflict with classical probability calculations. This paper examines three problems in which observations appear to affect the probabilities and expected utilities of subsequent outcomes, in ways which may appear paradoxical. Deeper analysis of these problems reveals that the anomalies arise, not from paradox, but rather from faulty inferences drawn from the observations themselves. Thus the notion of quantum decision theory is disparaged.