921.
The main goal of this paper is to investigate which normative requirements, or axioms, lead to exponential and quasi-hyperbolic forms of discounting. Exponential discounting has a well-established axiomatic foundation originally developed by Koopmans (Econometrica 28(2):287–309, 1960, 1972) and Koopmans et al. (Econometrica 32(1/2):82–100, 1964) with subsequent contributions by several other authors, including Bleichrodt et al. (J Math Psychol 52(6):341–347, 2008). The papers by Hayashi (J Econ Theory 112(2):343–352, 2003) and Olea and Strzalecki (Q J Econ 129(3):1449–1499, 2014) axiomatize quasi-hyperbolic discounting. The main contribution of this paper is to provide an alternative foundation for exponential and quasi-hyperbolic discounting, with simple, transparent axioms and relatively straightforward proofs. Using techniques by Fishburn (The foundations of expected utility. Reidel Publishing Co, Dordrecht, 1982) and Harvey (Manag Sci 32(9):1123–1139, 1986), we show that Anscombe and Aumann’s (Ann Math Stat 34(1):199–205, 1963) version of Subjective Expected Utility theory can be readily adapted to axiomatize the aforementioned types of discounting, in both finite and infinite horizon settings.
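For reference, the two discounting forms at issue are usually written in the standard beta-delta notation of the literature (this notation is not taken from the paper itself):

```latex
% Exponential vs. quasi-hyperbolic discounting of a consumption stream (x_0, x_1, ...)
\begin{align*}
  U_{\text{exp}}(x_0, x_1, \dots) &= \sum_{t=0}^{\infty} \delta^{t}\, u(x_t),
      && 0 < \delta < 1, \\
  U_{\text{qh}}(x_0, x_1, \dots)  &= u(x_0) + \beta \sum_{t=1}^{\infty} \delta^{t}\, u(x_t),
      && 0 < \beta \le 1,\ 0 < \delta < 1.
\end{align*}
```

Exponential discounting is the special case beta = 1; beta < 1 adds the extra short-run impatience that characterises quasi-hyperbolic (present-biased) preferences.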
922.
Choice under risk is modelled using a piecewise linear version of rank-dependent utility. This model can be considered a continuous version of NEO-expected utility (Chateauneuf et al., J Econ Theory 137:538–567, 2007). In a framework of objective probabilities, a preference foundation is given, without requiring a rich structure on the outcome set. The key axiom is called complementary additivity.
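As background, rank-dependent utility evaluates a lottery by weighting cumulative probabilities. A sketch of the general form follows (this is the textbook form; the piecewise linear specification is the paper's own contribution and is not reproduced here):

```latex
% Rank-dependent utility of a lottery (x_1, p_1; ...; x_n, p_n)
% with outcomes ordered from best to worst, x_1 >= ... >= x_n
\[
  RDU = \sum_{i=1}^{n}
        \left[ w\!\Bigl(\sum_{j \le i} p_j\Bigr) - w\!\Bigl(\sum_{j < i} p_j\Bigr) \right] u(x_i),
  \qquad w(0) = 0,\; w(1) = 1.
\]
```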
923.
We study behavioral patterns of insurance demand for low-probability large-loss events (catastrophic losses). Individual patterns of belief formation and risk attitude that were suggested in the behavioral decisions literature emerge robustly in the current set of insurance choices. However, social comparison effects are less robust. We do not find any evidence for peer effects (through social-loss aversion or imitation) on insurance take-up. In contrast, we find support for the prediction that people underweight others’ relevant information in their own decision making.
924.
Bernheim and Whinston (Q J Econ 101:1–31, 1986) show that, in a common agency problem without budget constraints, the set of Nash equilibria with truthful strategies (TNE), the set of coalition-proof Nash equilibria (CPNE), and the principal-optimal core of the underlying coalitional game are non-empty and all equivalent in payoff space. We show that, with budget constraints, none of Bernheim and Whinston’s (Q J Econ 101:1–31, 1986) results hold: (i) a CPNE may not exist, (ii) a TNE may not exist even when a CPNE exists, (iii) a TNE may not be a CPNE, and (iv) both TNE and CPNE payoffs are core allocations but are not necessarily principal-optimal. However, when principals have outside options but no budget constraints, (i) and (iii) continue to hold, whereas (ii) and (iv) do not. In particular, a TNE always exists but the core may be empty.
925.
We implement a risky choice experiment based on one-dimensional choice variables and risk neutrality induced via binary lottery incentives. Each participant confronts many parameter constellations with varying optimal payoffs. We assess (sub)optimality, as well as (non)optimal satisficing, by eliciting aspirations in addition to choices. Treatments differ in the probability with which a binary random event, which is relevant for payoffs but not for the optimal choice, is experimentally induced, and in whether participants choose portfolios directly or via satisficing, i.e., by forming aspirations and checking for satisficing before making their choice. By incentivizing aspiration formation, we can test satisficing and, in cases of satisficing, determine whether it is optimal.
926.
In April 2013, all of the major academic publishing houses moved thousands of journal titles to a novel hybrid model, under which authors of accepted papers can choose between an expensive open access (OA) track and the traditional track available only to subscribers. This paper argues that authors might now use their publication strategy as a quality signaling device. The imperfect information game between authors and readers admits several types of perfect Bayesian equilibrium, including a separating equilibrium in which only authors of high-quality papers are driven toward the open access track. The publishing house should choose an open-access publication fee that supports the emergence of the highest-return equilibrium. Journal structures will evolve over time according to the journals’ accessibility and quality profiles.
927.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) by using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which presents stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desirable convergence properties when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that would be difficult for SAA to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.
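To make the population-of-chains idea concrete, here is a minimal illustrative Python sketch of a PISAA-style optimiser. Everything below (function and parameter names, the energy partition, the gain and cooling constants, and the averaged weight update) is an assumption chosen for illustration, not the authors' implementation:

```python
import numpy as np

def pisaa_minimise(energy, x0s, n_iter=5000, grid=None,
                   t_hi=5.0, t_lo=0.01, gamma0=1.0, step=0.5, seed=0):
    """Sketch: a population of SAA chains sharing one set of
    self-adjusting weights over an energy partition (illustrative only)."""
    rng = np.random.default_rng(seed)
    xs = [np.asarray(x, dtype=float) for x in x0s]
    es = np.array([energy(x) for x in xs])
    n_chains = len(xs)
    if grid is None:                                # energy-partition boundaries
        grid = np.linspace(es.min() - 1.0, es.max() + 1.0, 50)
    theta = np.zeros(len(grid) + 1)                 # shared self-adjusting weights
    target = np.full(len(theta), 1.0 / len(theta))  # desired visiting frequencies
    best_i = int(es.argmin())
    best_x, best_e = xs[best_i].copy(), float(es[best_i])

    def region(e):                                  # partition cell of an energy value
        return int(np.searchsorted(grid, e))

    for k in range(1, n_iter + 1):
        temp = t_lo + (t_hi - t_lo) / np.sqrt(k)    # square-root cooling schedule
        gamma = gamma0 / k                          # stochastic-approximation gain
        counts = np.zeros_like(theta)
        for c in range(n_chains):                   # each chain: one Metropolis move
            prop = xs[c] + step * rng.standard_normal(xs[c].shape)
            e_new = float(energy(prop))
            log_acc = (es[c] - e_new) / temp + theta[region(es[c])] - theta[region(e_new)]
            if np.log(rng.uniform()) < log_acc:
                xs[c], es[c] = prop, e_new
            counts[region(es[c])] += 1.0
            if es[c] < best_e:
                best_x, best_e = xs[c].copy(), float(es[c])
        # interaction: all chains feed one shared weight update
        theta += gamma * (counts / n_chains - target)
        theta -= theta.max()                        # keep the weights bounded
    return best_x, best_e


# Example: minimise a rugged multimodal test function with 8 interacting chains.
if __name__ == "__main__":
    rastrigin = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
    starts = [np.random.uniform(-5, 5, size=4) for _ in range(8)]
    x_best, e_best = pisaa_minimise(rastrigin, starts)
    print(x_best, e_best)
```

The interaction enters only through the shared weight vector: every chain's visits update the same weights, which in turn reshape the tempered target that all chains sample from.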
928.
929.
Crime and disease surveillance commonly rely on space-time clustering methods to identify emerging patterns. The goal is to detect spatio-temporal clusters as soon as possible after they occur and to control the rate of false alarms. With this in mind, a spatio-temporal multiple-cluster detection method was developed as an extension of a previous proposal based on a spatial version of the Shiryaev–Roberts statistic. Besides its capability for multiple cluster detection, the method has fewer input parameters than the previous proposal, making it more intuitive for practitioners. To evaluate the new methodology, a simulation study is performed over several scenarios, highlighting many advantages of the proposed method. Finally, we present a case study on a crime data set from Belo Horizonte, Brazil.
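For intuition, the classical purely temporal Shiryaev–Roberts recursion underlying the spatial statistic mentioned above can be sketched as follows. This is a simplified single-series Python illustration under assumed Poisson counts, not the paper's spatio-temporal, multiple-cluster method:

```python
import numpy as np

def shiryaev_roberts_alarm(counts, baseline, rel_risk=2.0, threshold=50.0):
    """Classical Shiryaev-Roberts surveillance for one count series.

    counts    -- observed event counts per period
    baseline  -- expected counts per period under the no-cluster model
    rel_risk  -- assumed relative risk once a cluster has emerged
    threshold -- alarm level that controls the rate of false alarms
    """
    r = 0.0
    for t, (y, mu) in enumerate(zip(counts, baseline)):
        # Poisson likelihood ratio of "cluster present" vs. "no cluster"
        lam = np.exp(-(rel_risk - 1.0) * mu) * rel_risk ** y
        r = (1.0 + r) * lam          # Shiryaev-Roberts recursion
        if r >= threshold:
            return t                 # first period at which an alarm is raised
    return None                      # no alarm over the observed horizon
```

The statistic accumulates evidence over time, so a sustained excess of counts over the baseline eventually pushes r past the threshold and triggers an alarm.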
930.
Residual marked empirical process-based tests are commonly used in regression models. However, they suffer from data sparseness in high-dimensional space when there are many covariates. This paper has three purposes. First, we suggest a partial dimension reduction adaptive-to-model testing procedure that is omnibus against general global alternative models while fully using the dimension reduction structure under the null hypothesis. This is because the procedure automatically adapts to the null and alternative models, and thus largely overcomes the dimensionality problem. Second, to achieve this goal, we propose a ridge-type eigenvalue ratio estimate to automatically determine the number of linear combinations of the covariates under the null and alternative hypotheses. Third, a Monte Carlo approximation to the sampling null distribution is suggested. Unlike existing bootstrap approximation methods, this gives an approximation as close as possible to the sampling null distribution by fully utilising the dimension reduction model structure under the null model. Simulation studies and real data analysis are conducted to illustrate the performance of the new test and to compare it with existing tests.
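As a rough illustration of the second ingredient, a ridge-type eigenvalue-ratio estimate of the number of informative linear combinations might look like the following Python sketch. The candidate matrix, the ridge constant, and the function name are assumptions made for illustration; the paper's exact construction may differ in detail:

```python
import numpy as np

def ridge_eigenvalue_ratio(candidate_matrix, n, ridge=None):
    """Estimate the number of informative directions from a symmetric
    p x p candidate matrix produced by a sufficient dimension reduction
    method (e.g. sliced inverse regression); n is the sample size."""
    eigvals = np.sort(np.linalg.eigvalsh(candidate_matrix))[::-1]  # descending
    if ridge is None:
        ridge = np.log(n) / n          # assumed ridge constant for illustration
    ratios = (eigvals[1:] + ridge) / (eigvals[:-1] + ridge)
    # the ratio drops sharply right after the last informative eigenvalue
    return int(np.argmin(ratios)) + 1
```

The ridge term keeps the ratio stable when trailing eigenvalues are near zero, which is what lets the same rule adapt between the single direction implied by the null model and the larger number implied by the alternative.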