991.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
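To fix ideas, here is a minimal sketch of the generic joint-model structure described above, in our own notation (an illustration of the general framework, not the authors' exact specification): a linear mixed-effects model for the subject-specific biomarker trajectory, linked to a Cox proportional hazards model through the current value of that trajectory.

```latex
% Illustrative joint-model skeleton (our notation, not the paper's specification)
% Observed biomarker = subject-specific trajectory + measurement error:
W_i(t_{ij}) = X_i(t_{ij}) + e_{ij}, \qquad
X_i(t) = \beta_0 + \beta_1 t + b_{0i} + b_{1i} t, \qquad
(b_{0i}, b_{1i})^{\top} \sim N(0, \Sigma_b), \quad e_{ij} \sim N(0, \sigma_e^{2});
% Instantaneous hazard driven by the current (error-free) biomarker value:
\lambda_i(t) = \lambda_0(t) \exp\{\gamma\, X_i(t) + \eta^{\top} Z_i\}.
```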
992.
Multivariate control charts are used to monitor stochastic processes for changes and unusual observations. Hotelling's T² statistic is calculated for each new observation and an out-of-control signal is issued if it goes beyond the control limits. However, this classical approach becomes unreliable as the number of variables p approaches the number of observations n, and impossible when p exceeds n. In this paper, we devise an improvement to the monitoring procedure in high-dimensional settings. We regularise the covariance matrix to estimate the baseline parameter and incorporate a leave-one-out re-sampling approach to estimate the empirical distribution of future observations. An extensive simulation study demonstrates that the new method outperforms the classical Hotelling T² approach in power, and maintains appropriate false positive rates. We demonstrate the utility of the method using a set of quality control samples collected to monitor a gas chromatography–mass spectrometry apparatus over a period of 67 days.
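The following Python sketch illustrates the general recipe, assuming a simple ridge-style shrinkage of the sample covariance and a leave-one-out empirical control limit; the function names and the particular shrinkage target are our assumptions, not the estimator proposed in the paper.

```python
import numpy as np

def hotelling_t2(x, mean, cov_inv):
    """Hotelling's T-squared statistic for a single p-dimensional observation."""
    d = x - mean
    return float(d @ cov_inv @ d)

def regularised_baseline(X, lam=0.1):
    """Estimate the baseline mean and inverse covariance, shrinking the sample
    covariance towards a scaled identity (an illustrative regularisation)."""
    mean = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    p = X.shape[1]
    S_reg = (1 - lam) * S + lam * (np.trace(S) / p) * np.eye(p)
    return mean, np.linalg.inv(S_reg)

def loo_control_limit(X, alpha=0.01, lam=0.1):
    """Leave-one-out T-squared values approximate the in-control distribution;
    their (1 - alpha) quantile serves as the empirical control limit."""
    t2 = np.empty(len(X))
    for i in range(len(X)):
        mean, cov_inv = regularised_baseline(np.delete(X, i, axis=0), lam)
        t2[i] = hotelling_t2(X[i], mean, cov_inv)
    return np.quantile(t2, 1 - alpha)

# Usage: flag a new observation when its statistic exceeds the empirical limit.
rng = np.random.default_rng(0)
X_baseline = rng.normal(size=(50, 40))            # n = 50 samples, p = 40 variables
limit = loo_control_limit(X_baseline)
mean, cov_inv = regularised_baseline(X_baseline)
print(hotelling_t2(rng.normal(size=40), mean, cov_inv) > limit)
```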
993.
This paper is about satisficing behaviour. Rather tautologically, this is when decision-makers are satisfied with achieving some objective, rather than in obtaining the best outcome. The term was coined by Simon (Q J Econ 69:99–118, 1955), and has stimulated many discussions and theories. Prominent amongst these theories are models of incomplete preferences, models of behaviour under ambiguity, theories of rational inattention, and search theories. Most of these, however, seem to lack an answer to at least one of two key questions: when should the decision-maker (DM) satisfice; and how should the DM satisfice. In a sense, search models answer the latter question (in that the theory tells the DM when to stop searching), but not the former; moreover, usually the question as to whether any search at all is justified is left to a footnote. A recent paper by Manski (Theory Decis. doi: 10.1007/s11238-017-9592-1, 2017) fills the gaps in the literature and answers the questions: when and how to satisfice? He achieves this by setting the decision problem in an ambiguous situation (so that probabilities do not exist, and many preference functionals can therefore not be applied) and by using the Minimax Regret criterion as the preference functional. The results are simple and intuitive. This paper reports on an experimental test of his theory. The results show that some of his propositions (those relating to the ‘how’) appear to be empirically valid while others (those relating to the ‘when’) are less so.
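For readers unfamiliar with the criterion, here is a minimal sketch of minimax regret over a finite payoff table (a textbook illustration of the criterion itself, not of Manski's satisficing model or of the experimental design):

```python
import numpy as np

def minimax_regret_choice(payoffs):
    """payoffs[a, s]: payoff of action a in state s (no probabilities over states).
    The regret of (a, s) is the shortfall from the best action in state s;
    the criterion selects the action whose worst-case regret is smallest."""
    regret = payoffs.max(axis=0, keepdims=True) - payoffs
    worst_regret = regret.max(axis=1)
    return int(worst_regret.argmin()), worst_regret

# Three actions, four ambiguous states.
payoffs = np.array([[8, 2, 5, 6],
                    [6, 6, 6, 6],
                    [9, 1, 4, 7]])
best, worst = minimax_regret_choice(payoffs)
print(best, worst)   # action 1 (the "safe" middle option) minimises maximum regret
```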
994.
In this study, we analyze choice in the presence of some conflict that affects the decision time (response time), a subject that has been documented in the literature. We axiomatize a multiattribute decision time (MDT) representation, which is a dynamic extension of the classic multiattribute expected utility theory that allows potentially incomplete preferences. Under this framework, one alternative is preferred to another in a certain period if and only if the weighted sum of the attribute-dependent expected utility induced by the former alternative is larger than that induced by the latter for all attribute weights in a closed and convex set. MDT uniquely determines the decision time as the earliest period at which the ranking between alternatives becomes decisive. The comparative statics result indicates that the decision time provides useful information to locate indifference curves in a specific setting. MDT also explains various empirical findings in economics and other relevant fields.
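In symbols, the dominance condition described above can be sketched as follows (the notation is ours): with attribute weights w ranging over a closed convex set W, and u_{k,t} denoting the attribute-k utility at period t,

```latex
f \succ_t g
\;\iff\;
\sum_{k} w_k\, \mathbb{E}\!\left[u_{k,t}(f)\right]
\;>\;
\sum_{k} w_k\, \mathbb{E}\!\left[u_{k,t}(g)\right]
\quad \text{for all } w \in W,
\qquad
\tau(f,g) \;=\; \min\bigl\{\, t : f \succ_t g \ \text{ or } \ g \succ_t f \,\bigr\},
```

where the decision time tau(f, g) is the earliest period at which the ranking becomes decisive.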
995.
A jury and two valid options are given. Each agent of the jury picks exactly one of these options. The option with the most votes will be chosen by the jury. In the N-couple model of Althöfer and Thiele (Theory and Decision 81:1–15, 2016), the jury consisted of 2N agents. These agents form N independent couples, with dependencies within the couples. The authors assumed that the agents who form a couple have the same competence level. In this note, we relax this assumption by allowing different competence levels within the couples. We show that the theoretical results of Althöfer and Thiele remain valid under this relaxation.
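The kind of question at stake can be illustrated with a small Monte Carlo sketch: the probability that the majority of the 2N votes picks the better option when the two members of a couple have different competence levels. The within-couple dependence used below (the second member copies the first with some probability) is purely an illustrative assumption and is not the dependence structure of Althöfer and Thiele.

```python
import numpy as np

def majority_correct_prob(p1, p2, copy_prob=0.5, n_sim=100_000, seed=0):
    """Monte Carlo estimate of the probability that the jury majority is correct.
    p1[i], p2[i]: competence levels (probability of voting for the better option)
    of the two members of couple i -- allowed to differ within a couple.
    copy_prob: probability that member 2 simply copies member 1's vote
    (an illustrative dependence, not the model's actual coupling)."""
    rng = np.random.default_rng(seed)
    p1, p2 = np.asarray(p1), np.asarray(p2)
    n = len(p1)
    wins = 0
    for _ in range(n_sim):
        v1 = rng.random(n) < p1                    # first members' votes
        v2_own = rng.random(n) < p2                # second members' independent votes
        copy = rng.random(n) < copy_prob
        v2 = np.where(copy, v1, v2_own)
        wins += (v1.sum() + v2.sum()) > n          # strict majority of the 2N votes
    return wins / n_sim

# Three couples with unequal competence levels within each couple.
print(majority_correct_prob([0.7, 0.6, 0.8], [0.55, 0.65, 0.6]))
```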
996.
This paper has a twofold purpose. The first is to clarify and bring out the isomorphic character of two theories developed in quite different fields: threshold logic on one side and simple games on the other. One of the main purposes in both theories is to determine when a simple game is representable as a weighted game, which allows a very compact and easily comprehensible representation. Deep results on this problem were obtained in threshold logic in the sixties and seventies. In the past two decades, however, game theory has taken the lead and obtained new results on the problem. The second and main goal of this paper is to provide some new results on this problem and propose several open questions and conjectures for future research. The results we obtain depend on two significant parameters of the game: the number of types of equivalent players and the number of types of shift-minimal winning coalitions.
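To make the representability question concrete, here is a small feasibility check based on the standard linear-programming characterisation (a generic device, not the techniques or results of the paper): a simple game given by its winning coalitions is weighted if and only if some nonnegative weights and a quota separate winning from losing coalitions.

```python
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

def is_weighted(n_players, winning):
    """Does the simple game with the given winning coalitions admit a weighted
    representation [q; w_1, ..., w_n]?  Feasibility of:
      sum of weights >= q      for every winning coalition,
      sum of weights <= q - 1  for every losing coalition (the gap fixes the scale)."""
    winning = {frozenset(c) for c in winning}
    players = range(n_players)
    coalitions = chain.from_iterable(combinations(players, k) for k in range(n_players + 1))
    A_ub, b_ub = [], []
    for coal in map(frozenset, coalitions):
        row = [1.0 if i in coal else 0.0 for i in players]               # coefficients of w
        if coal in winning:
            A_ub.append([-x for x in row] + [1.0]); b_ub.append(0.0)     # q - w(S) <= 0
        else:
            A_ub.append(row + [-1.0]); b_ub.append(-1.0)                 # w(S) - q <= -1
    res = linprog(np.zeros(n_players + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n_players + [(None, None)])
    return res.success

# [3; 2, 1, 1]: player 0 together with any other player wins.
print(is_weighted(3, [(0, 1), (0, 2), (0, 1, 2)]))   # True
# Classic non-weighted game: {0,1} and {2,3} win, yet {0,2} and {1,3} lose.
print(is_weighted(4, [(0, 1), (2, 3), (0, 1, 2), (0, 1, 3), (0, 2, 3),
                      (1, 2, 3), (0, 1, 2, 3)]))     # False
```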
997.
Asymmetric Choquet random walks are defined, in the form of dynamically consistent random walks allowing for asymmetric conditional capacities. By revisiting Kast and Lapied (Dynamically consistent Choquet random walk and real investments. Document de Travail n. 2010-33, GREQAM, HAL id: halhs-00533826, 2010b) and Kast et al. (Econ Model 38:495–503, 2014), we show that some findings regarding the effects of ambiguity aversion are preserved in the more general framework, which is of interest in several applications to policy making, risk management, corporate decisions, real option valuation of investment/disinvestment projects, etc. The effect of ambiguity on the higher moments is investigated as well, since they have an interpretation in terms of the decision-maker's psychological attitude towards ambiguity. Finally, some financial applications are provided as an illustration.
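A bare-bones numerical illustration of the basic ingredient, the one-step Choquet expectation with asymmetric capacities on an up/down move, rolled back through a recombining binomial tree; this is only a sketch of the flavour of such models and does not reproduce the authors' dynamically consistent construction.

```python
def choquet_step(v_up, v_down, nu_up, nu_down):
    """One-step Choquet expectation of a payoff equal to v_up in the 'up' state
    and v_down in the 'down' state, under possibly sub-additive, asymmetric
    capacities nu_up = nu({up}) and nu_down = nu({down})."""
    if v_up >= v_down:
        return v_down + nu_up * (v_up - v_down)
    return v_up + nu_down * (v_down - v_up)

def choquet_rollback(terminal_values, nu_up, nu_down):
    """Backward induction on a recombining binomial tree, applying the one-step
    Choquet expectation at every node (illustrative sketch only).
    terminal_values are ordered from the all-up node down to the all-down node."""
    values = list(terminal_values)
    while len(values) > 1:
        values = [choquet_step(values[i], values[i + 1], nu_up, nu_down)
                  for i in range(len(values) - 1)]
    return values[0]

# Ambiguity with asymmetry: nu_up + nu_down < 1 and nu_up != nu_down.
print(choquet_rollback([4.0, 2.0, 1.0, 0.0], nu_up=0.35, nu_down=0.45))
```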
998.
We investigate resolute voting rules that always rank two alternatives strictly and avoid social indecision. Resolute majority rules differ from the standard majority rule in that whenever both alternatives win the same number of votes, a tie-breaking function is used to determine the outcome. We provide axiomatic characterizations of resolute majority rules and of resolute majority rules with a quorum. The resoluteness axiom is used in all these results. The other axioms are weaker than those considered in the characterization of the majority rule by May (Econometrica 20:680–684, 1952). In particular, instead of May's positive responsiveness, we consider a much weaker monotonicity axiom.
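As a concrete illustration (our toy example, not the paper's formalism), a resolute majority rule applies simple majority when the vote is not tied and a fixed tie-breaking function otherwise; a quorum variant additionally declares a default alternative when too few ballots are cast.

```python
def resolute_majority(votes, tie_break=lambda: "x", quorum=0):
    """votes: iterable of 'x' or 'y' ballots.
    Plain majority when the counts differ; the tie-breaking function decides
    when they coincide; with a quorum, a default alternative ('x' here, purely
    for illustration) is chosen when fewer than `quorum` ballots are cast.
    The rule is resolute: it never declares a tie."""
    votes = list(votes)
    if len(votes) < quorum:
        return "x"
    x, y = votes.count("x"), votes.count("y")
    if x != y:
        return "x" if x > y else "y"
    return tie_break()

print(resolute_majority(["x", "y", "x"]))                    # 'x' by majority
print(resolute_majority(["x", "y"], tie_break=lambda: "y"))  # tie broken towards 'y'
```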
999.
The main goal of this paper is to investigate which normative requirements, or axioms, lead to exponential and quasi-hyperbolic forms of discounting. Exponential discounting has a well-established axiomatic foundation originally developed by Koopmans (Econometrica 28(2):287–309, 1960, 1972) and Koopmans et al. (Econometrica 32(1/2):82–100, 1964) with subsequent contributions by several other authors, including Bleichrodt et al. (J Math Psychol 52(6):341–347, 2008). The papers by Hayashi (J Econ Theory 112(2):343–352, 2003) and Olea and Strzalecki (Q J Econ 129(3):1449–1499, 2014) axiomatize quasi-hyperbolic discounting. The main contribution of this paper is to provide an alternative foundation for exponential and quasi-hyperbolic discounting, with simple, transparent axioms and relatively straightforward proofs. Using techniques by Fishburn (The foundations of expected utility. Reidel Publishing Co, Dordrecht, 1982) and Harvey (Manag Sci 32(9):1123–1139, 1986), we show that Anscombe and Aumann's (Ann Math Stat 34(1):199–205, 1963) version of Subjective Expected Utility theory can be readily adapted to axiomatize the aforementioned types of discounting, in both finite and infinite horizon settings.
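For reference, the two discounting families under discussion take the following standard forms (restated here for convenience, with delta the per-period discount factor and beta the present-bias parameter):

```latex
% Exponential discounting
U(x_0, x_1, x_2, \dots) \;=\; \sum_{t \ge 0} \delta^{t}\, u(x_t), \qquad 0 < \delta < 1;
% Quasi-hyperbolic (beta-delta) discounting
U(x_0, x_1, x_2, \dots) \;=\; u(x_0) + \beta \sum_{t \ge 1} \delta^{t}\, u(x_t), \qquad 0 < \beta \le 1 .
```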
1000.
Choice under risk is modelled using a piecewise linear version of rank-dependent utility. This model can be considered a continuous version of NEO-expected utility (Chateauneuf et al., J Econ Theory 137:538–567, 2007). In a framework of objective probabilities, a preference foundation is given, without requiring a rich structure on the outcome set. The key axiom is called complementary additivity.
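For orientation, the discrete rank-dependent utility form being specialised here, in standard notation (the particular piecewise linear weighting function and its link to NEO-expected utility are developed in the paper itself): with outcomes ranked x_1 >= x_2 >= ... >= x_n and objective probabilities p_1, ..., p_n,

```latex
\mathrm{RDU}(x, p) \;=\; \sum_{i=1}^{n}
\Bigl[\, w\Bigl(\textstyle\sum_{j \le i} p_j\Bigr) - w\Bigl(\textstyle\sum_{j < i} p_j\Bigr) \Bigr]\, u(x_i),
\qquad w(0) = 0,\; w(1) = 1,\; w \text{ increasing},
```

with the probability weighting function w here taken to be piecewise linear.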