21.
Psycholinguists hold that many characteristics of the two parties to a conversation inevitably shape the nature of the conversational process, including the interlocutors' age, gender, social status, ethnic background, degree of friendship, and closeness of relationship. Of Mice and Men, by the celebrated American writer John Steinbeck, is a deeply moving work that has long been cherished by readers, depicting the sincere friendship, mutual dependence, and shared dream of two laborers. The dialogue between the two characters in Of Mice and Men is distinctive: their sincere friendship, mutual dependence, and shared dream are the characteristics that most strongly shape the way they communicate. By analyzing their backgrounds, common topics, and common dream, we can conclude that the two characters' manner of conversation accords with the essential characteristics of conversation described by psycholinguistics.
22.
蒲松, 夏嫦. 《中国管理科学》 (Chinese Journal of Management Science), 2021, 29(5): 166-172
Urban medical waste is growing steadily, and the quantity to be collected depends on many factors and is hard to predict accurately, so a medical-waste recovery network designed under the assumption of deterministic demand cannot match actual demand. This paper studies the joint optimization of facility location, allocation, and transportation planning in a medical-waste recovery network under discrete random parameters, and formulates a two-stage stochastic programming model that minimizes location and transportation costs subject to facility and vehicle capacity constraints. Exploiting the model structure, a Benders decomposition algorithm is designed, together with a set of acceleration techniques that improve its efficiency. Finally, a case study based on the medical-waste recovery network of a Chinese city is used to test the feasibility and effectiveness of the model and solution strategy. The results show that, compared with deterministic planning, the stochastic programming solution reduces total cost, and that Benders decomposition combined with the acceleration techniques outperforms both CPLEX and plain Benders decomposition.
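To make the two-stage structure concrete, here is a minimal scenario-expanded sketch in Python with PuLP, solved directly by CBC rather than by the paper's accelerated Benders decomposition; all sites, demands, costs, and capacities are hypothetical placeholders, not data from the paper.

```python
import pulp

sites = ["s1", "s2", "s3"]                 # candidate recovery facilities
points = ["p1", "p2"]                      # waste-generation points
scenarios = {"low": 0.5, "high": 0.5}      # scenario -> probability

open_cost = {"s1": 100, "s2": 120, "s3": 90}
ship_cost = {(i, j): 1.0 for i in sites for j in points}
capacity = {"s1": 60, "s2": 80, "s3": 50}
demand = {("low", "p1"): 20, ("low", "p2"): 30,
          ("high", "p1"): 45, ("high", "p2"): 55}

prob = pulp.LpProblem("medical_waste_recovery", pulp.LpMinimize)

# First stage: open facilities before demand is realised.
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
# Second stage: shipment quantities, one copy per scenario.
x = pulp.LpVariable.dicts(
    "ship", [(s, i, j) for s in scenarios for i in sites for j in points],
    lowBound=0)

# Objective: location cost plus expected transportation cost.
prob += (pulp.lpSum(open_cost[i] * y[i] for i in sites)
         + pulp.lpSum(p * ship_cost[i, j] * x[s, i, j]
                      for s, p in scenarios.items()
                      for i in sites for j in points))

for s in scenarios:
    for j in points:   # all waste must be collected in every scenario
        prob += pulp.lpSum(x[s, i, j] for i in sites) >= demand[s, j]
    for i in sites:    # capacity applies only at opened facilities
        prob += pulp.lpSum(x[s, i, j] for j in points) <= capacity[i] * y[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: y[i].value() for i in sites}, pulp.value(prob.objective))
```

A Benders scheme would keep the binary `open` variables in a master problem and cut against the scenario subproblems; the monolithic form above is just the smallest runnable illustration of the model class.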
23.
Let $X_1, X_2, \ldots$ be real-valued random variables forming a strictly stationary sequence and satisfying the basic requirement of being either pairwise positively quadrant dependent or pairwise negatively quadrant dependent. Let $F$ be the marginal distribution function of the $X_i$'s, which is estimated, by means of the segment $X_1, \ldots, X_n$, both by the empirical distribution function $F_n$ and by a smooth kernel-type estimate $\hat{F}_n$. These estimates are compared on the basis of their mean squared errors (MSE). The main results of this paper are the following. Under certain regularity conditions, the optimal bandwidth (in the MSE sense) is determined and is found to be the same as in the independent identically distributed case. It is also shown that $n\,\mathrm{MSE}(F_n(t))$ and $n\,\mathrm{MSE}(\hat{F}_n(t))$ tend to the same constant as $n \to \infty$, so that one cannot discriminate between the two estimates on the basis of the MSE. Next, if $i(n) = \min\{k \in \{1, 2, \ldots\} : \mathrm{MSE}(F_k(t)) \le \mathrm{MSE}(\hat{F}_n(t))\}$, then it is proved that $i(n)/n$ tends to 1 as $n \to \infty$. Thus, once again, one cannot choose one estimate over the other in terms of asymptotic relative efficiency. If, however, the squared bias of $\hat{F}_n(t)$ tends to 0 sufficiently fast, or equivalently the bandwidth $h_n$ satisfies $n h_n^3 \to 0$ as $n \to \infty$, it is shown that, for a suitable choice of the kernel, $(i(n) - n)/(n h_n)$ tends to a positive number as $n \to \infty$. It follows that the deficiency of $F_n(t)$ with respect to $\hat{F}_n(t)$, namely $i(n) - n$, is substantial and actually tends to $\infty$ as $n \to \infty$. In terms of deficiency, the smooth estimate $\hat{F}_n(t)$ is therefore preferable to the empirical distribution function $F_n(t)$.
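As a quick numerical illustration of the claim that $n\,\mathrm{MSE}$ of the two estimators tends to the same constant, the following sketch (my own construction, using i.i.d. Gaussian data rather than a quadrant-dependent stationary sequence) estimates both quantities by Monte Carlo, with bandwidth $h_n = n^{-0.4}$ so that $n h_n^3 \to 0$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t, n, reps = 0.5, 200, 2000
h = n ** -0.4                    # satisfies n * h**3 -> 0
F_true = norm.cdf(t)             # true marginal CDF at t

mse_emp = mse_ker = 0.0
for _ in range(reps):
    x = rng.standard_normal(n)
    F_emp = np.mean(x <= t)                  # empirical CDF F_n(t)
    F_ker = np.mean(norm.cdf((t - x) / h))   # kernel-smoothed CDF at t
    mse_emp += (F_emp - F_true) ** 2
    mse_ker += (F_ker - F_true) ** 2

print("n*MSE, empirical:", n * mse_emp / reps)
print("n*MSE, kernel:   ", n * mse_ker / reps)   # nearly the same constant
```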
24.
In risk assessment, the moment-independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It measures the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment-independent SA index (also called the delta index) can also be interpreted as a dependence measure between the model output and the input variables, and we introduce another moment-independent SA index (called the extended delta index) based on copulas. Nonparametric methods for estimating the delta and extended delta indices are then proposed. Both methods need only a single set of samples to compute all the indices; thus, they overcome the “curse of dimensionality.” Finally, an analytical test example, a risk assessment model, and the Level E model are employed to compare the delta and extended delta indices and to test the two calculation methods. Results show that the delta and the extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed calculation methods dramatically reduce the computational burden.
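The given-data estimation idea can be sketched as follows: partition the sample on the input, estimate the conditional output density in each partition cell with a kernel density estimator, and average the L1 separation between the conditional and unconditional densities. The binning choices and the toy model below are my own illustration, not the article's exact algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

def delta_index(x, y, n_bins=20, grid_size=512):
    """Rough given-data estimate of the moment-independent delta index."""
    grid = np.linspace(y.min(), y.max(), grid_size)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)                  # unconditional density of Y
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    delta = 0.0
    for m in range(n_bins):
        mask = (x >= edges[m]) & (x <= edges[m + 1])
        if mask.sum() < 5:                       # skip near-empty cells
            continue
        f_cond = gaussian_kde(y[mask])(grid)     # density of Y given the cell
        sep = np.sum(np.abs(f_y - f_cond)) * dy  # L1 separation of densities
        delta += 0.5 * (mask.sum() / len(x)) * sep
    return delta

rng = np.random.default_rng(1)
n = 20_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = x1 + 0.2 * x2                 # x1 should rank far above x2
print(delta_index(x1, y), delta_index(x2, y))
```

Note that the single sample `(x1, x2, y)` is reused for both indices, which is the “one set of samples” property the abstract emphasises.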
25.
Dr. Yellman proposes to define frequency as “a time-rate of events of a specified type over a particular time interval.” We review why no definition of frequency, including this one, can satisfy both of two conditions: (1) the definition should agree with the ordinary meaning of frequency, such as that less frequent events are less likely to occur than more frequent events, over any particular time interval for which the frequencies of both are defined; and (2) the definition should be applicable not only to exponentially distributed times between (or until) events, but also to some nonexponential (e.g., uniformly distributed) times. We make the simple point that no definition can satisfy (1) and (2) by showing that any definition that determines which of any two uniformly distributed times has the higher “frequency” (or that determines that they have the same “frequency,” if neither is higher) must assign a higher frequency number to the distribution with the lower probability of occurrence over some time intervals. Dr. Yellman's proposed phrase, “time-rate of events … over a particular time interval,” is profoundly ambiguous in such cases, as the instantaneous failure rates vary over an infinitely wide range (e.g., from one to infinity), making it unclear which value is denoted by the phrase “time-rate of events.”
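A small numeric check of the uniform case (my own illustration, not from the exchange): for a time-to-event uniform on $[0, a]$, the instantaneous failure rate is $r(t) = 1/(a - t)$, which sweeps from $1/a$ up to infinity, so no single “time-rate over the interval” is singled out:

```python
import numpy as np

# Hazard rate of a Uniform(0, a) time-to-event: r(t) = 1 / (a - t).
for a in (1.0, 2.0):
    t = np.linspace(0.0, 0.9 * a, 4)
    rate = 1.0 / (a - t)              # ranges from 1/a toward infinity
    p_half = min(0.5 / a, 1.0)        # P(event occurs by time 0.5)
    print(f"a={a}: rates at t={np.round(t, 2)} -> {np.round(rate, 2)}; "
          f"P(T <= 0.5) = {p_half}")
```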
26.
In this paper, we characterise a family of bivariate copulas whose sections between the main diagonal and the border of the unit square are polynomial, generalising several families of copulas, including those with quadratic and cubic sections. We also study a measure of association and the tail dependence for this class, illustrating our results with several examples.
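For illustration (the example is mine, not a result of the paper), the simplest classical family with quadratic sections is the Farlie-Gumbel-Morgenstern copula, whose Spearman's rho has the known closed form $\theta/3$:

```python
import numpy as np

def fgm(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula, valid for -1 <= theta <= 1."""
    return u * v + theta * u * v * (1 - u) * (1 - v)

theta = 0.8
u = np.linspace(0.0, 1.0, 5)
print(fgm(u, 0.5, theta))            # a quadratic section at v = 0.5
print("Spearman's rho:", theta / 3)  # closed form for the FGM copula
```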
27.
28.
This article applies the methods of stochastic dynamic programming to a risk management problem in which an agent hedges her derivative position by submitting limit orders. The model is thus the first in the literature on optimal trading with limit orders to handle the problem of hedging options or other derivatives. A hedging strategy is developed in which both the size and the limit price of each order are optimally set.
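A toy backward-induction sketch of the main ingredient (finite-horizon stochastic dynamic programming over limit prices) is shown below; the exponential fill model, the numbers, and the single-share setting are my own simplifications, not the article's model, which also optimizes order size:

```python
import math

T = 5                              # remaining decision epochs
premiums = [0.0, 0.1, 0.2, 0.3]    # candidate limit prices above mid
k = 8.0                            # fill probability decays with premium
penalty = -0.05                    # proceeds if forced to cross the spread

def fill_prob(d):
    # Hypothetical fill model: deeper in the book -> less likely to fill.
    return math.exp(-k * d)

V = penalty                        # value with no epochs left, unfilled
policy = []
for _ in range(T):
    # Bellman step: expected proceeds of posting at premium d this epoch.
    best = max(premiums,
               key=lambda d: fill_prob(d) * d + (1 - fill_prob(d)) * V)
    V = fill_prob(best) * best + (1 - fill_prob(best)) * V
    policy.append(best)

print("optimal premium by epochs remaining (1..T):", policy)
print("expected proceeds:", round(V, 4))
```

As expected, the policy posts more aggressively (lower premium) as the deadline nears and the spread-crossing penalty looms.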
29.
As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches to calculating insurance premiums for rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by an integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure with respect to location-specific risk exposures. To achieve stability and robustness of the program toward floods of various recurrences, the ICRM uses a stochastic optimization procedure that relies on quantile-related risk functions of systemic insolvency involving overpayments and underpayments by the stakeholders. Two alternative ways of calculating insurance premiums are compared: the robust premiums derived with the ICRM and the traditional average-annual-loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures.
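A stylised numerical contrast between the two premium rules (my own construction with hypothetical Pareto losses, not the ICRM itself) can be sketched as follows; the robust rule here simply scales premiums so that pooled income covers a high quantile of pooled losses, one basic quantile-related solvency criterion:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scen, n_loc = 10_000, 50
# Heavy-tailed location-specific flood losses per scenario (hypothetical).
losses = rng.pareto(2.5, size=(n_scen, n_loc)) * 10.0

aal_premium = losses.mean(axis=0)       # traditional average annual loss
pooled = losses.sum(axis=1)             # total loss in each scenario
# Robust rule: scale premiums to cover the 99% quantile of pooled losses.
scale = np.quantile(pooled, 0.99) / aal_premium.sum()
robust_premium = aal_premium * scale

for name, prem in [("AAL", aal_premium), ("robust", robust_premium)]:
    insolvency = np.mean(pooled > prem.sum())
    print(f"{name:>6}: total premium {prem.sum():9.1f}, "
          f"insolvency probability {insolvency:.3f}")
```

The heavier the loss tail, the larger the gap between the average-based premium and the quantile-based one, which is the effect the abstract's comparison turns on.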
30.
Multivariate stochastic volatility models with skew distributions are proposed. Exploiting Cholesky stochastic volatility modeling, univariate stochastic volatility processes with leverage effects and generalized hyperbolic skew t-distributions are embedded in a multivariate analysis with time-varying correlations. Bayesian modeling allows this approach to provide a parsimonious skew structure and to scale easily to high-dimensional problems. The approach is illustrated with analyses of daily stock returns. Empirical results show that the time-varying correlations and the sparse skew structure contribute to improved prediction performance and Value-at-Risk forecasts.
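One building block named in the abstract, a univariate stochastic volatility process with leverage (correlated return and log-volatility shocks), is easy to simulate; the sketch below uses hypothetical parameters and omits the Cholesky-based time-varying correlations and generalized hyperbolic skew t-errors of the full model:

```python
import numpy as np

rng = np.random.default_rng(3)
T, mu, phi, sigma_eta, rho = 1000, -1.0, 0.97, 0.15, -0.5

h = np.empty(T)                        # log-variance process
y = np.empty(T)                        # returns
h[0] = mu
for t in range(T):
    eps = rng.standard_normal()        # return shock
    # Leverage effect: volatility shock correlated with the return shock.
    eta = rho * eps + np.sqrt(1 - rho**2) * rng.standard_normal()
    y[t] = np.exp(h[t] / 2) * eps
    if t + 1 < T:
        h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta

excess_kurt = ((y - y.mean()) ** 4).mean() / y.var() ** 2 - 3
print("excess kurtosis of simulated returns:", round(float(excess_kurt), 2))
```

A negative `rho` reproduces the usual leverage asymmetry: negative returns tend to be followed by higher volatility.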