Access: paid full text (860 articles), free (21 articles)
By subject: Management (132), Demography (14), Collected works (7), Theory and methodology (28), General (30), Sociology (39), Statistics (631)
By year: 2023 (7), 2022 (1), 2021 (1), 2020 (16), 2019 (19), 2018 (26), 2017 (58), 2016 (30), 2015 (15), 2014 (42), 2013 (273), 2012 (59), 2011 (27), 2010 (20), 2009 (37), 2008 (37), 2007 (28), 2006 (20), 2005 (15), 2004 (22), 2003 (12), 2002 (15), 2001 (10), 2000 (10), 1999 (10), 1998 (14), 1997 (6), 1996 (3), 1995 (2), 1994 (4), 1993 (2), 1992 (5), 1991 (5), 1990 (1), 1989 (1), 1988 (2), 1987 (1), 1986 (2), 1985 (3), 1984 (3), 1983 (4), 1982 (2), 1981 (3), 1980 (3), 1978 (3), 1977 (1), 1976 (1)
881 query results in total.
1.
Based on a fixed-proportion production technology and stochastic demand for multiple products, this paper studies a two-stage joint quantity and pricing optimization model for a co-product manufacturer. Solving by backward induction yields the manufacturer's optimal quantity and price decisions and how they vary. The paper also examines, for uniformly distributed demand, how demand volatility affects the equilibrium. The results show that when the ordering cost is low and one product's demand is highly volatile, an increase in the other product's demand volatility raises that product's optimal order quantity while lowering its price. Numerical simulation is then used to analyse how price sensitivity and the output proportions affect the optimal decisions and profit: holding one product's price sensitivity fixed, the higher the other product's price sensitivity, the lower that product's price and the smaller the manufacturer's order quantity; holding one product's output proportion fixed, an increase in the other product's output proportion lowers that product's price, reduces the manufacturer's order quantity, and raises total profit.
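The two-stage structure above (choose a production quantity, then prices, under fixed output proportions and random demand) can be illustrated with a brute-force Monte Carlo sketch like the one below. This is not the paper's model or solution method; the linear-demand form and every parameter value (output proportions, demand intercepts, price sensitivities, noise half-widths, unit cost) are illustrative assumptions, and a crude grid search stands in for the backward-induction solution.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.6, 0.4])   # fixed output proportions (assumed)
a = np.array([100.0, 80.0])    # demand intercepts (assumed)
b = np.array([1.5, 1.0])       # price sensitivities (assumed)
u = np.array([20.0, 10.0])     # half-widths of uniform demand noise (assumed)
c = 10.0                       # unit production cost (assumed)
n_sim = 2000                   # Monte Carlo replications per candidate decision

def expected_profit(Q, p):
    """Monte Carlo estimate of expected profit for input quantity Q and price vector p."""
    eps = rng.uniform(-u, u, size=(n_sim, 2))
    demand = np.maximum(a - b * p + eps, 0.0)
    sales = np.minimum(demand, alpha * Q)   # fixed proportions cap each product's supply
    return np.mean(sales @ p) - c * Q

# Crude grid search over the joint decision (the paper instead solves the
# two-stage problem analytically by backward induction).
candidates = ((Q, p1, p2)
              for Q in np.linspace(60, 240, 13)
              for p1 in np.linspace(20, 60, 13)
              for p2 in np.linspace(20, 60, 13))
best = max(candidates, key=lambda d: expected_profit(d[0], np.array(d[1:])))
print("Q* ~ %.0f, p1* ~ %.1f, p2* ~ %.1f" % best)
```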
2.
Yan Wenlong et al., Statistical Research (《统计研究》), 2020, 37(7): 93-103
Under the new conditions of mounting downward economic pressure and further opening of the capital market, it is especially necessary to clarify the trading-and-regulation mechanism of the audit market and to improve the market for audit services. Exploiting the natural experiment provided by the failure of the 2010 audit-fee regulation policy, this paper embeds a two-tier stochastic frontier model to obtain surplus measures for both sides of the audit pricing transaction, and uses a difference-in-differences model to analyse how price regulation interacts with transaction pricing. The study finds that the regulation failed not because of regulatory capture, but because the price controls were mismatched with current market efficiency. Although a price floor can raise auditor surplus, it also amplifies transaction-pricing risk, increases the misallocation of surplus, and disrupts the order of trading; a price ceiling further entrenches low-price competition. Further analysis shows that auditor surplus is significantly related to earnings quality, and that the 2014 policy lifting the price controls increased auditor surplus. By clarifying the transaction mechanism of the audit market, the study lays groundwork for future research on the micro-level effects of audit trading mechanisms and their link to earnings quality, and offers useful guidance for understanding the trading-and-regulation dynamics of the audit market in the new era and for fostering an audit market with spontaneously sound transactions.
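The difference-in-differences component of such a design can be sketched on synthetic data as below. This is only a generic DID specification, not the authors' model or data; the variable names (log_fee, treated, post, size) and the 0.08 effect built into the simulated outcome are hypothetical, and the two-tier stochastic frontier step is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
treated = rng.integers(0, 2, n)   # hypothetical: engagements subject to the fee rule
post = rng.integers(0, 2, n)      # hypothetical: observation after the policy change
size = rng.normal(10, 1, n)       # hypothetical client-size control
log_fee = (5 + 0.3 * size + 0.10 * treated + 0.05 * post
           + 0.08 * treated * post + rng.normal(0, 0.2, n))  # synthetic outcome, true DID effect = 0.08

df = pd.DataFrame({"log_fee": log_fee, "treated": treated, "post": post, "size": size})
did = smf.ols("log_fee ~ treated * post + size", data=df).fit(cov_type="HC1")
print("DID estimate (treated:post):", round(did.params["treated:post"], 3))
```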
3.
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long-memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long-memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long-memory volatility dependencies.
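The estimator in question, the log-periodogram (GPH) regression, regresses the log periodogram at low frequencies on -2·log(2·sin(λ/2)) and reads the slope off as the long-memory parameter d. The sketch below applies a generic textbook version of it to squared, absolute, and log-squared proxies of a placeholder fat-tailed return series; the bandwidth choice and the simulated returns are assumptions, and this is not the author's exact implementation.

```python
import numpy as np

def gph_estimate(x, bandwidth_exponent=0.5):
    """GPH estimate of d: regress log periodogram ordinates on -2*log(2*sin(freq/2))."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    m = int(n ** bandwidth_exponent)            # number of low frequencies used (assumed rule)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    X = np.column_stack([np.ones(m), regressor])
    coef, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return coef[1]                               # slope = estimate of d

# Compare estimates across volatility proxies of a hypothetical return series r.
rng = np.random.default_rng(2)
r = rng.standard_t(df=5, size=5000) * 0.01       # fat-tailed placeholder returns
for name, proxy in [("squared", r ** 2), ("absolute", np.abs(r)),
                    ("log-squared", np.log(r ** 2 + 1e-12))]:
    print(f"{name:12s} d_hat = {gph_estimate(proxy):+.3f}")
```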
4.
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article is available.
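The decomposable measures referred to above include the Foster–Greer–Thorbecke (FGT) family; a minimal weighted version, with household size acting as the weight, is sketched below. The incomes, household sizes, and relative poverty line are hypothetical, and the limiting distributions derived in the paper are not reproduced.

```python
import numpy as np

def fgt(income, weight, poverty_line, alpha):
    """Weighted FGT(alpha): alpha=0 headcount, alpha=1 poverty gap, alpha=2 squared gap."""
    income = np.asarray(income, dtype=float)
    weight = np.asarray(weight, dtype=float)
    poor = income < poverty_line
    gap = np.where(poor, (poverty_line - income) / poverty_line, 0.0)
    return np.sum(weight * poor * gap ** alpha) / np.sum(weight)

rng = np.random.default_rng(3)
income = rng.lognormal(mean=10, sigma=0.5, size=1000)   # hypothetical household incomes
hh_size = rng.integers(1, 6, size=1000)                 # household size as stochastic weight
z = 0.6 * np.quantile(income, 0.5)                      # e.g. a relative line: 60% of median
for a in (0, 1, 2):
    print(f"FGT({a}) = {fgt(income, hh_size, z, a):.4f}")
```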
5.
Summary. A stochastic discrete-time version of the susceptible–infected–recovered model for infectious diseases is developed. Disease is transmitted within and between communities when infected and susceptible individuals interact. Markov chain Monte Carlo methods are used to make inference about these unobserved populations and the unknown parameters of interest. The algorithm is designed specifically for modelling time series of reported measles cases, although it can be adapted for other infectious diseases with permanent immunity. The application to observed measles incidence series motivates extensions to incorporate age structure as well as spatial epidemic coupling between communities.
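A discrete-time stochastic SIR process of the kind described can be sketched as a chain-binomial simulation, as below. The population size and the transmission and recovery rates are illustrative assumptions, and the MCMC fitting to reported measles counts is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, beta, gamma = 100_000, 0.9, 0.5          # population, transmission, recovery rates (assumed)
S, I, R = N - 10, 10, 0                      # start with 10 infected individuals
history = []
for t in range(100):
    p_inf = 1 - np.exp(-beta * I / N)        # per-susceptible probability of infection this step
    new_inf = rng.binomial(S, p_inf)         # stochastic new infections
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))   # stochastic recoveries
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append((t, S, I, R))
print("peak number infected:", max(h[2] for h in history))
```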
6.
For two-parameter exponential populations with the same scale parameter (known or unknown), comparisons are made between the location parameters. This is done by constructing confidence intervals, which can then be used for selection procedures. Comparisons are made with a control, and with the (unknown) "best" or "worst" population. Emphasis is laid on finding approximations to the confidence coefficient so that calculations are simple and tables are not necessary. (Since we consider unequal sample sizes, tables for exact values would need to be extensive.)
7.
We discuss the issue of using benchmark doses for quantifying (excess) risk associated with exposure to environmental hazards. The paradigm of low-dose risk estimation in dose-response modeling is used as the primary application scenario. Emphasis is placed on making simultaneous inferences on benchmark doses when data are in the form of proportions, although the concepts translate easily to other forms of outcome data.
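A minimal sketch of benchmark-dose estimation from quantal (proportion) data: fit a two-parameter logistic dose-response curve by maximum likelihood and invert it at a 10% extra-risk benchmark response. The dose groups and response counts below are hypothetical, and the simultaneous-inference adjustments the abstract emphasizes are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # hypothetical dose groups
n = np.array([50, 50, 50, 50, 50])           # subjects per group (hypothetical)
y = np.array([2, 4, 8, 18, 35])              # responders per group (hypothetical)

def neg_loglik(theta):
    """Binomial negative log-likelihood for a logistic dose-response curve."""
    a, b = theta
    p = np.clip(expit(a + b * dose), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[-2.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
p0 = expit(a_hat)                            # background risk at dose 0
bmr = 0.10                                   # benchmark response: 10% extra risk
p_bmd = p0 + bmr * (1 - p0)                  # extra-risk definition of the target risk
bmd = (np.log(p_bmd / (1 - p_bmd)) - a_hat) / b_hat
print(f"estimated BMD at 10% extra risk: {bmd:.3f}")
```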
8.
The authors examine the effect of premarital cohabitation on the division of household labor in 22 countries. First, women do more routine housework than men in all countries. Second, married couples that cohabited before marriage have a more equal division of housework. Third, national cohabitation rates have equalizing effects on couples regardless of their own cohabitation experience. However, the influence of cohabitation rates is only observed in countries with higher levels of overall gender equality. The authors conclude that the trend toward increasing cohabitation may be part of a broader social trend toward a more egalitarian division of housework.
9.
Econometric Reviews, 2008, 27(1): 268-297
Nonlinear functions of multivariate financial time series can exhibit long memory and fractional cointegration. However, tools for analysing these phenomena have principally been justified under assumptions that are invalid in this setting. Determination of asymptotic theory under more plausible assumptions can be complicated and lengthy. We discuss these issues and present a Monte Carlo study, showing that asymptotic theory should not necessarily be expected to provide a good approximation to finite-sample behavior.
10.
The elimination or knockout format is one of the most common designs for pairing competitors in tournaments and leagues. In each round of a knockout tournament, the losers are eliminated while the winners advance to the next round. Typically, the goal of such a design is to identify the overall best player. Using a common probability model for expressing relative player strengths, we develop an adaptive approach to pairing players each round in which the probability that the best player advances to the next round is maximized. We evaluate our method using simulated game outcomes under several data-generating mechanisms, and compare it to random pairings, to the standard knockout format, and to two variants of the standard format.
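A small simulation along these lines: under a Bradley–Terry model of relative strength, compare how often the best of eight players wins a knockout tournament under random pairings versus a simple strongest-vs-weakest pairing each round. The latter is only a stand-in for the adaptive procedure described above, and the strength vector is an illustrative assumption (strengths are treated as known).

```python
import numpy as np

rng = np.random.default_rng(5)

def win_prob(s_i, s_j):
    """Bradley-Terry probability that the player with strength s_i beats s_j."""
    return s_i / (s_i + s_j)

def play_round(players, strengths, pairing):
    order = pairing(players, strengths)
    winners = []
    for a, b in zip(order[0::2], order[1::2]):
        winners.append(a if rng.random() < win_prob(strengths[a], strengths[b]) else b)
    return winners

def random_pairing(players, strengths):
    return rng.permutation(players).tolist()

def strong_vs_weak(players, strengths):
    # Stand-in for adaptive pairing: strongest remaining plays weakest remaining.
    ranked = sorted(players, key=lambda p: -strengths[p])
    order = []
    while ranked:
        order.append(ranked.pop(0))
        order.append(ranked.pop(-1))
    return order

def p_best_wins(pairing, strengths, n_rep=10_000):
    best = int(np.argmax(strengths))
    wins = 0
    for _ in range(n_rep):
        players = list(range(len(strengths)))
        while len(players) > 1:
            players = play_round(players, strengths, pairing)
        wins += players[0] == best
    return wins / n_rep

strengths = np.array([8.0, 6.0, 5.0, 4.0, 3.0, 2.5, 2.0, 1.5])   # assumed known strengths
for name, pairing in [("random", random_pairing), ("strong-vs-weak", strong_vs_weak)]:
    print(f"{name:15s} P(best player wins) = {p_best_wins(pairing, strengths):.3f}")
```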