A total of 3,568 records matched the query.
1.
Based on a fixed-proportion production technology and stochastic demand for multiple products, this paper studies a two-stage joint quantity and pricing optimization model for a co-product manufacturer. Solving backward by induction yields the manufacturer's optimal quantity and price decisions and how they vary. The paper also examines the effect of demand variability on the equilibrium when demand follows a uniform distribution. The results show that when the ordering cost is low and one product's demand is highly variable, an increase in the other product's demand variability raises that other product's optimal order quantity and lowers its price. Numerical simulation is then used to analyze how price sensitivity and the output ratio affect the optimal decisions and profit. With one product's price sensitivity given, the larger the other product's price sensitivity, the lower that product's price and the smaller the manufacturer's order quantity. When one product's output ratio is fixed and the other product's output ratio rises, that product's price falls, the manufacturer's order quantity falls, and total profit rises.
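The two-stage decision above can be illustrated numerically. The sketch below is not the paper's model: the demand functions, parameter values, and fixed output ratios are all invented for illustration. It grid-searches the joint quantity and price decision of a two-co-product newsvendor with additive uniform demand shocks.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.uniform(-20.0, 20.0, size=(2, 4000))   # uniform demand shocks for both products

def expected_profit(q, p, alpha=(0.6, 0.4), a=(100.0, 80.0), b=(0.8, 0.6), c=30.0):
    """Simulated expected profit when input q yields alpha[i]*q of product i
    and demand_i = a[i] - b[i]*p[i] + shock (all numbers illustrative)."""
    profit = -c * q
    for i in range(2):
        demand = np.clip(a[i] - b[i] * p[i] + eps[i], 0.0, None)
        profit += p[i] * np.minimum(alpha[i] * q, demand).mean()
    return profit

# Coarse grid search over the joint quantity-price decision.
qs = np.arange(40, 201, 10)
ps = np.arange(40, 121, 10)
best = max(((q, p1, p2) for q in qs for p1 in ps for p2 in ps),
           key=lambda v: expected_profit(v[0], (v[1], v[2])))
print(best, round(expected_profit(best[0], best[1:]), 2))
```

With a finer grid or a gradient-based optimizer, the same simulated objective can be used to reproduce the comparative statics of the backward-induction solution qualitatively.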
2.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is formed using the missing information principle and is utilized for constructing asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are utilized to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. Three optimality criteria are considered to find the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
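Among the Bayesian tools listed, the Metropolis-Hastings algorithm is the most generic. The sketch below is a minimal random-walk sampler on a toy target (a standard normal log-density), not the GHN posterior under progressive censoring; the step size and iteration count are arbitrary choices of mine.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D log-posterior (generic sketch)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy target: standard normal log-density (up to an additive constant).
draws = metropolis_hastings(lambda t: -0.5 * t ** 2, x0=0.0)
print(draws.mean(), draws.std())
```

The sample mean and standard deviation should be close to 0 and 1 respectively; for a real posterior, `log_post` would be the log of the prior times the likelihood, known up to a constant.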
3.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.
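A full expectation-conditional maximization algorithm for random effects regression mixtures is beyond a short sketch, but the E-step / CM-step alternation it relies on can be shown on a deliberately simplified stand-in: EM for a two-component univariate Gaussian mixture. All data and settings below are invented for illustration.

```python
import numpy as np

def em_gauss_mixture(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (simplified stand-in, illustrative only)."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])       # crude but well-separated initialization
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior component responsibilities for each observation.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # CM-steps: maximize each parameter block in turn, given the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
print(em_gauss_mixture(x))  # component means near 0 and 5
```

In the full model the E-step computes responsibilities over trajectories and the CM-steps update the regression mixture parameters and the component-specific error structures.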
4.
严文龙 et al., 《统计研究》 (Statistical Research), 2020, 37(7): 93-103
Amid mounting downward economic pressure and the further opening of capital markets, it is essential to clarify the transaction and regulation mechanisms of the audit market and to improve the market for audit services. Exploiting the natural experiment created by the failure of the 2010 audit-fee regulation policy, this paper embeds a two-tier stochastic frontier model to obtain measures of the pricing surplus captured by each side of the audit transaction, and uses a difference-in-differences model to analyze the interaction between price regulation and transaction pricing. The findings show that the regulation failed not because of regulatory capture, but because the price controls were mismatched with prevailing market efficiency. A price floor raises the auditor's surplus, but it also amplifies pricing risk, increases the misallocation of surplus, and disrupts transaction order; a price ceiling further entrenches low-price competition. Further analysis finds that the auditor's surplus is significantly related to earnings quality, and that the 2014 deregulation of audit pricing increased the auditor's surplus. The study clarifies the transaction mechanism of the audit market, supports future research on the micro-level effects of audit transaction mechanisms and their link to earnings quality, and offers useful guidance for understanding the transaction and regulation patterns of the audit market and for fostering a market of spontaneous, healthy audit transactions in the new era.
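The difference-in-differences logic used in the paper can be sketched on simulated data. Everything below is illustrative, not the audit-market data: a true interaction effect of 0.5 is built into the simulation, and the estimator is the classic four-cell-means contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
# Simulated outcome with a built-in treatment effect of 0.5 on treated-by-post cells.
y = (1.0 + 0.3 * treated + 0.2 * post + 0.5 * treated * post
     + 0.1 * rng.standard_normal(n))

def cell_mean(t, p):
    """Average outcome in the (treatment, period) cell."""
    return y[(treated == t) & (post == p)].mean()

# DID: (treated post - treated pre) - (control post - control pre).
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 3))  # close to the built-in effect of 0.5
```

The same contrast can equivalently be obtained as the interaction coefficient in a regression of the outcome on treatment, period, and their product.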
5.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this value is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
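The MCML construction described above, a likelihood ratio approximated by Monte Carlo averages of importance ratios simulated from the complete-data model at a fixed parameter value, can be sketched on a toy Gaussian latent variable model. All parameter values and the grid search below are my own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy latent model: z ~ N(theta, 1), y | z ~ N(z, 1), so marginally y ~ N(theta, 2).
theta_true, n, m = 1.5, 100, 1000
y = theta_true + np.sqrt(2.0) * rng.standard_normal(n)

psi = 0.0  # fixed parameter value used to simulate the latent variables
# Draw z_ij ~ p_psi(z | y_i) = N((y_i + psi) / 2, 1/2), j = 1..m.
z = (y[:, None] + psi) / 2 + np.sqrt(0.5) * rng.standard_normal((n, m))

def mc_log_lik_ratio(theta):
    """Monte Carlo estimate of log L(theta) - log L(psi), summed over observations."""
    log_w = -0.5 * ((z - theta) ** 2 - (z - psi) ** 2)  # log importance ratios
    return np.log(np.exp(log_w).mean(axis=1)).sum()

grid = np.linspace(0.0, 3.0, 301)
theta_hat = grid[np.argmax([mc_log_lik_ratio(t) for t in grid])]
print(theta_hat, y.mean())  # the MCML maximizer should be close to the MLE (the sample mean)
```

Here the exact MLE is the sample mean of y, so the quality of the MCML maximizer is easy to check; the paper's point is how large m must be, relative to n, for this to work when psi is fixed rather than consistently estimated.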
6.
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using the other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long memory volatility dependencies.
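For reference, the log-periodogram (GPH) regression estimates the long-memory parameter d as the slope of the log periodogram ordinates on -2·log(2·sin(λ/2)) at the lowest Fourier frequencies. A minimal sketch follows; the bandwidth rule and the white-noise test series are my own choices.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) regression estimate of the long-memory parameter d."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** 0.5) if m is None else m          # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    pgram = np.abs(dft) ** 2 / (2 * np.pi * n)     # periodogram ordinates
    regressor = -2 * np.log(2 * np.sin(lam / 2))   # slope on this regressor is d
    return np.polyfit(regressor, np.log(pgram), 1)[0]

# White noise has no long memory (d = 0), so the estimate should be near zero.
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(4096)))
```

In the application described above, x would be a volatility proxy such as squared, log-squared, or absolute returns, and the paper's finding is that the squared-returns choice biases this slope downward under conditional leptokurtosis.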
7.
This paper discusses a parallel algorithm for the real-valued FFT that fully exploits the parallel hardware structure of the C40, and applies it to seismic exploration signal processing.
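The C40-specific parallelization cannot be reproduced here, but the core real-FFT device that such implementations typically exploit, computing the FFT of a length-2N real signal with a single length-N complex FFT, can be sketched and checked against a reference transform:

```python
import numpy as np

def rfft_via_half_length_fft(x):
    """FFT of a length-2N real signal using one length-N complex FFT (textbook device)."""
    x = np.asarray(x, float)
    N = len(x) // 2
    c = x[0::2] + 1j * x[1::2]             # pack even/odd samples into complex words
    C = np.fft.fft(c)
    k = np.arange(N + 1)
    Ck = C[k % N]                          # C[k], with C[N] wrapping to C[0]
    Cr = np.conj(C[(-k) % N])              # conj(C[(N - k) mod N])
    E = 0.5 * (Ck + Cr)                    # DFT of even-indexed samples
    O = -0.5j * (Ck - Cr)                  # DFT of odd-indexed samples
    return E + np.exp(-1j * np.pi * k / N) * O

x = np.random.default_rng(0).standard_normal(16)
print(np.allclose(rfft_via_half_length_fft(x), np.fft.rfft(x)))  # True
```

The split works because packing even samples into the real part and odd samples into the imaginary part lets conjugate symmetry separate the two interleaved DFTs, roughly halving the work for real input.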
8.
Summary.  Generalized linear latent variable models (GLLVMs), as defined by Bartholomew and Knott, enable modelling of relationships between manifest and latent variables. They extend structural equation modelling techniques, which are powerful tools in the social sciences. However, because of the complexity of the log-likelihood function of a GLLVM, an approximation such as numerical integration must be used for inference. This can drastically limit the number of variables in the model and can lead to biased estimators. We propose a new estimator for the parameters of a GLLVM, based on a Laplace approximation to the likelihood function, which can be computed even for models with a large number of variables. The new estimator can be viewed as an M-estimator, leading to readily available asymptotic properties and correct inference. A simulation study shows its excellent finite sample properties, in particular when compared with a well-established approach such as LISREL. A real data example on the measurement of wealth for the computation of multidimensional inequality is analysed to highlight the importance of the methodology.
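The Laplace approximation underlying the proposed estimator replaces an intractable integral with a Gaussian integral around the integrand's mode. A one-dimensional sketch, checked on an integral with a known value (the Gamma function), illustrates the accuracy; the benchmark choice is mine, not the paper's.

```python
import numpy as np
from math import lgamma

def laplace_log_integral(h, hess, z_hat):
    """Laplace approximation to log of the integral of exp(h(z)) dz, expanded at the mode z_hat."""
    return h(z_hat) + 0.5 * np.log(2 * np.pi / abs(hess(z_hat)))

# Known benchmark: Gamma(n+1) = integral of exp(n*log z - z) over (0, inf);
# the integrand is maximized at z = n, where the second derivative of h is -n / z**2.
n = 50
approx = laplace_log_integral(lambda z: n * np.log(z) - z,
                              lambda z: -n / z ** 2, z_hat=float(n))
print(approx, lgamma(n + 1))  # the two values agree to within about 0.002 on the log scale
```

In a GLLVM the same expansion is applied, per observation, to the integral over the latent variables, which is what makes the likelihood computable for many manifest variables.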
9.
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article is available.
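The decomposable poverty measures referred to are typically of the Foster-Greer-Thorbecke (FGT) family; with a stochastic weight such as household size, the sample version is a ratio of weighted sums. A small sketch follows (the incomes, household sizes, and poverty line are invented):

```python
import numpy as np

def fgt(income, weight, z, alpha):
    """Weighted Foster-Greer-Thorbecke poverty measure P_alpha (sketch)."""
    income = np.asarray(income, float)
    w = np.asarray(weight, float)
    poor = income < z
    gap = np.where(poor, (z - income) / z, 0.0)   # normalized gaps, zero if not poor
    return np.sum(w * np.where(poor, gap ** alpha, 0.0)) / np.sum(w)

income = np.array([40.0, 80.0, 120.0, 200.0])
size = np.array([3, 2, 4, 1])        # household size as the weighting variable
z = 100.0                            # poverty line
print(round(fgt(income, size, z, 0), 6))  # headcount ratio: 5 poor persons / 10 -> 0.5
print(round(fgt(income, size, z, 1), 6))  # poverty gap index -> 0.22
```

alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap, and alpha = 2 a squared-gap measure sensitive to the depth of poverty; the paper's contribution is the limiting distribution of such estimators when the weight is itself random and the poverty line depends on the income distribution.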
10.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号