Subscription full text: 1,718 articles
Open access: 51 articles
Domestic open access: 5 articles
Management: 138 articles
Demography: 1 article
Collected works: 21 articles
Theory and methodology: 12 articles
General: 182 articles
Sociology: 23 articles
Statistics: 1,397 articles
2024: 1 article
2023: 18 articles
2022: 25 articles
2021: 21 articles
2020: 31 articles
2019: 58 articles
2018: 78 articles
2017: 103 articles
2016: 65 articles
2015: 37 articles
2014: 53 articles
2013: 364 articles
2012: 128 articles
2011: 65 articles
2010: 55 articles
2009: 57 articles
2008: 59 articles
2007: 58 articles
2006: 52 articles
2005: 59 articles
2004: 52 articles
2003: 45 articles
2002: 39 articles
2001: 39 articles
2000: 33 articles
1999: 22 articles
1998: 24 articles
1997: 23 articles
1996: 13 articles
1995: 16 articles
1994: 8 articles
1993: 9 articles
1992: 8 articles
1991: 10 articles
1990: 5 articles
1989: 2 articles
1988: 5 articles
1987: 7 articles
1986: 2 articles
1985: 3 articles
1984: 2 articles
1983: 5 articles
1982: 6 articles
1981: 3 articles
1980: 2 articles
1979: 1 article
1978: 1 article
1977: 2 articles
Sort order: 1,774 results in total (search time: 15 ms)
51.
Under the classical newsvendor model with both supply and demand uncertainty, this paper studies the inventory optimization problem of a risk-averse retailer. Conditional value-at-risk (CVaR) is used to measure inventory performance, and a CVaR-based retailer inventory model is constructed. Building on this, uncertainty in the upstream supplier's delivery capacity and in downstream market demand is described by a set of discrete scenarios with unknown probabilities, yielding a CVaR-based robust inventory optimization model under supply and demand uncertainty. The unknown scenario probabilities are then modeled with an interval uncertainty set, and a robust counterpart based on the max-min criterion is derived. Because jointly considering supply and demand uncertainty makes the model non-convex, standard duality theory is used to transform it into a tractable mathematical program. Finally, numerical experiments analyze how different degrees of risk aversion and uncertainty affect the retailer's inventory decisions and inventory performance. The results show that although supply and demand uncertainty does cause some loss in inventory performance, the loss is small. In particular, the robust inventory policy obtained from the proposed model guarantees better inventory performance in most cases. Moreover, although greater uncertainty and stronger risk aversion affect the retailer's decisions and operational performance, for a given level of risk aversion the robust policy still secures satisfactory inventory performance as uncertainty grows, indicating that the proposed model is robust against supply and demand uncertainty.
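A minimal sketch of the scenario-based CVaR calculation the abstract builds on, using the Rockafellar-Uryasev representation of CVaR over discrete scenarios. The demand scenarios, probabilities, prices, and the brute-force search over order quantities are illustrative assumptions, not taken from the paper (which treats the scenario probabilities as unknown and optimizes a max-min robust counterpart):

```python
def newsvendor_profit(q, demand, price=10.0, cost=6.0, salvage=2.0):
    """Profit of ordering q units when demand is realized (illustrative prices)."""
    sold = min(q, demand)
    return price * sold + salvage * max(q - demand, 0) - cost * q

def cvar_of_loss(losses, probs, alpha):
    """CVaR_alpha of a discrete loss distribution via Rockafellar-Uryasev:
    min_eta eta + (1/(1-alpha)) * E[(L - eta)+]. For discrete scenarios
    the minimizer eta is one of the scenario losses, so a search suffices."""
    best = float("inf")
    for eta in losses:
        val = eta + sum(p * max(l - eta, 0.0)
                        for l, p in zip(losses, probs)) / (1 - alpha)
        best = min(best, val)
    return best

# Hypothetical demand scenarios with fixed probabilities (the paper treats
# these probabilities as unknown within an interval uncertainty set).
demands = [40, 60, 80, 100]
probs = [0.2, 0.3, 0.3, 0.2]
alpha = 0.9

# Risk-averse retailer: choose q minimizing the CVaR of the loss (-profit).
best_q = min(
    range(40, 101),
    key=lambda q: cvar_of_loss([-newsvendor_profit(q, d) for d in demands],
                               probs, alpha),
)
```

A highly risk-averse retailer (large alpha) is driven toward order quantities that perform well in the low-demand scenarios.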
52.
Against the background of intensifying capital-market risk, this paper takes "firm operating performance and stock-market performance improving in step" as the core element of conservative value investing and, grounded in two key elements, interval-valued data representation and accounting-information measurement, develops multi-criteria decision models for conservative stock value investing. For the conservative investment objective, an ordering mechanism satisfying three properties, "robustness", "locality", and "globality", is proposed; a systematic multi-criteria decision method is then built along the main line of key feature selection, feature evaluation, and full-ordering modeling, yielding a research framework for conservative stock value investment decisions.
53.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
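For the normal case with known sampling variance, the mixture prior approach is conjugate component-by-component: the posterior is again a mixture, with each component's weight rescaled by how well it predicted the observed mean. A minimal sketch (the prior weights, means, and variances below are illustrative, not the paper's):

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_posterior(xbar, n, sigma2, components):
    """Posterior for a normal mean under a mixture prior.
    components: list of (weight, prior_mean, prior_var).
    Returns list of (posterior_weight, posterior_mean, posterior_var)."""
    lik_var = sigma2 / n  # variance of the sample mean
    out = []
    for w, m, v in components:
        # marginal (prior predictive) density of xbar under this component
        marg = normal_pdf(xbar, m, v + lik_var)
        post_var = 1.0 / (1.0 / v + 1.0 / lik_var)   # conjugate normal update
        post_mean = post_var * (m / v + xbar / lik_var)
        out.append((w * marg, post_mean, post_var))
    total = sum(w for w, _, _ in out)
    return [(w / total, m, v) for w, m, v in out]

# Informative component (e.g. from historical placebo data) plus a diffuse one.
prior = [(0.8, 0.0, 0.5), (0.2, 0.0, 100.0)]

# Data consistent with the prior: the informative component keeps most weight.
consistent = mixture_posterior(xbar=0.1, n=20, sigma2=4.0, components=prior)
# Data in conflict with the prior: weight shifts to the diffuse component,
# widening the posterior credible interval, as the abstract describes.
conflict = mixture_posterior(xbar=5.0, n=20, sigma2=4.0, components=prior)
```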
54.
55.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
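The core linear-increments idea can be sketched in a few lines: estimate the mean increment between adjacent visits from subjects observed at both, then chain those increments from a baseline mean. This sketch assumes the expected increment does not depend on the current outcome (one of the two assumptions the abstract shows cannot be combined with measurement error); the data layout is hypothetical:

```python
def li_estimate_means(data):
    """Linear-increments sketch for repeated outcomes with missing values.
    data: list of per-subject outcome lists, None marking a missing visit.
    The mean at each visit is the baseline mean plus the chained mean
    increments, each estimated from subjects observed at both adjacent
    visits. Assumes the expected increment is free of the current outcome."""
    T = len(data[0])
    inc = []
    for t in range(1, T):
        pairs = [row[t] - row[t - 1] for row in data
                 if row[t] is not None and row[t - 1] is not None]
        inc.append(sum(pairs) / len(pairs))
    base = [row[0] for row in data if row[0] is not None]
    means = [sum(base) / len(base)]
    for d in inc:
        means.append(means[-1] + d)
    return means

# Three subjects, three visits, monotone dropout in two of them.
visits = [[0, 1, 2], [2, 3, None], [4, None, None]]
estimated = li_estimate_means(visits)
```

Because every subject here gains one unit per visit, the chained estimate recovers the complete-data visit means despite the dropout.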
56.
We consider confidence intervals for the stress–strength reliability Pr(X < Y) in the two-parameter exponential distribution. We derive the Bayesian highest posterior density interval using non-informative prior distributions and compare its performance with intervals based on the generalized pivotal quantity, in terms of coverage probability and expected length. Our simulation study shows that the Bayesian interval performs better according to these criteria, especially when the sample sizes are very small. An example is given.
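The quantity being interval-estimated, Pr(X < Y), is easy to approximate by Monte Carlo once parameters are fixed, which is useful for checking any interval procedure against a known truth. A sketch with illustrative parameters (the paper's Bayesian HPD and generalized-pivot constructions are not reproduced here):

```python
import random

def pr_x_less_y(mu_x, lam_x, mu_y, lam_y, n=100_000, seed=0):
    """Monte Carlo estimate of the stress-strength reliability Pr(X < Y)
    when X and Y follow two-parameter exponential distributions with
    location mu and rate lam (density lam * exp(-lam * (t - mu)), t > mu)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = mu_x + rng.expovariate(lam_x)
        y = mu_y + rng.expovariate(lam_y)
        hits += x < y
    return hits / n

# With equal locations the closed form is lam_x / (lam_x + lam_y),
# giving 2/3 for rates 2 and 1; the estimate should land close to it.
estimate = pr_x_less_y(0.0, 2.0, 0.0, 1.0)
```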
57.
In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under the two sampling biases with an unspecified distribution of truncation variables, we propose a nonparametric estimator of the joint survival function of two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage estimation procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.
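The IPW principle the abstract invokes is to reweight each observed duration by the inverse of its probability of entering the sample, so that over-represented durations are down-weighted. In the paper those probabilities come from the double-truncation mechanism; in this sketch they are simply supplied as known values, which is an illustrative simplification:

```python
def ipw_survival(times, incl_probs, t):
    """Inverse-probability-weighted survival estimate S(t).
    times: observed durations; incl_probs: each duration's (assumed known)
    probability of being sampled. Weighting by 1/p corrects the selection
    bias that truncation induces in the naive empirical survival function."""
    weights = [1.0 / p for p in incl_probs]
    above = sum(w for w, x in zip(weights, times) if x > t)
    return above / sum(weights)

# Equal inclusion probabilities reduce to the ordinary empirical estimate;
# a short duration sampled with lower probability gets a larger weight,
# pulling the survival curve down at early times.
uniform = ipw_survival([1, 2, 3, 4], [0.5, 0.5, 0.5, 0.5], 2.5)
weighted = ipw_survival([1, 2, 3, 4], [0.25, 0.5, 0.5, 0.5], 2.5)
```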
58.
59.
This study focuses on variable selection in elliptical linear mixed models (LMMs) with a shrinkage penalty function (SPF). SPFs perform parameter estimation and variable selection simultaneously. The smoothly clipped absolute deviation (SCAD) penalty, one such SPF, is adapted to the elliptical LMM in this study. The proposed idea applies to a wide variety of models set up with different distributions, such as the normal, Student-t, Pearson VII, and power exponential. Simulation studies and a real-data example with one of the elliptical distributions show that when variable selection is also a concern, it is worthwhile to carry out variable selection and parameter estimation simultaneously in the elliptical LMM.
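The SCAD penalty the abstract adapts is piecewise: linear (LASSO-like) near zero so small coefficients are shrunk to exactly zero, quadratically tapering in a middle band, and constant beyond it so large coefficients are not over-shrunk. A direct transcription of Fan and Li's penalty with the customary a = 3.7:

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lam(theta): lam*|t| for |t| <= lam;
    (2*a*lam*|t| - t^2 - lam^2) / (2*(a - 1)) for lam < |t| <= a*lam;
    lam^2*(a + 1)/2 beyond a*lam. Continuous at both knots."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

Continuity at the knots (penalty value lam^2 at |theta| = lam, and lam^2*(a+1)/2 at |theta| = a*lam) is what makes the resulting estimator continuous in the data, one of SCAD's selling points over hard thresholding.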
60.
Assignment of individuals to the correct species or population of origin based on a comparison of allele profiles has in recent years become more accurate due to improvements in DNA marker technology. A method of assessing the error in such assignment problems is presented. The method is based on the exact hypergeometric distributions of contingency tables conditioned on marginal totals. The result is a confidence region of fixed confidence level. This confidence level is calculable exactly in principle, and estimable very accurately by simulation, without knowledge of the true population allele frequencies. Various properties of these techniques are examined through application to several examples of actual DNA marker data and through simulation studies. Methods which may reduce computation time are discussed and illustrated.