Paid full text: 931 articles
Free: 23 articles
Domestic free: 3 articles
Management: 86 articles
Demography: 9 articles
Collected works: 4 articles
Theory and methodology: 9 articles
General: 88 articles
Sociology: 4 articles
Statistics: 757 articles
2023: 5 articles
2022: 8 articles
2021: 10 articles
2020: 21 articles
2019: 31 articles
2018: 26 articles
2017: 71 articles
2016: 23 articles
2015: 33 articles
2014: 29 articles
2013: 290 articles
2012: 106 articles
2011: 38 articles
2010: 29 articles
2009: 32 articles
2008: 34 articles
2007: 15 articles
2006: 25 articles
2005: 23 articles
2004: 12 articles
2003: 17 articles
2002: 9 articles
2001: 12 articles
2000: 13 articles
1999: 10 articles
1998: 3 articles
1997: 1 article
1996: 2 articles
1995: 7 articles
1994: 2 articles
1993: 3 articles
1992: 5 articles
1991: 1 article
1990: 1 article
1989: 1 article
1987: 1 article
1986: 1 article
1982: 1 article
1981: 4 articles
1978: 1 article
1977: 1 article
Sort order: 957 results found in total (search time: 15 ms)
31.
The bootstrap has become an effective data-processing method in many fields. In many situations, however, confidence intervals for the model parameters of interest are difficult to construct. To address this problem, this paper proposes a new Bayesian bootstrap confidence-interval estimator. Monte Carlo simulation comparisons show that it outperforms both the classical interval-estimation method and the classical bootstrap method, and a real-data example is analyzed.
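The abstract does not spell out the proposed estimator, but the general Bayesian bootstrap idea it builds on (Rubin's Dirichlet-weight scheme) is easy to sketch. Below is a minimal illustration, assuming the parameter of interest is a weighted statistic such as the mean; the function name and defaults are illustrative, not the paper's method.

```python
import numpy as np

def bayesian_bootstrap_ci(data, stat=np.average, n_draws=5000, alpha=0.05, seed=0):
    """Percentile credible interval from the Bayesian bootstrap (Rubin, 1981).

    Each draw reweights the observed data with Dirichlet(1, ..., 1) weights
    instead of resampling with replacement as the classical bootstrap does.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n = len(data)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        w = rng.dirichlet(np.ones(n))      # posterior weights on the sample
        draws[i] = stat(data, weights=w)   # weighted statistic, e.g. the mean
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Example: 95% interval for the mean of a small skewed sample.
sample = np.random.default_rng(1).exponential(scale=2.0, size=30)
print(bayesian_bootstrap_ci(sample))
```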
32.
The possibility-degree model for ranking interval numbers is one of the fundamental problems that researchers have continued to explore. An interval number characterizes the range of values that an attribute of an object can take, and previous work has assumed that values within the interval follow a uniform distribution. This paper extends the uniform distribution to a general distribution and, using methods from probability theory, constructs a new possibility-degree model for ranking interval numbers. On this basis it revises the earlier definition of congruence of two interval numbers, introduces the notion of "shape equality" of interval numbers, and further revises the reflexivity condition of the possibility degree and the comprehensive ranking method for interval numbers. The theory is then applied to multi-attribute decision-making: the basic decision process is given, and calculations on an example decision problem demonstrate the feasibility and rationality of the new theory and method, which show good potential for wider application.
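The generalized-distribution model itself is not reproduced in the abstract. For orientation, here is a sketch of the classical uniform-assumption possibility degree that the paper generalizes; the formula is the widely used one for intervals a = [a1, a2] and b = [b1, b2], and the handling of degenerate point intervals is an assumption of this sketch.

```python
def possibility_degree(a, b):
    """Classical possibility degree P(a >= b) for intervals a = (a1, a2) and
    b = (b1, b2), derived under the uniform-distribution assumption that the
    paper generalizes.  Returns a value in [0, 1]."""
    a1, a2 = a
    b1, b2 = b
    la, lb = a2 - a1, b2 - b1
    if la + lb == 0:                  # both intervals degenerate to points
        return 1.0 if a1 > b1 else (0.5 if a1 == b1 else 0.0)
    return min(max((a2 - b1) / (la + lb), 0.0), 1.0)

# Complementarity of the classical model: P(a >= b) + P(b >= a) = 1.
a, b = (2.0, 5.0), (3.0, 6.0)
print(possibility_degree(a, b), possibility_degree(b, a))
```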
33.
Against the background of intensifying capital-market risk, this paper takes "company operating performance and stock-market performance improving in step" as the core element of robust value investing, and develops multi-criteria decision modeling for robust stock value investment based on two key elements: interval-data representation and accounting-information measurement. Oriented to the robust investment decision objective, it proposes an ordering mechanism satisfying three properties (robustness, locality, and globality), builds a systematic multi-criteria decision method along the main line of key feature selection, feature evaluation, and total-order modeling, and thereby constructs a research framework for robust stock value investment decision-making.
34.
In reverse engineering, the quality of cross-section curve reconstruction determines how well the original design intent of an object can be recovered, and extracting high-precision segmentation points is the key to improving reconstruction quality. To address the problem that the interval containing a segmentation point cannot be determined when reconstructing arcs and B-spline curves, this paper proposes a linearization treatment for arcs and B-splines and, combining it with a segmentation-point interval determination method based on mathematical statistics and a reconstruction method based on golden-section search, extracts high-precision segmentation points between arcs and B-splines. Examples show that the method can locate more accurate segmentation points within the existing data, effectively solving the problem of extracting high-precision segmentation points between arcs and B-splines.
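The abstract names golden-section search as the refinement step. The sketch below shows that generic one-dimensional search, not the paper's full linearization-plus-statistics pipeline; the fitting-error function f is a hypothetical stand-in for the arc/B-spline deviation measure, assumed unimodal near the true segmentation point.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    """Locate the minimizer of a unimodal function f on [lo, hi] by
    golden-section search; here f would be the arc/B-spline fitting
    error as a function of the candidate segmentation position."""
    invphi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                           # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:                                 # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2

# Toy stand-in for the fitting error: minimized at t = 0.37.
print(golden_section_min(lambda t: (t - 0.37) ** 2, 0.0, 1.0))
```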
35.
36.
This paper addresses the problems of frequentist and Bayesian estimation for the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and then use these results to compute the means, variances and covariance between two lower record values. We next obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under the assumption of gamma priors on both the shape and the scale parameters of the generalized Lindley distribution, along with the associated highest posterior density (HPD) interval estimates. The Bayesian estimation is studied with respect to both symmetric (squared error) and asymmetric (linear-exponential, LINEX) loss functions. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation and prediction.
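For readers unfamiliar with record data: a lower record value is an observation strictly smaller than every observation before it. A minimal sketch of extracting the lower record sequence on which such estimators operate (the helper name is ours, not the paper's):

```python
def lower_records(seq):
    """Return the lower record values of a sequence: the first observation,
    then every later observation strictly smaller than all before it."""
    records, current_min = [], float("inf")
    for x in seq:
        if x < current_min:
            records.append(x)
            current_min = x
    return records

# Successive minima of a sample form the lower record sequence.
print(lower_records([5.2, 7.1, 4.8, 6.0, 3.9, 4.1, 2.7]))  # [5.2, 4.8, 3.9, 2.7]
```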
37.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
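As a rough sketch of the setting: below is a standard double-bootstrap percentile calibration, in which second-level resamples estimate the coverage level of the first-level percentile interval and recalibrate its endpoints. This is generic textbook machinery, not the paper's interpolation scheme; note, though, that np.quantile's default linear interpolation between order statistics is the kind of second-level interpolation the paper analyzes.

```python
import numpy as np

def double_bootstrap_ci(data, stat=np.mean, B=500, M=500, alpha=0.05, seed=0):
    """Double-bootstrap (calibrated) percentile interval.

    Outer level: B resamples give the percentile distribution of the
    statistic.  Inner level: for each outer resample, M second-level
    resamples estimate where the original estimate falls, and those
    estimated levels recalibrate the percentile endpoints.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n, theta_hat = len(data), stat(data)
    theta_star, u = np.empty(B), np.empty(B)
    for i in range(B):
        boot = rng.choice(data, size=n, replace=True)
        theta_star[i] = stat(boot)
        inner = np.array([stat(rng.choice(boot, size=n, replace=True))
                          for _ in range(M)])
        u[i] = np.mean(inner <= theta_hat)   # second-level coverage level
    q_lo, q_hi = np.quantile(u, [alpha / 2, 1 - alpha / 2])
    return np.quantile(theta_star, q_lo), np.quantile(theta_star, q_hi)

sample = np.random.default_rng(1).gamma(shape=2.0, size=40)
print(double_bootstrap_ci(sample))
```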
38.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, yet are correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way that provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement holds in many potential applications, but only in limited situations in survival analysis.
39.
Assessment of analytical similarity of tier 1 quality attributes is based on a set of hypotheses that test the mean difference between reference and test products against a margin scaled by the standard deviation of the reference product. Proper assessment of the biosimilarity hypothesis therefore requires statistical tests that account for the uncertainty associated with estimating both the mean difference and the standard deviation of the reference product. Recently, a linear reformulation of the biosimilarity hypothesis has been proposed, which facilitates the development and implementation of statistical tests that account for the uncertainty in estimating all of the unknown parameters. In this paper, we survey methods for constructing confidence intervals for testing the linearized reformulation of the biosimilarity hypothesis and compare their performance. We discuss test procedures based on confidence intervals so as to enable comparison among recently developed methods, as well as previously developed methods that have not yet been applied to demonstrating analytical similarity. A computer simulation study was conducted to compare the methods in terms of their ability to maintain the test size and power, as well as their computational complexity. We demonstrate the methods using two example applications and close with recommendations concerning their use.
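For concreteness, a naive plug-in version of the tier 1 equivalence test is sketched below: declare similarity when a 90% confidence interval for the mean difference lies within ±k times the sample standard deviation of the reference product (k = 1.5 is the convention discussed in this literature). Treating the estimated margin as if it were known ignores exactly the uncertainty the surveyed methods account for, so this sketch marks the baseline, not any of the paper's procedures.

```python
import numpy as np
from scipy import stats

def tier1_equivalence_naive(test, ref, k=1.5, alpha=0.05):
    """Naive plug-in tier 1 equivalence test: similar if the 90% two-sided
    CI for mean(test) - mean(ref) lies within +/- k * sd(ref).  Treating
    the margin k * sd(ref) as known is the simplification that the
    surveyed methods improve upon."""
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    nt, nr = len(test), len(ref)
    diff = test.mean() - ref.mean()
    se = np.sqrt(test.var(ddof=1) / nt + ref.var(ddof=1) / nr)
    df = nt + nr - 2                     # crude degrees of freedom
    half = stats.t.ppf(1 - alpha, df) * se
    margin = k * ref.std(ddof=1)
    return (diff - half > -margin) and (diff + half < margin)

rng = np.random.default_rng(2)
print(tier1_equivalence_naive(rng.normal(0.1, 1, 12), rng.normal(0.0, 1, 12)))
```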
40.
In this paper, we discuss theoretical results and properties of the discrete Weibull distribution introduced by Nakagawa and Osaki [The discrete Weibull distribution. IEEE Trans Reliab. 1975;24:300–301]. We study the monotonicity of the probability mass, survival and hazard functions; reliability, moments, p-quantiles, entropies and order statistics are also studied. We consider likelihood-based methods to estimate the model parameters from complete and censored samples and to derive confidence intervals, together with two additional estimation methods. The uniqueness of the maximum likelihood estimate of one of the parameters indexing the discrete Weibull model is discussed. The model is evaluated numerically via Monte Carlo simulations, and two real data sets are analyzed for illustrative purposes.
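The Nakagawa-Osaki probability mass function is P(X = x) = q^(x^β) − q^((x+1)^β) for x = 0, 1, 2, …, with 0 < q < 1 and β > 0, so the survival function is q^(x^β). A minimal sketch of the pmf and a direct maximum likelihood fit for a complete sample (simulation parameters and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def dweibull_pmf(x, q, beta):
    """Nakagawa-Osaki discrete Weibull pmf:
    P(X = x) = q**(x**beta) - q**((x+1)**beta), x = 0, 1, 2, ...,
    with 0 < q < 1 and beta > 0; the survival function is q**(x**beta)."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dweibull_mle(data):
    """MLE of (q, beta) from a complete sample, by direct numerical
    minimization of the negative log-likelihood."""
    data = np.asarray(data, dtype=float)

    def nll(params):
        q, beta = params
        if not (0 < q < 1) or beta <= 0:
            return np.inf                  # outside the parameter space
        p = dweibull_pmf(data, q, beta)
        return -np.sum(np.log(np.maximum(p, 1e-300)))

    res = minimize(nll, x0=[0.5, 1.0], method="Nelder-Mead")
    return res.x  # (q_hat, beta_hat)

# Simulate by inverting the survival function, then refit (q=0.8, beta=1.3).
rng = np.random.default_rng(3)
u = rng.uniform(size=500)
sample = np.ceil((np.log(u) / np.log(0.8)) ** (1 / 1.3)) - 1
print(dweibull_mle(sample))
```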