11.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists of constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: if the lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence the approximation is inaccurate. Our lower-bound error analysis is complementary to the conventional upper-bound (worst-case) error analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess the approximation errors of the first- and second-order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter under some empirically relevant parameterizations.
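The conventional residual-based accuracy check the abstract contrasts with can be sketched in a few lines. The snippet below computes unit-free Euler-equation residuals for a candidate policy in the Brock–Mirman growth model (log utility, Cobb–Douglas output, full depreciation), where the exact policy is known in closed form; it is an illustration of residual-based accuracy checking, not the authors' lower-bound construction, and the parameter values are assumptions.

```python
import numpy as np

# Brock–Mirman growth model (log utility, Cobb–Douglas output, full
# depreciation): the exact policy is k' = alpha*beta*k^alpha.  Parameter
# values are illustrative assumptions, not taken from the paper.
alpha, beta = 0.36, 0.96

def exact_policy(k):
    return alpha * beta * k**alpha

def euler_residual(policy, k):
    """Unit-free Euler-equation residual of a candidate policy at capital k."""
    k1 = policy(k)               # next-period capital
    k2 = policy(k1)
    c0 = k**alpha - k1           # consumption today
    c1 = k1**alpha - k2          # consumption tomorrow
    return beta * alpha * k1**(alpha - 1.0) * c0 / c1 - 1.0

# Evaluate on a grid around the steady state.
grid = np.linspace(0.5, 1.5, 11) * (alpha * beta) ** (1 / (1 - alpha))
exact_err = np.max(np.abs(euler_residual(exact_policy, grid)))
# A deliberately distorted policy shows a nonzero residual:
rough_err = np.max(np.abs(euler_residual(lambda k: 0.9 * exact_policy(k), grid)))
print(exact_err, rough_err)
```

The exact policy yields residuals at machine precision, while the distorted policy produces a constant relative residual, illustrating how residual size tracks approximation quality.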
12.
《Risk analysis》2018,38(8):1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
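A minimal sketch of the idea: for a rare-event multilinear top expression, each product of independent lognormal basic-event probabilities is itself lognormal, and the sum of terms can be approximated by a moment-matched lognormal. This is a Fenton–Wilkinson-style sketch in the spirit of the paper's closed form, not the paper's exact formula, and the fault tree and (mu, sigma) values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Top event = (A and B) or (C and D), rare-event approximation:
# P_top ≈ pA*pB + pC*pD.  Basic-event probabilities are lognormal;
# the (mu, sigma) pairs below are illustrative assumptions.
params = [(-6.0, 0.5), (-5.0, 0.4), (-7.0, 0.6), (-4.5, 0.3)]

def term_moments(mus_sigmas):
    """Mean and variance of a product of independent lognormals
    (a product is lognormal with the mu's and sigma^2's summed)."""
    mu = sum(m for m, s in mus_sigmas)
    s2 = sum(s * s for m, s in mus_sigmas)
    mean = np.exp(mu + s2 / 2)
    var = (np.exp(s2) - 1) * np.exp(2 * mu + s2)
    return mean, var

m1, v1 = term_moments([params[0], params[1]])
m2, v2 = term_moments([params[2], params[3]])
mean, var = m1 + m2, v1 + v2             # independent terms

# Moment matching: fit a lognormal to (mean, var) of the sum.
s2 = np.log(1 + var / mean**2)
mu = np.log(mean) - s2 / 2
median_cf = np.exp(mu)                   # closed-form median

# Monte Carlo reference.
draws = [rng.lognormal(m, s, 200_000) for m, s in params]
top = draws[0] * draws[1] + draws[2] * draws[3]
median_mc = np.median(top)
print(median_cf, median_mc)
```

The two medians agree closely here, while the closed form costs a handful of arithmetic operations instead of 200,000 samples.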
13.
Liu Tian & Tan Jin, 《统计研究》 (Statistical Research), 2011, 28(4): 99-105
Conventional unit root tests typically assume a linear deterministic trend; when the trend is in fact nonlinear, the tests often fail because their power drops sharply. This paper studies a method that approximates the nonlinear trend with orthogonal polynomials and then applies a unit root test to the residuals. We investigate the properties of orthogonal-polynomial trend approximation, derive the limiting distribution of the test statistic under this procedure, propose a method for determining the highest order of the orthogonal polynomials, and study the power of the test by simulation with both correlated and uncorrelated residuals. The results show that the test is effective.
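The two-step procedure described above can be sketched as follows: fit an orthogonal (here Chebyshev) polynomial to the series, then run a Dickey–Fuller-type regression on the residuals. The data-generating process and the polynomial order are assumptions for illustration; note that the critical values for this procedure are nonstandard (the paper derives the limiting distribution), so the code only computes the statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
t = np.arange(n)

# Trend-stationary data with a nonlinear deterministic trend (illustrative).
trend = 5.0 + 0.02 * t + 2.0 * np.sin(2 * np.pi * t / n)
y = trend + rng.normal(scale=1.0, size=n)

# Step 1: approximate the trend with orthogonal (Chebyshev) polynomials.
x = np.linspace(-1, 1, n)          # map time to [-1, 1]
deg = 4                            # highest order; the paper proposes a rule
coef = np.polynomial.chebyshev.chebfit(x, y, deg)
resid = y - np.polynomial.chebyshev.chebval(x, coef)

# Step 2: Dickey–Fuller-type regression on the residuals:
# Δe_t = rho * e_{t-1} + u_t; the t-statistic of rho is the test statistic.
e_lag, de = resid[:-1], np.diff(resid)
rho = e_lag @ de / (e_lag @ e_lag)
u = de - rho * e_lag
se = np.sqrt((u @ u) / (len(u) - 1) / (e_lag @ e_lag))
df_stat = rho / se
print(df_stat)
```

For this trend-stationary series the statistic is strongly negative, consistent with rejecting a unit root in the detrended residuals.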
14.
We propose a Monte Carlo sampling algorithm for estimating guaranteed-coverage tolerance factors for non-normal continuous distributions with known shape but unknown location and scale. The algorithm is based on reformulating this root-finding problem as a quantile-estimation problem. The reformulation leads to a geometrical interpretation of the tolerance-interval factor. For arbitrary distribution shapes, we analytically and empirically investigate various relationships among tolerance-interval coverage, confidence, and sample size.
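The quantile reformulation can be sketched directly: for each simulated sample, find the smallest factor k whose interval x̄ ± k·s actually covers proportion p of the known-shape distribution; the tolerance factor is then the confidence-level quantile of those per-sample factors. The logistic shape is chosen only because its CDF has a closed form; this is a sketch of the reformulation, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def F(z):                        # standard logistic CDF (the "known shape")
    return 1.0 / (1.0 + np.exp(-z))

def k_needed(xbar, s, p):
    """Smallest k with F(xbar + k*s) - F(xbar - k*s) >= p, by bisection."""
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if F(xbar + mid * s) - F(xbar - mid * s) >= p:
            hi = mid
        else:
            lo = mid
    return hi

n, p, gamma, reps = 20, 0.90, 0.95, 5_000
ks = np.empty(reps)
for i in range(reps):
    # Location 0, scale 1 without loss of generality: the factor is
    # invariant to location-scale shifts when the shape is known.
    x = rng.logistic(size=n)
    ks[i] = k_needed(x.mean(), x.std(ddof=1), p)

# The guaranteed-coverage tolerance factor is the gamma-quantile
# of the per-sample factors.
k_factor = np.quantile(ks, gamma)
print(k_factor)
```

The root-finding problem (choose k so that coverage p is achieved with confidence gamma) has become the estimation of a single quantile of a simulated distribution.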
15.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple lag autoregressive models with fitted drifts and time trends as well as models that allow for certain types of structural change in the deterministic components are considered. We utilize a modified information matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
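The Laplace approximation at the heart of the derivation can be checked on a toy one-parameter problem where the posterior integral has a closed form. Below, Poisson counts with an Exp(1) prior on the rate give a Gamma-type marginal likelihood; the model and data are illustrative assumptions, not the paper's autoregressive setup.

```python
import numpy as np
from math import lgamma, log, pi

# Poisson data with an Exp(1) prior on the rate: the marginal likelihood has
# a closed form (a Gamma integral), so the Laplace approximation is checkable.
y = np.array([3, 1, 4, 2, 2, 5, 3])          # illustrative counts
n, S = len(y), int(y.sum())
const = sum(lgamma(k + 1) for k in y)        # log prod y_i!

# Exact log marginal: Gamma(S+1) / (n+1)^(S+1) / prod y_i!
log_exact = lgamma(S + 1) - (S + 1) * log(n + 1) - const

# Laplace: expand h(lam) = S*log(lam) - (n+1)*lam - const around its mode
# and integrate the resulting Gaussian analytically.
lam_star = S / (n + 1)                       # mode of the integrand
h = S * log(lam_star) - (n + 1) * lam_star - const
h2 = -S / lam_star**2                        # second derivative at the mode
log_laplace = h + 0.5 * log(2 * pi / -h2)

print(log_exact, log_laplace)
```

Even with only seven observations the two log marginals agree to roughly three decimal places, which is why the approximation supports analytic posterior densities in higher-dimensional settings.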
16.
Results of a Monte Carlo study of the performance of maximum likelihood estimation in a Weibull parametric regression model with two explanatory variables are presented. Each simulation run contained 1000 samples, censored on average by 0-30%. Each simulated sample was generated as a two-factor, two-level balanced experiment. The confidence intervals were computed using the large-sample normal approximation via the matrix of observed information. For small sample sizes the estimates of the scale parameter b of the log-lifetime were significantly negatively biased, which resulted in poor-quality confidence intervals for b and the low-level quantiles. All estimators improved in quality when the nominal value of b decreased. A moderate amount of censoring improved the quality of point and interval estimation. A reparametrization of b produced rather accurate confidence intervals. Exact confidence intervals for b in the uncensored case were obtained using the pivotal quantity b̂/b.
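One replication of such a study can be sketched without the regression covariates: simulate Weibull lifetimes with Type I censoring and fit shape and scale by maximum likelihood via the standard profile-likelihood equation. The true parameter values and censoring time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate Weibull(shape=c, scale=a) lifetimes with Type I censoring at tc,
# then fit by maximum likelihood.  True values are illustrative.
c_true, a_true, n, tc = 1.5, 100.0, 50, 150.0
t = a_true * rng.weibull(c_true, size=n)
obs = np.minimum(t, tc)                 # observed times
delta = t <= tc                         # True = failure observed, False = censored
r = delta.sum()                         # number of observed failures

# Profile likelihood score for the shape c (scale profiled out); censored
# observations contribute only through the obs**c terms.
def score(c):
    w = obs**c
    return (w @ np.log(obs)) / w.sum() - 1.0 / c - np.log(obs[delta]).mean()

lo, hi = 0.05, 20.0                     # bracket the root, then bisect
for _ in range(80):
    mid = (lo + hi) / 2
    if score(mid) > 0:                  # score is increasing in c
        hi = mid
    else:
        lo = mid
c_hat = (lo + hi) / 2
a_hat = ((obs**c_hat).sum() / r) ** (1.0 / c_hat)
print(c_hat, a_hat, r)
```

A full study repeats this 1000 times per design cell and tabulates the bias of the estimates and the coverage of the normal-approximation intervals.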
17.
Drawing on results from psychology, philosophy, economics, and pedagogy, and taking undergraduate education at Chinese universities as its example, this paper explores the fuzzy measurement of the complexity of labor. It argues that the complexity of professional labor is determined by the difficulty of learning the corresponding specialty; that this learning difficulty is determined by the specialty's content of cross-disciplinary and experiential knowledge; and that this content is reflected in the core curriculum prescribed by the expert system. On this basis, a fuzzy spectrum of disciplinary learning difficulty and a corresponding fuzzy spectrum of professional labor complexity are constructed.
18.
This paper provides the percentiles, obtained by simulation, of an informational test statistic. It gives evidence that the widely used chi-square approximation to this test statistic is not suitable.
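The workflow of simulating a statistic's percentiles and comparing them with chi-square percentiles can be sketched with a stand-in: the multinomial G² (information) statistic, which is asymptotically chi-square but deviates in small samples. The statistic, null probabilities, and sample size here are assumptions; the paper's informational statistic may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "informational" statistic: the multinomial G^2 statistic
# 2 * sum O_i * log(O_i / E_i).  Illustrative null and sample size.
p0 = np.array([0.4, 0.3, 0.2, 0.1])     # null cell probabilities (assumed)
n, reps = 30, 20_000

counts = rng.multinomial(n, p0, size=reps)
E = n * p0
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(counts > 0, counts * np.log(counts / E), 0.0)
G2 = 2 * terms.sum(axis=1)

# Reference chi-square(df=3) percentiles, also by simulation (sums of
# squared standard normals), so no external tables are needed.
chi2 = (rng.normal(size=(reps, 3)) ** 2).sum(axis=1)

q95_G2 = np.quantile(G2, 0.95)
q95_chi2 = np.quantile(chi2, 0.95)
print(q95_G2, q95_chi2)
```

Comparing the two 95th percentiles shows how far the finite-sample distribution sits from its chi-square limit, which is the kind of evidence the paper reports.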
19.
20.
In this paper the researchers present an upper bound for the distribution function of quadratic forms in a normal vector with mean zero and positive definite covariance matrix. They also show that the new upper bound is more precise than the one introduced by Okamoto [4] and the one introduced by Siddiqui [5]. Theoretical error bounds for both the new and the Okamoto upper bounds are derived. For a larger number of terms in any given positive definite quadratic form, a rougher and easier upper bound is suggested.
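The object being bounded can be illustrated by Monte Carlo: the tail probability of Q = x'Ax for mean-zero normal x, compared against the elementary Markov bound tr(AΣ)/t. The matrices are illustrative assumptions, and the Markov bound stands in for, but is much cruder than, the bounds of Okamoto, Siddiqui, or the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Positive definite quadratic form Q = x' A x with x ~ N(0, Sigma).
# A and Sigma below are illustrative assumptions.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

x = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
Q = np.einsum("ni,ij,nj->n", x, A, x)   # x_k' A x_k for every sample k

t = 15.0
p_mc = np.mean(Q > t)                   # Monte Carlo tail probability
markov = np.trace(A @ Sigma) / t        # E[Q] = tr(A Sigma) for mean-zero x
print(p_mc, markov)
```

The Monte Carlo tail probability sits well below the Markov bound here, which is why sharper analytic bounds on the distribution function are worth deriving.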