1.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
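As a rough illustration of the sampling-based comparison described in this abstract, the sketch below propagates lognormally distributed basic-event probabilities through a small hypothetical fault tree (two AND gates under an OR gate) by Monte Carlo and contrasts the estimated 95th percentile of the top event probability with a Wilks-style 95/95 upper bound. The tree structure, medians, and error factors are illustrative assumptions, not the model analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fault tree: top event = (A AND B) OR (B AND C).
# Basic-event probabilities are lognormal, specified by a median and an
# error factor (95th percentile / median).
medians = np.array([1e-3, 5e-4, 2e-3])        # assumed medians
error_factors = np.array([3.0, 5.0, 3.0])     # assumed error factors
sigmas = np.log(error_factors) / 1.645        # lognormal sigma from error factor

def top_event(p):
    a, b, c = p[..., 0], p[..., 1], p[..., 2]
    # Rare-event (multilinear) approximation for the OR of the two AND gates.
    return a * b + b * c

# Full Monte Carlo: sample basic-event probabilities and propagate.
n_mc = 100_000
samples = rng.lognormal(np.log(medians), sigmas, size=(n_mc, 3))
top = top_event(samples)
print("MC mean          :", top.mean())
print("MC 95th pct      :", np.quantile(top, 0.95))

# Wilks bound: with 59 samples the maximum is a 95%-confidence upper
# bound on the 95th percentile, because 0.95**59 < 0.05.
n_wilks = 59
top_w = top_event(rng.lognormal(np.log(medians), sigmas, size=(n_wilks, 3)))
print("Wilks 95/95 bound:", top_w.max())
```

With 59 samples the largest observed top event probability is a one-sided 95% confidence upper bound on the 95th percentile, which is the computational saving relative to full Monte Carlo noted above.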
2.
In this paper the authors present an upper bound for the distribution function of a quadratic form in a normal vector with mean zero and a positive definite covariance matrix. They also show that the new upper bound is more precise than the bounds introduced by Okamoto [4] and by Siddiqui [5]. Theoretical error bounds are derived for both the new bound and Okamoto's bound. For quadratic forms with a large number of terms, a rougher but simpler upper bound is suggested.
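The bounds themselves are not reproduced in this abstract, but the quantity being bounded, the distribution function of a positive definite quadratic form in a zero-mean normal vector, is straightforward to estimate by simulation. The sketch below computes such a Monte Carlo reference value; the covariance matrix, the form matrix A, and the evaluation points are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed positive definite covariance matrix and quadratic-form matrix.
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
A = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.1, 0.5]])

# Monte Carlo estimate of F(t) = P(x' A x <= t) for x ~ N(0, Sigma).
n = 200_000
x = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
q = np.einsum("ni,ij,nj->n", x, A, x)

for t in (1.0, 3.0, 6.0, 10.0):
    print(f"P(x'Ax <= {t:4.1f}) ~= {np.mean(q <= t):.4f}")
```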
3.
4.
We propose a simple and efficient way to approximate multivariate normal probabilities using univariate and bivariate probabilities. The approximation is tested computationally for trivariate and quadrivariate normal probabilities. A few higher-dimensional problems were also tested.
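The exact decomposition used by the authors is not given in the abstract. The sketch below shows one product-type approximation in the same spirit, rebuilding a trivariate normal probability from bivariate and univariate pieces and comparing it with a numerically computed trivariate value; the correlation matrix, the thresholds, and the particular product formula are assumptions, not necessarily the authors' construction.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Assumed equicorrelated trivariate normal model and thresholds.
rho = 0.4
R = np.array([[1.0, rho, rho],
              [rho, 1.0, rho],
              [rho, rho, 1.0]])
a = np.array([0.5, 1.0, 1.5])

def bvn_cdf(a1, a2, r):
    """Bivariate standard normal CDF via scipy."""
    cov = np.array([[1.0, r], [r, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a1, a2])

# Product-type approximation built from bivariate and univariate pieces
# (one member of this family of approximations, used here for illustration):
#   P(X1<=a1, X2<=a2, X3<=a3) ~ P12 * P23 / P2
approx = bvn_cdf(a[0], a[1], rho) * bvn_cdf(a[1], a[2], rho) / norm.cdf(a[1])

# Numerical trivariate reference value.
reference = multivariate_normal(mean=np.zeros(3), cov=R).cdf(a)

print(f"product-type approximation: {approx:.5f}")
print(f"numerical trivariate value: {reference:.5f}")
```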
5.
Drawing on results from psychology, philosophy, economics, and education, and taking undergraduate education in China as an example, this paper investigates the fuzzy measurement of the degree of labor complexity. It argues that the complexity of professional labor is determined by the difficulty of learning the corresponding discipline; that this learning difficulty is in turn determined by the discipline's content of cross-disciplinary and experiential knowledge; and that these contents are reflected in the core curriculum specified by expert systems. On this basis, a fuzzy spectrum of disciplinary learning difficulty and a corresponding fuzzy spectrum of professional labor complexity are constructed.
6.
Sequential designs can be used to save computation time in implementing Monte Carlo hypothesis tests. The motivation is to stop resampling if the early resamples already provide enough information about the significance of the p-value of the original Monte Carlo test. In this paper, we consider a sequential design called the B-value design, proposed by Lan and Wittes, and construct the sequential design bounding the resampling risk, the probability that the accept/reject decision differs from the decision based on complete enumeration. For the B-value design, whose exact implementation can be carried out using the algorithm proposed in Fay, Kim and Hachey, we first compare the expected resample size across designs with comparable resampling risk. We show that the B-value design yields considerable savings in expected resample size compared to a fixed resample size or a simple curtailed design, and has an expected resample size comparable to that of the iterative push out design of Fay and Follmann. The B-value design is more practical than the iterative push out design in that it remains tractable even for small values of the resampling risk, which was a challenge with the iterative push out design. We also propose an approximate B-value design that can be constructed without specially developed software and that provides analytic insight into the choice of parameter values when constructing the exact B-value design.
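As a point of reference for the designs compared above, the sketch below implements the simple curtailed design mentioned in this abstract: resampling stops as soon as the running count of exceedances forces the level-alpha accept/reject decision either way. The test statistic, the resample generator, and the parameter values are placeholders; the exact B-value design itself requires the construction of Fay, Kim and Hachey and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def curtailed_mc_test(t_obs, draw_resample_stat, alpha=0.05, B=999):
    """Simple curtailed Monte Carlo test (the baseline design in the abstract).

    Stops resampling once the accept/reject decision at level `alpha`
    can no longer change.  Returns (reject, number_of_resamples_used).
    """
    # Reject iff p-hat = (1 + C_B) / (B + 1) <= alpha, i.e. C_B <= k.
    k = int(np.floor(alpha * (B + 1) - 1))
    c = 0
    for b in range(1, B + 1):
        c += draw_resample_stat() >= t_obs
        if c > k:                 # too many exceedances: rejection is impossible
            return False, b
        if c + (B - b) <= k:      # too few possible exceedances: rejection is certain
            return True, b
    return c <= k, B

# Placeholder example: observed statistic vs. resamples drawn under the null.
t_obs = 2.3
reject, used = curtailed_mc_test(t_obs, lambda: abs(rng.standard_normal()))
print(f"reject={reject}, resamples used={used}")
```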
7.
The author proposes a new method for flexible regression modeling of multi-dimensional data, where the regression function is approximated by a linear combination of logistic basis functions. The method is adaptive, selecting simple or more complex models as appropriate. The number, location, and (to some extent) shape of the basis functions are automatically determined from the data. The method is also affine invariant, so accuracy of the fit is not affected by rotation or scaling of the covariates. Squared error and absolute error criteria are both available for estimation. The latter provides a robust estimator of the conditional median function. Computation is relatively fast, particularly for large data sets, so the method is well suited for data mining applications.
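The fitting algorithm is not described in this abstract, but the basic model form is easy to write down. The sketch below fits a fixed number of logistic (sigmoid) basis functions to one-dimensional data by nonlinear least squares; the number of basis functions, their initialization, and the optimizer are assumptions, and the adaptive model selection, affine invariance, and absolute-error option described above are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import expit

rng = np.random.default_rng(3)

# Synthetic one-dimensional example data.
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)

K = 6  # assumed, fixed number of logistic basis functions

def model(params, x):
    # params = [beta_0, beta_1..beta_K, a_1..a_K, b_1..b_K]
    beta0 = params[0]
    beta = params[1:1 + K]
    a = params[1 + K:1 + 2 * K]
    b = params[1 + 2 * K:1 + 3 * K]
    # Logistic basis functions 1 / (1 + exp(-(a_k * x + b_k))).
    basis = expit(np.outer(x, a) + b)
    return beta0 + basis @ beta

def residuals(params):
    return model(params, x) - y

# Crude initialization: unit slopes, centers spread over the data range.
p0 = np.concatenate([[y.mean()], np.zeros(K), np.ones(K),
                     -np.linspace(x.min(), x.max(), K)])
fit = least_squares(residuals, p0)
print("residual sum of squares:", np.sum(fit.fun ** 2))
```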
8.
Recent developments in higher-order asymptotic theory for statistical inference have emphasized the fundamental role of the likelihood function in providing accurate approximations to cumulative distribution functions. This paper summarizes the main results, with an emphasis on classes of problems for which relatively easily implemented solutions exist. A survey of the literature indicates the large number of problems solved and solvable by this method. Generalizations and extensions, with suggestions for further development, are considered.
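One of the standard results in this literature is the modified signed likelihood root r*, which converts likelihood quantities into a highly accurate normal-scale approximation to a tail probability. As a hedged illustration, the sketch below applies the usual one-parameter exponential-family form of r* to exponential data with an assumed null rate, where the exact tail probability is a gamma probability that serves as a check; the data summary and parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm, gamma

# Exponential sample summary (assumed data) and null hypothesis rate.
n, s_obs = 10, 15.0           # sample size and observed sum of the data
lam0 = 1.0                    # H0: rate = lam0
lam_hat = n / s_obs           # MLE of the rate

# Signed likelihood root r and canonical-scale Wald statistic q
# (exponential family with canonical parameter theta = -lambda).
loglik_diff = s_obs * (lam0 - lam_hat) + n * np.log(lam_hat / lam0)
r = np.sign(lam0 - lam_hat) * np.sqrt(2.0 * loglik_diff)
q = (lam0 - lam_hat) * s_obs / np.sqrt(n)

# Barndorff-Nielsen modified root and the resulting approximations to
# the upper tail probability P(S >= s_obs) under H0.
r_star = r + np.log(q / r) / r
print("first-order  1 - Phi(r) :", norm.sf(r))
print("higher-order 1 - Phi(r*):", norm.sf(r_star))
print("exact gamma tail        :", gamma.sf(s_obs, a=n, scale=1.0 / lam0))
```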
9.
We propose a novel methodology for evaluating the accuracy of numerical solutions to dynamic economic models. It consists in constructing a lower bound on the size of approximation errors. A small lower bound on errors is a necessary condition for accuracy: if a lower error bound is unacceptably large, then the actual approximation errors are even larger, and hence the approximation is inaccurate. Our lower-bound error analysis is complementary to the conventional upper-bound (worst-case) error analysis, which provides a sufficient condition for accuracy. As an illustration of our methodology, we assess approximation in the first- and second-order perturbation solutions for two stylized models: a neoclassical growth model and a new Keynesian model. The errors are small for the former model but unacceptably large for the latter model under some empirically relevant parameterizations.
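To make the notion of approximation error concrete, the sketch below takes the one case of the neoclassical growth model with a closed-form solution (log utility, full depreciation) and measures how far a first-order, linear-in-levels policy around the steady state deviates from the exact policy. This is the conventional check against a known solution, not the authors' lower-bound construction, and the parameter values are assumptions.

```python
import numpy as np

# Neoclassical growth model with log utility and full depreciation:
# the exact policy is k' = alpha * beta * k**alpha (assumed parameters below).
alpha, beta = 0.36, 0.96

k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))   # steady-state capital

def exact_policy(k):
    return alpha * beta * k ** alpha

def linear_policy(k):
    # First-order (linear-in-levels) approximation around the steady state;
    # the slope of the exact policy at k_star is alpha.
    return k_star + alpha * (k - k_star)

# Evaluate errors on a grid around the steady state.
k_grid = np.linspace(0.5 * k_star, 1.5 * k_star, 101)
rel_err = np.abs(linear_policy(k_grid) / exact_policy(k_grid) - 1.0)
print(f"steady-state capital k*   : {k_star:.4f}")
print(f"max relative policy error : {rel_err.max():.4%}")
```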
10.
A stochastic approximation procedure of the Robbins-Monro type is considered. The original idea behind the Newton-Raphson method is used as follows. Given n approximations X1,…, Xn with observations Y1,…, Yn, a least squares line is fitted to the points (Xm, Ym),…, (Xn, Yn), where m<n may depend on n. The (n+1)st approximation is taken to be the intersection of the least squares line with y=0. A variation of the resulting process is studied. It is shown that this process yields a strongly consistent sequence of estimates which is asymptotically normal with minimal asymptotic variance.
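The recursion described in this abstract is simple to prototype. In the sketch below, each step fits an ordinary least squares line to the most recent observations and takes the line's root as the next design point; the regression function, noise level, window rule, and truncation safeguard are illustrative assumptions rather than the specific variation analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_regression(x):
    """Unknown regression function observed with noise; its root is x = 2."""
    return 1.5 * (x - 2.0) + 0.5 * rng.standard_normal()

def ls_root_finding(n_steps=200, x0=0.0, x1=1.0, box=(-10.0, 10.0)):
    xs = [x0, x1]
    ys = [noisy_regression(x0), noisy_regression(x1)]
    for n in range(2, n_steps):
        m = min(n - 2, n // 2)            # assumed window rule: latest half, at least 2 points
        slope, intercept = np.polyfit(xs[m:], ys[m:], 1)
        if abs(slope) > 1e-8:
            x_next = -intercept / slope   # intersection of the LS line with y = 0
        else:
            x_next = xs[-1]               # safeguard when the fitted line is flat
        x_next = float(np.clip(x_next, *box))  # truncation safeguard (assumption)
        xs.append(x_next)
        ys.append(noisy_regression(x_next))
    return xs[-1]

print("estimated root (true value 2.0):", ls_root_finding())
```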