1.
We propose new dynamic measures of uncertainty based on the notion of generalized dynamic entropy introduced by Di Crescenzo and Longobardi (2006). These measures uniquely determine the distribution function in both the continuous and discrete cases, and characterizations of some well-known distributions are provided. We also define some orderings and aging notions based on the generalized dynamic measures and prove some of their properties, obtaining as corollaries results that have recently appeared in the literature.
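As a concrete illustration of how a dynamic (residual) uncertainty measure behaves, a sketch under our own assumptions (not the paper's generalized entropy): the residual Shannon entropy of an exponential lifetime past time t, computed by numerical quadrature. For the exponential law it is constant in t, equal to 1 − log λ, one expression of memorylessness. Function names and quadrature settings are illustrative.

```python
import math
import numpy as np

def residual_entropy_exp(lam, t, upper=None, n=200_000):
    """Residual Shannon entropy of an Exp(lam) lifetime past time t:
    H(t) = -integral_t^inf g(x) log g(x) dx, with g the density of X given X > t.
    Plain trapezoidal quadrature; the truncation point is illustrative."""
    if upper is None:
        upper = t + 40.0 / lam          # tail beyond this point is negligible
    x = np.linspace(t, upper, n)
    g = lam * np.exp(-lam * (x - t))    # conditional (residual) density past t
    integrand = -g * np.log(g)
    dx = x[1] - x[0]
    return dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# For the exponential law the residual entropy does not depend on t
# (it equals 1 - log(lam)), so these two values should agree:
h1 = residual_entropy_exp(2.0, t=0.5)
h2 = residual_entropy_exp(2.0, t=3.0)
```

That the residual entropy is constant in t characterizes the exponential among continuous lifetime laws, which is the flavor of the uniqueness results the abstract refers to.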
2.
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 to gauge the relative performance of several methods of integration. The methods employed are the general-purpose Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg, and Monte Carlo methods, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bivariate and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (applied to the normal density they are even more accurate than the special-purpose methods designed for it), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorized over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely solvable.
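The Gauss-Legendre approach can be sketched for the univariate normal density as follows (a minimal illustration with our own function names; the bivariate and trivariate cases in the experiments need product rules or the cited special-purpose methods). The closed-form reference value comes from the error function.

```python
import math
import numpy as np

def gauss_legendre(f, a, b, n=20):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Map the nodes from the reference interval [-1, 1] onto [a, b].
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

def phi(x):
    """Standard normal density."""
    return np.exp(-x**2 / 2.0) / math.sqrt(2.0 * math.pi)

approx = gauss_legendre(phi, -3.0, 3.0)
exact = math.erf(3.0 / math.sqrt(2.0))   # P(-3 < Z < 3) in closed form
```

Because the integrand is smooth, even a modest 20-point rule reproduces the closed-form value essentially to machine precision, which is consistent with the accuracy the experiments report for the Gauss-Legendre family.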
3.
Networks of ambient monitoring stations are used to monitor environmental pollution fields such as those for acid rain and air pollution. Such stations provide regular measurements of pollutant concentrations. Because the networks are established for a variety of purposes at various times, several stations measuring different subsets of pollutant concentrations can often be found in compact geographical regions. The problem of statistically combining these disparate information sources into a single 'network' then arises. Capitalizing on the efficiencies so achieved leads to the secondary problem of extending this network. The subject of this paper is a set of 31 air pollution monitoring stations in southern Ontario. Each regularly measures a particular subset of ionic sulphate, sulphite, nitrite, and ozone, but this subset varies from station to station; for example, only two stations measure all four, and some measure just one. We describe a Bayesian framework for integrating the measurements of these stations to yield a spatial predictive distribution for unmonitored sites and for unmeasured concentrations at existing stations. Furthermore, we show how the network can be extended using an entropy maximization criterion. The methods assume that the multivariate response field being measured has a joint Gaussian distribution conditional on its mean and covariance function. A conjugate prior is used for these parameters, some of its hyperparameters being fitted empirically.
4.
This article introduces a five-parameter lifetime model called the McDonald Gompertz (McG) distribution, which extends the Gompertz, generalized Gompertz, generalized exponential, beta Gompertz, and Kumaraswamy Gompertz distributions, among several other models. The hazard function of the new distribution can be increasing, decreasing, upside-down bathtub, or bathtub shaped. We obtain several properties of the McG distribution, including moments, entropies, quantiles, and generating functions. We provide the density function of the order statistics and their moments. Parameter estimation is based on the usual maximum likelihood approach. We also provide the observed information matrix and discuss inference issues. The flexibility and usefulness of the new distribution are illustrated by means of applications to two real datasets.
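The baseline Gompertz law underlying all of these extensions has a simple closed form, h(t) = b·exp(c·t) with survival S(t) = exp(−(b/c)(exp(c·t) − 1)). A brief sketch (function names are ours) shows the monotone hazard shapes; the bathtub and upside-down-bathtub shapes arise only in the extended families such as McG.

```python
import math

def gompertz_hazard(t, b, c):
    """Baseline Gompertz hazard h(t) = b * exp(c * t), with b > 0."""
    return b * math.exp(c * t)

def gompertz_survival(t, b, c):
    """Gompertz survival S(t) = exp(-(b/c) * (exp(c*t) - 1)), for c != 0."""
    return math.exp(-(b / c) * (math.exp(c * t) - 1.0))

# c > 0 gives an increasing hazard; c < 0 gives a decreasing one.
increasing = [gompertz_hazard(t, 0.1, 0.5) for t in (0.0, 1.0, 2.0)]
decreasing = [gompertz_hazard(t, 0.1, -0.5) for t in (0.0, 1.0, 2.0)]
```

The McG construction composes this baseline with a McDonald (generalized beta) transform, which is what buys the additional non-monotone hazard shapes the abstract lists.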
5.
In this article, a new consistent estimator of Verma's entropy is introduced. Based on it, we establish an entropy test built on the Verma Kullback–Leibler discrimination methodology. The results are used to construct goodness-of-fit tests for the normal and exponential distributions. The root mean square errors, critical values, and powers against some alternatives are obtained by simulation, and the proposed test is compared with other tests.
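The abstract gives no formula for the new estimator. As a stand-in, the classical Vasicek spacing estimator of Shannon entropy, on which many entropy-based goodness-of-fit tests build, can be sketched as follows; the window choice m ≈ √n is a common heuristic, not from the paper.

```python
import math
import random

def vasicek_entropy(sample, m=None):
    """Vasicek (1976) spacing estimator of Shannon entropy:
    H = (1/n) * sum_i log( n * (x_(i+m) - x_(i-m)) / (2m) ),
    with order-statistic indices clamped at the sample boundaries."""
    n = len(sample)
    if m is None:
        m = max(1, round(math.sqrt(n)))   # heuristic window width
    x = sorted(sample)
    total = 0.0
    for i in range(n):
        lo = x[max(i - m, 0)]
        hi = x[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]
h = vasicek_entropy(sample)
true_h = 0.5 * math.log(2 * math.pi * math.e)   # entropy of N(0, 1)
```

A goodness-of-fit test of this type rejects normality when the estimated entropy falls too far below the maximum entropy attainable at the sample variance; critical values are obtained by simulation, as in the abstract.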
6.
In addition to his contributions to biostatistics and clinical trials, Paul Meier had a long-term interest in the legal applications of statistics. As part of this, he had extensive experience as a statistical consultant. Legal consulting can be a minefield, but as a result of his background, Paul had excellent advice to give to those starting out on how to function successfully in this environment.
7.
The Kullback–Leibler information has been used to establish goodness-of-fit test statistics, which have been shown to perform very well (e.g., Arizono & Ohta, 1989; Ebrahimi et al., 1992). In this paper, we propose a censored Kullback–Leibler information that generalizes the Kullback–Leibler information to the censored case. We then establish a goodness-of-fit test statistic based on the censored Kullback–Leibler information for type II censored data, and compare it with some existing test statistics for the exponential and normal distributions.
8.
This paper considers decision problems where: (1) the exact probability distribution over the states of nature is not precisely known, but certain prior information is available about the possibilities of these outcomes; (2) a prior distribution over the states of nature is known, but new constraint information about the probabilities becomes available. The maximum entropy principle asserts that the probability distribution with maximum entropy, satisfying the prior knowledge, should be used in the decision problem. The minimum cross-entropy principle says that the posterior distribution is the one which minimizes cross-entropy, subject to the new constraint information. The entropy principles have not gone uncriticized, and this literature, together with that justifying the principles, is surveyed. Both principles are illustrated in a number of situations where the distribution is either discrete or continuous. The discrete distribution case with prior interval estimates based on expert opinions is considered in detail.
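A concrete instance of the maximum entropy principle with moment constraints is Jaynes' die example: among all distributions on the faces {1,…,6} with a given mean, the entropy-maximizing one is exponential in the face value. The sketch below (function names are ours) finds the Lagrange multiplier by bisection, exploiting the fact that the constrained mean is increasing in the multiplier.

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Max-entropy distribution on faces 1..6 with a fixed mean.
    The solution has the form p_k proportional to exp(lam * k)."""
    faces = range(1, 7)

    def mean_for(lam):
        w = [math.exp(lam * k) for k in faces]
        z = sum(w)
        return sum(k * wk for k, wk in zip(faces, w)) / z

    # The mean is strictly increasing in lam, so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

p = maxent_die(4.5)   # a loaded die whose average face is 4.5
```

For a target mean above 3.5 the multiplier is positive, so the probabilities increase with the face value; with a mean of exactly 3.5 the solution reduces to the uniform distribution, as the principle requires when no constraint binds.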
9.
This essay aims to define the role of entropy, in particular the maximum entropy criterion, in decision analysis and information economics. By considering the average opportunity loss interpretation, the basic hypothesis of Shannon's derivation can be derived from properties of decision problems. Using the Bayes boundary representation, it can be shown that selecting a single probability distribution from a set by the maximum entropy criterion corresponds to a minimax criterion for decision making. Since problems of randomly accessing, storing, and communicating information can often be stated as coding problems, this result might be used to develop strategies for minimizing retrieval time or communication costs.
10.
Thomas Pynchon, a representative writer of American postmodernism, is adept at weaving the natural sciences into the humanities. In his short story "Entropy" he applies the second law of thermodynamics as a metaphor for the human world: it figures a fragmented postmodern culture and a chaotic, disordered postmodern society, and conveys the disappearance of order. The story vividly depicts the blind, restless, dislocated lives and the vague, illusory pursuits of ordinary people, and suggests that the destructive release of human energy will ultimately increase entropy, a metaphor for the frightening prospect of the heat death of postmodern culture.