Similar documents
20 similar documents found.
1.
A compendium to information theory in economics and econometrics (cited 5 times in total: 0 self-citations, 5 citations by others)

2.
In this paper, we suggest an extension of the cumulative residual entropy (CRE) and call it the generalized cumulative entropy. The proposed entropy not only retains attributes of existing uncertainty measures but also possesses the absolute homogeneity property with unbounded support, which the CRE does not have. We demonstrate its mathematical properties, including the entropy of order statistics and the principle of maximum general cumulative entropy. We also introduce the cumulative ratio information as a measure of discrepancy between two distributions and examine its application to a goodness-of-fit test of the logistic distribution. A simulation study shows that test statistics based on the cumulative ratio information have statistical power comparable to that of competing test statistics.
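As a minimal illustration of the baseline quantity being generalized here, the following Python sketch estimates the cumulative residual entropy of a non-negative sample by plugging the empirical survival function into CRE = -∫ F̄(x) log F̄(x) dx. The function name and the exponential test case are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: empirical cumulative residual entropy (CRE) of a
# non-negative sample, estimated by plugging the empirical survival
# function into -\int \bar F(x) log \bar F(x) dx between order statistics.
import numpy as np

def empirical_cre(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    surv = (n - np.arange(1, n)) / n   # empirical survival on (x_(i), x_(i+1)]
    gaps = np.diff(x)                  # spacings x_(i+1) - x_(i)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.where(surv > 0, -surv * np.log(surv), 0.0)
    return float(np.sum(integrand * gaps))

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=5000)
print(empirical_cre(sample))   # for an exponential the CRE equals the mean, so ~2
```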

3.
A technique is given for drawing valid inferences in cases where performance characteristics of statistical procedures (e.g. power for a test, or probability of a correct selection for a selection procedure) depend upon unknown parameters (e.g. an unknown variance). The technique is especially useful in situations where sample sizes are small (e.g. in many medical trials); the “usual” approximate procedures are found to be misleading in such cases.

4.
In response to growing concern about the reliability and reproducibility of published science, researchers have proposed adopting measures of “greater statistical stringency,” including suggestions to require larger sample sizes and to lower the highly criticized “p < 0.05” significance threshold. While pros and cons are vigorously debated, there has been little to no modeling of how adopting these measures might affect what type of science is published. In this article, we develop a novel optimality model that, given current incentives to publish, predicts a researcher’s most rational use of resources in terms of the number of studies to undertake, the statistical power to devote to each study, and the desirable prestudy odds to pursue. We then develop a methodology that allows one to estimate the reliability of published research by considering a distribution of preferred research strategies. Using this approach, we investigate the merits of adopting measures of “greater statistical stringency” with the goal of informing the ongoing debate.
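For context, the sketch below evaluates the textbook relationship (in the spirit of Ioannidis, 2005) between pre-study odds, power, and the significance threshold and the resulting reliability of published positive findings; it is an assumed simplification, not the article's optimality model.

```python
# Hedged sketch: reliability (positive predictive value) of a "significant"
# finding given pre-study odds R, power, and significance level alpha.
# PPV = power*R / (power*R + alpha); the numbers are illustrative.
def reliability(prestudy_odds, power, alpha):
    r = prestudy_odds
    return power * r / (power * r + alpha)

for alpha in (0.05, 0.005):
    print(alpha, round(reliability(prestudy_odds=0.1, power=0.8, alpha=alpha), 3))
# lowering alpha from 0.05 to 0.005 raises PPV from ~0.62 to ~0.94 in this example
```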

5.
Consider the case of classifying an incoming message as one of two known p-dimensional signals or as pure noise. Let the noise covariance matrix (assumed to be the same in all three cases) be unknown. We consider the problem of estimating the “realized signal-to-noise ratio matrix,” an index of discriminatory power, under various loss functions. Optimum estimators are obtained under these loss functions. Finally, an attempt is made to provide a lower confidence bound for the realized signal-to-noise ratio matrix. In the process, the probability distribution of the smaller eigenvalue of a 2 × 2 confluent hypergeometric random matrix is obtained.

6.
A strategic evaluation of information technology productivity (cited 5 times in total: 0 self-citations, 5 citations by others)
Li Xiaomao, Statistical Research (《统计研究》), 2000, 17(10): 17-22
Information technology is the flagship of high technology and the most dynamic productive force, with strong penetration and large multiplier effects. It has enormous potential to cut costs, multiply corporate performance, and strengthen firms' competitiveness. Yet many scholars' empirical analyses of IT productivity fail to support this conclusion significantly, or cannot reject the hypothesis that "information technology contributes nothing to total output." U.S. labor productivity growth was 3% in the 1960s and fell to 1% in the 1990s; over the same period, U.S. IT investment increased sharply. Citing similar trends in other U.S. economic indicators, Robert Solow published, in his New York Times book-review column, a simple but hotly debated dictum: "We see the computer age eve…"

7.
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produce a “non-statistically significant result” then investigators are frequently tempted to ask the question “Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?” The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
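A hedged sketch of the kind of calculation at issue: approximate power of a two-sided, two-sample comparison of means under a normal approximation, first at the planned sample size and then at the smaller achieved size. The effect size, standard deviation, and sample sizes are invented for illustration, and this is not the article's own code.

```python
# Hedged sketch: normal-approximation power for a two-sided, two-sample
# comparison of means with n observations per group.
from scipy.stats import norm

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * (2.0 / n_per_group) ** 0.5)   # non-centrality
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# planned: detect delta = 5 with sigma = 12 at ~90% power -> n ~ 122 per group
print(round(power_two_sample(5, 12, 122), 2))   # ~0.90
# only 60 per group were actually recruited -> the "post hoc" power is much lower
print(round(power_two_sample(5, 12, 60), 2))    # ~0.63
```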

8.
Goodness-of-fit tests are proposed for unimodal densities and U-shaped hazards. The tests are based on maximum-product-of-spacings estimators and incorporate unimodality or U-shapedness using order restrictions. A slightly improved “maximum violator” algorithm is given for computing the order-restricted estimates and test statistics. Modified spacings such as “k-spacings”, which may actually increase power, ensure computational feasibility when sample sizes are large. Simulations demonstrate that for samples of size less than twenty, the use of order restrictions can increase power, even with modified spacings. The proposed methods can also be used as approximations when the null hypothesis is specified only up to unknown parameters that must be estimated.
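To make the basic objective concrete, the sketch below fits an exponential rate by unrestricted maximum product of spacings; the paper's order-restricted "maximum violator" algorithm and k-spacings are not reproduced, and the exponential example is an assumption for illustration.

```python
# Hedged sketch: an (unrestricted) maximum-product-of-spacings fit of an
# exponential rate, maximizing the sum of log spacings of the fitted cdf.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

def neg_log_spacings(rate, x_sorted):
    cdf = expon.cdf(x_sorted, scale=1.0 / rate)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    spacings = np.clip(spacings, 1e-300, None)   # guard against zero spacings
    return -np.sum(np.log(spacings))

rng = np.random.default_rng(1)
x = np.sort(rng.exponential(scale=2.0, size=200))     # true rate = 0.5
fit = minimize_scalar(neg_log_spacings, bounds=(1e-3, 10.0), args=(x,),
                      method="bounded")
print(fit.x)   # MPS estimate of the rate; should be near 0.5
```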

9.
In this paper, the estimation of parameters for a generalized inverted exponential distribution based on a progressively first-failure type-II right-censored sample is studied. An expectation–maximization (EM) algorithm is developed to obtain maximum likelihood estimates of the unknown parameters as well as the reliability and hazard functions. Using the missing-value principle, the Fisher information matrix is obtained for constructing asymptotic confidence intervals. An exact interval and an exact confidence region for the parameters are also constructed. Bayesian procedures based on Markov chain Monte Carlo methods are developed to approximate the posterior distribution of the parameters of interest and to derive the corresponding credible intervals. The performances of the maximum likelihood and Bayes estimators are compared in terms of their mean squared errors through a simulation study. Furthermore, Bayes two-sample point and interval predictors are obtained when the future sample consists of ordinary order statistics. The squared error, linear-exponential, and general entropy loss functions are considered for obtaining the Bayes estimators and predictors. To illustrate the discussed procedures, a set of real data is analyzed.
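As a simplified, complete-sample counterpart of the estimation problem described here, the following sketch maximizes the generalized inverted exponential log-likelihood directly with a generic optimizer, taking the usual density f(x; a, λ) = (aλ/x²)·e^{-λ/x}·(1 − e^{-λ/x})^{a−1}. The censoring scheme, EM steps, and Bayesian procedures of the paper are not reproduced; all numbers are invented.

```python
# Hedged sketch: direct maximum likelihood for the generalized inverted
# exponential distribution on a complete (uncensored) simulated sample.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    a, l = np.exp(params)                      # log scale enforces positivity
    z = np.exp(-l / x)
    return -np.sum(np.log(a) + np.log(l) - 2 * np.log(x) - l / x
                   + (a - 1) * np.log1p(-z))

# simulate via the inverse cdf of F(x) = 1 - (1 - exp(-l/x))^a
rng = np.random.default_rng(5)
a_true, l_true = 2.0, 1.5
u = rng.uniform(size=500)
x = -l_true / np.log1p(-(1 - u) ** (1 / a_true))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x,), method="Nelder-Mead")
print(np.exp(fit.x))   # estimates of (a, l), near (2.0, 1.5)
```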

10.
Using divergence measures based on entropy functions, a procedure to test statistical hypotheses is proposed. Replacing the parameters by suitable estimators in the expression of the divergence measure, the test statistics are obtained. Asymptotic distributions for these statistics are given in several cases when maximum likelihood estimators are considered, so they can be used to construct confidence intervals and to test statistical hypotheses based on one or more samples. These results can also be applied to multinomial populations. Tests of goodness of fit and tests of homogeneity can be constructed.
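The simplest member of this family is the Kullback-Leibler divergence applied to a multinomial sample, for which 2n·D is the likelihood-ratio (G) statistic with an asymptotic chi-square distribution. The sketch below, with invented counts, shows only this special case, not the article's broader class of entropy-based divergences.

```python
# Hedged sketch: a divergence-based goodness-of-fit test for a multinomial.
# 2*n*D_KL is the likelihood-ratio (G) statistic, asymptotically chi-square
# with k-1 degrees of freedom under the hypothesized cell probabilities p0.
import numpy as np
from scipy.stats import chi2

def kl_divergence_test(counts, p0):
    counts = np.asarray(counts, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    n = counts.sum()
    phat = counts / n
    mask = phat > 0
    d_kl = np.sum(phat[mask] * np.log(phat[mask] / p0[mask]))
    stat = 2.0 * n * d_kl
    pval = chi2.sf(stat, df=len(p0) - 1)
    return stat, pval

print(kl_divergence_test([18, 22, 28, 32], [0.25, 0.25, 0.25, 0.25]))
```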

11.
12.
This article studies maximum entropy spectrum estimation. After a brief discussion of how to select the appropriate constraints and the objective function, we decide to choose constraints containing only the first four sample moments and, consequently, to employ the second-order spectral entropy as the objective function. The resulting (maximum entropy) spectral estimate is the power spectral density of an ARMA sequence. Examples comparing our proposal with the traditional maximum entropy spectral estimate follow at the end.
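For comparison, here is a sketch of the "traditional" maximum entropy spectral estimate mentioned at the end: an AR(p) model fitted by the Yule-Walker equations, whose power spectral density maximizes entropy given the first p+1 autocovariances. The article's moment constraints, second-order spectral entropy, and ARMA spectrum are not reproduced, and the test signal is an assumption.

```python
# Hedged sketch: classical maximum entropy (AR/Yule-Walker) spectral estimate.
import numpy as np
from scipy.linalg import toeplitz

def mem_spectrum(x, order, n_freq=256):
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased sample autocovariances r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])   # Yule-Walker
    sigma2 = r[0] - np.dot(a, r[1:order + 1])                  # innovation variance
    freqs = np.linspace(0, 0.5, n_freq)
    e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    denom = np.abs(1.0 - e @ a) ** 2
    return freqs, sigma2 / denom

rng = np.random.default_rng(2)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(size=t.size)
f, s = mem_spectrum(x, order=8)
print(f[np.argmax(s)])   # frequency of the largest peak, near 0.1 cycles/sample
```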

13.
Capacity utilization measures have traditionally been constructed as indexes of actual, as compared to “potential,” output. This potential or capacity output (Y*) can be represented within an economic model of the firm as the tangency between the short- and long-run average cost curves. Economic theoretical measures of capacity utilization (CU) can then be characterized as Y/Y* where Y is the realized level of output. These quantity or primal CU measures allow for economic interpretation; they provide explicit inference as to how changes in exogenous variables affect CU. Additional information for analyzing deviations from capacity production can be obtained by assessing the “dual” cost of the gap.

In this article the definitions and representations of primal-output and dual-cost CU measures are formalized within a dynamic model of a monopolistic firm. As an illustration of this approach to characterizing CU measures, a model is estimated for the U.S. automobile industry, 1959–1980, and primal and dual CU indexes are constructed. These indexes are then applied, using the dual-cost measure, to adjust productivity measures for “disequilibrium.”
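A toy numerical illustration of the primal and dual measures, under the simplifying assumption that capacity output Y* is the minimum point of a U-shaped short-run average cost curve; the cost parameters are invented and the article's dynamic monopoly model is not reproduced.

```python
# Hedged toy illustration: primal (quantity) and dual (cost) capacity
# utilization, with capacity output Y* taken as the minimum of a U-shaped
# short-run average cost curve. Purely illustrative numbers.
import numpy as np

F, c, d = 100.0, 2.0, 0.05          # fixed cost, linear and quadratic cost terms

def avg_cost(y):
    return F / y + c + d * y         # U-shaped short-run average cost

y_star = np.sqrt(F / d)              # output minimizing average cost ("capacity")
y_actual = 35.0                      # realized output

primal_cu = y_actual / y_star                          # Y / Y*
dual_gap = avg_cost(y_actual) - avg_cost(y_star)       # extra average cost of the gap
print(round(y_star, 1), round(primal_cu, 2), round(dual_gap, 2))
# capacity ~44.7, utilization ~0.78, ~0.14 extra average cost per unit at Y
```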

14.
In the information theory literature, there exist many well-known measures of entropy suitable for entropy optimization principles, with applications in different disciplines of science and technology. The object of this article is to develop a new generalized measure of entropy and to establish the relation between entropy and queueing theory. To fulfill this aim, we make use of the maximum entropy principle, which provides the most uncertain probability distribution subject to constraints expressed by mean values.
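The classical special case of this link is easy to state: among distributions on the non-negative integers with a prescribed mean queue length L, entropy is maximized by a geometric law with q = L/(1+L), which is also the stationary M/M/1 queue-length distribution at utilization ρ = q. The sketch below only evaluates this known special case, not the article's new generalized entropy measure.

```python
# Hedged sketch: maximum entropy distribution on {0, 1, 2, ...} subject to a
# mean constraint is geometric, p_n = (1 - q) q^n with q = L / (1 + L),
# matching the M/M/1 stationary queue-length distribution at rho = q.
import numpy as np

def maxent_queue_length(mean_length, n_max=50):
    q = mean_length / (1.0 + mean_length)
    n = np.arange(n_max + 1)
    return (1.0 - q) * q ** n

p = maxent_queue_length(mean_length=3.0)               # corresponds to rho = 0.75
print(round(float(np.dot(np.arange(p.size), p)), 2))   # ~3.0: the mean is recovered
```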

15.
We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing-value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney-Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.

16.
It is often of interest to find the maximum or near maxima among a set of vector-valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative-risk “hotspots” where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for relative risks, and their behaviour is illustrated with real and simulated data.
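A toy illustration of the underlying problem, under an assumed independent normal-normal model rather than the article's hidden-Markov random-field: plugging posterior means into the maximum of an ensemble is biased toward zero by shrinkage, while averaging the maximum over posterior draws shrinks it less severely. The weighted-ranks loss itself is not reproduced.

```python
# Hedged toy illustration: plug-in of posterior means into a nonlinear
# functional (the ensemble maximum) versus averaging the functional over
# posterior draws, in a conjugate normal-normal model.
import numpy as np

rng = np.random.default_rng(3)
k, sigma = 200, 1.0
theta = rng.normal(0.0, 1.0, size=k)             # true ensemble of parameters
y = theta + rng.normal(0.0, sigma, size=k)       # one noisy observation each

# posterior for each theta_i given a N(0,1) prior and sigma = 1: N(y_i/2, 1/2)
post_mean = y / 2.0
post_draws = post_mean + rng.normal(0.0, np.sqrt(0.5), size=(4000, k))

print(round(theta.max(), 2))                     # true maximum of the ensemble
print(round(post_mean.max(), 2))                 # plug-in: shrunk toward zero
print(round(post_draws.max(axis=1).mean(), 2))   # posterior mean of the maximum
```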

17.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
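A minimal sketch of the style of analysis being advocated, assuming a normal likelihood and a normal prior synthesized from earlier trials; all numbers are invented, and the probability statements are the kind of output the authors describe, not results from any real trial.

```python
# Hedged sketch: conjugate normal update of a treatment-effect prior built
# from earlier trials, followed by posterior probability statements.
from scipy.stats import norm

def posterior_effect(prior_mean, prior_se, est, se):
    w_prior, w_data = 1.0 / prior_se**2, 1.0 / se**2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, post_var**0.5

# "prior" synthesized from earlier trials; Phase 3 estimate and standard error
m, s = posterior_effect(prior_mean=2.0, prior_se=1.5, est=1.2, se=0.9)
print(round(m, 2), round(s, 2))
print(round(norm.sf(0.0, loc=m, scale=s), 3))   # P(effect > 0 | all data)
print(round(norm.sf(1.0, loc=m, scale=s), 3))   # P(effect exceeds 1 unit)
```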

18.
This paper describes a computer program GTEST for designing group testing experiments for classifying each member of a population of items as “good” or “defective”. The outcome of a test on a group of items is either “negative” (if all items in the group are good) or “positive” (if at least one of the items is defective, but it is not known which). GTEST is based on a Bayesian approach. At each stage, it attempts to maximize (nearly) the expected reduction in the “entropy”, which is a quantitative measure of the amount of uncertainty about the state of the items. The user controls the procedure through specification of the prior probabilities of being defective, restrictions on the construction of the test group, and priorities that are assigned to the items. The nominal prior probabilities can be modified adaptively, to reduce the sensitivity of the procedure to the proportion of defectives in the population.
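One way to make the entropy criterion concrete: with independent prior defect probabilities and a test whose outcome is a deterministic function of the item states, the expected entropy reduction from one group test equals the binary entropy of the probability that every item in the group is good. The sketch below implements that calculation for an invented set of probabilities; it mirrors the idea, not the GTEST program itself.

```python
# Hedged sketch: expected entropy reduction of a single group test under
# independent prior defect probabilities, and a brute-force greedy choice.
import numpy as np
from itertools import combinations

def expected_entropy_reduction(p_defective, group):
    q_negative = np.prod([1.0 - p_defective[i] for i in group])
    if q_negative in (0.0, 1.0):
        return 0.0
    return -(q_negative * np.log2(q_negative)
             + (1 - q_negative) * np.log2(1 - q_negative))

p = {0: 0.02, 1: 0.05, 2: 0.10, 3: 0.20, 4: 0.30}
best = max((g for r in range(1, len(p) + 1) for g in combinations(p, r)),
           key=lambda g: expected_entropy_reduction(p, g))
print(best, round(expected_entropy_reduction(p, best), 3))
```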

19.
Robust estimation of location vectors and scatter matrices is studied under the assumption that the unknown error distribution is spherically symmetric in a central region and completely unknown in the tail region. A precise formulation of the model is given, an analysis of the identifiable parameters in the model is presented, and consistent initial estimators of the identifiable parameters are constructed. Consistent and asymptotically normal M-estimators are constructed (solved iteratively beginning with the initial estimates) based on “influence functions” which vanish outside specified compact sets. Finally M-estimators which are asymptotically minimax (in the sense of Huber) are derived.
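For the univariate location case, the sketch below computes a Huber-type M-estimator by iteratively reweighted averaging, as a simplified stand-in for the multivariate M-estimators with compactly supported influence functions studied in the paper; the contaminated sample is invented.

```python
# Hedged sketch: Huber-type M-estimator of location via iteratively
# reweighted averaging, started from the median with MAD-based scale.
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    x = np.asarray(x, float)
    mu = np.median(x)                                  # robust starting value
    scale = np.median(np.abs(x - mu)) / 0.6745         # MAD-based scale
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))   # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(4)
clean = rng.normal(0.0, 1.0, 950)
outliers = rng.normal(15.0, 1.0, 50)                   # heavy tail contamination
data = np.concatenate([clean, outliers])
print(round(np.mean(data), 2), round(huber_location(data), 2))
# the sample mean is pulled toward the outliers; the M-estimate stays near 0
```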

20.
A new four-parameter distribution called the exponentiated power Lindley–Poisson distribution, an extension of the power Lindley and Lindley–Poisson distributions, is introduced. Statistical properties of the distribution, including the shapes of the density and hazard functions, moments, entropy measures, and the distribution of order statistics, are given. The maximum likelihood technique is used to estimate the parameters. A simulation study is conducted to examine the bias and mean squared error of the maximum likelihood estimators and the width of the confidence interval for each parameter. Finally, applications to real data sets are presented to illustrate the usefulness of the proposed distribution.
