Similar Articles
A total of 20 similar articles were found.
1.
2.
In this paper, we suggest an extension of the cumulative residual entropy (CRE) and call it the generalized cumulative entropy. The proposed entropy not only retains attributes of the existing uncertainty measures but also possesses the absolute homogeneity property with unbounded support, which the CRE does not have. We demonstrate its mathematical properties, including the entropy of order statistics and the principle of maximum general cumulative entropy. We also introduce the cumulative ratio information as a measure of discrepancy between two distributions and examine its application to a goodness-of-fit test of the logistic distribution. A simulation study shows that the test statistics based on the cumulative ratio information have statistical power comparable to that of competing test statistics.
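For reference (this is the baseline quantity, not the paper's new measure), the cumulative residual entropy being extended is commonly defined, for a nonnegative random variable $X$ with survival function $\bar F(x)=P(X>x)$, as
$$\mathcal{E}(X) = -\int_0^{\infty} \bar F(x)\,\log \bar F(x)\,dx ;$$
the specific generalized form proposed in the paper is not reproduced in the abstract.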

3.
We obtain semiparametric efficiency bounds for estimation of a location parameter in a time series model where the innovations are stationary and ergodic conditionally symmetric martingale differences but otherwise possess general dependence and distributions of unknown form. We then describe an iterative estimator that achieves this bound when the conditional density functions of the sample are known. Finally, we develop a “semi-adaptive” estimator that achieves the bound when these densities are unknown to the investigator. This estimator employs nonparametric kernel estimates of the densities. Monte Carlo results are reported.

4.
Normalization of the bispectrum has been treated differently in the engineering signal processing literature from what is standard in the statistical time series literature. In the signal processing literature, normalization has been treated as a matter of definition and therefore a matter of choice and convenience. In particular, a number of investigators favor the normalization of Kim and Powers (Phys. Fluids 21 (8) (1978) 1452) or the “bicoherence” of Kim and Powers (IEEE Trans. Plasma Sci. PS-7 (2) (1979) 120), because they believe it produces a result guaranteed to be bounded by zero and one, and hence a result that is easily interpretable as the fraction of signal energy due to quadratic coupling. In this contribution, we show that wrong decisions can be obtained by relying on the (1979) normalization, which is always bounded by one. This “bicoherence” depends on the resolution bandwidth of the sample bispectrum. The choice of normalization is not solely a matter of definition, and this choice has empirical consequences. The term “bicoherence spectrum” is misleading since it is really a skewness spectrum. A statistical normalization is presented that provides a measure of quadratic coupling for stationary random nonlinear processes with finite dependence.
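For context, the Kim and Powers (1979) squared bicoherence criticized above is usually written, with $X(f)$ the finite Fourier transform of a data segment and $E[\cdot]$ denoting an average over segments, as
$$b^2(f_1,f_2) = \frac{\bigl|E[X(f_1)X(f_2)X^{*}(f_1+f_2)]\bigr|^2}{E\bigl[|X(f_1)X(f_2)|^2\bigr]\,E\bigl[|X(f_1+f_2)|^2\bigr]},$$
which the Cauchy–Schwarz inequality confines to $[0,1]$; the statistical normalization proposed in the paper is a different quantity.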

5.
Hahn [Hahn, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica 66:315-331] derived the semiparametric efficiency bounds for estimating the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The variance of ATET depends on whether the propensity score is known or unknown. Hahn attributes this to “dimension reduction.” In this paper, an alternative explanation is given: Knowledge of the propensity score improves upon the estimation of the distribution of the confounding variables.
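As commonly stated (a reference sketch, not a result of this paper), Hahn's efficiency bound for the ATE is
$$V_{\mathrm{ATE}} = E\!\left[\frac{\sigma_1^2(X)}{p(X)} + \frac{\sigma_0^2(X)}{1-p(X)} + \bigl(\tau(X)-\tau\bigr)^2\right],$$
where $p(X)$ is the propensity score, $\sigma_w^2(X)$ are the conditional outcome variances and $\tau(X)$ is the conditional treatment effect; it is the corresponding ATET bound that changes according to whether $p(X)$ is known.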

6.
This paper considers computation of fitted values and marginal effects in the Box-Cox regression model. Two methods, (1) the “smearing” technique suggested by Duan (see Ref. [10]) and (2) direct numerical integration, are examined and compared with the “naive” method often used in econometrics.
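A sketch of Duan's smearing retransformation in this setting (assuming the Box-Cox parameter $\hat\lambda \neq 0$ and positive retransformed arguments): with residuals $\hat e_i$ from the transformed-scale regression,
$$\widehat{E}[\,y \mid x\,] = \frac{1}{n}\sum_{i=1}^{n}\bigl(1 + \hat\lambda\,(x'\hat\beta + \hat e_i)\bigr)^{1/\hat\lambda},$$
whereas the “naive” method retransforms $x'\hat\beta$ alone and thereby ignores the error distribution.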

7.
Many economic variables are fractionally integrated of order d, FI(d), with unequal d's. For modeling their long-run equilibria, we explain why the usual cointegration fails to exist and why unit root type tests have low power. Hence, we propose a looser concept called “tie integration”. A new numerical minimization problem reveals the value of d in the absence of tie integration, denoted by d_null. We use the d estimated from the residuals of a regression, together with d_null, to devise a new index called the strength of tie (SOT). An application quantifies market responsiveness.
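For readers unfamiliar with the notation, FI(d) means $(1-L)^d x_t = u_t$, with $L$ the lag operator and $u_t$ an I(0) process; conventional cointegration presumes a common $d$ across series so that some linear combination has a lower order of integration, which is why it fails when the d's are unequal.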

8.
This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the “multicollinearity problem” identified by Nawata (1993) is severe, (i) the t-test based on the Heckman-Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
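For context, the regression-based t-test referred to above is typically the t-statistic on the inverse Mills ratio term in Heckman's second-step equation,
$$y_i = x_i'\beta + \beta_\lambda\,\hat\lambda_i + v_i, \qquad \hat\lambda_i = \frac{\phi(z_i'\hat\gamma)}{\Phi(z_i'\hat\gamma)},$$
with $\hat\gamma$ from a first-step probit and $H_0\!:\beta_\lambda = 0$ corresponding to no selection bias (this is the standard formulation, not a detail taken from the paper).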

9.
This paper examines some of the economic and econometric issues that arise in attempting to measure the degree of concentration in an industry and its dynamic evolution. A general axiomatic basis is developed. We offer new measures of concentration over aggregated periods of time and provide a sound statistical basis for inferences. Concentration is one aspect of the problem of measuring “market power” within an industry. Modern economic analysis of antitrust issues does not focus only on the level of concentration, but still must examine the issue carefully. We contrast concentration at a point in time with a dynamic profile of change in the distribution of shares in a given market. Our methods are demonstrated with an application to the US steel industry.
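The paper's own measures are not reproduced in the abstract; a familiar point-in-time benchmark against which such measures are usually compared is the Herfindahl–Hirschman index, $HHI = \sum_{i} s_i^2$, computed from the firms' market shares $s_i$.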

10.
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in Tsodikov [2003. Semiparametric models: a generalized self-consistency approach. J. Roy. Statist. Soc. Ser. B Statist. Methodology 65(3), 759–774] and applied to a survival analysis problem. We extend the framework to obtain second-order results such as the information matrix and properties of the variance. The multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker [1994. The Multinomial–Poisson transformation. The Statistician 43, 495–504] to develop the Multinomial–Poisson (MP) transformation for a large variety of regression models with multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers [Lang, J., 1996. On the comparison of multinomial and Poisson log-linear models. J. Roy. Statist. Soc. Ser. B Statist. Methodology 58, 253–266] justifies the approach. Using the self-consistency framework we develop an alternative solution to multinomial model fitting that does not require augmenting parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial “missing data” (fake mixture). Lack of probabilistic interpretation at the “complete-data” level makes the use of the generalized self-consistency machinery essential.
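A sketch of the Multinomial–Poisson ("Poisson trick") equivalence referred to above: for cell counts $y_1,\dots,y_J$ with total $N$, maximizing the independent-Poisson log-likelihood
$$\ell_P(\beta,\phi) = \sum_{j=1}^{J}\bigl[y_j\bigl(\phi + \log\mu_j(\beta)\bigr) - e^{\phi}\mu_j(\beta)\bigr]$$
over the free intercept $\phi$ gives $e^{\hat\phi} = N/\sum_k\mu_k(\beta)$, and the resulting profile log-likelihood equals the multinomial kernel $\sum_j y_j \log\bigl(\mu_j(\beta)/\sum_k\mu_k(\beta)\bigr)$ up to a constant.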

11.
Long-run relations and common trends are discussed in terms of the multivariate cointegration model given in the autoregressive and the moving average form. The basic results needed for the analysis of I(1) and I(2) processes are reviewed and applied to Danish monetary data. The test procedures reveal that nominal money stock is essentially I(2). Long-run price homogeneity is supported by the data and imposed on the system. It is found that the bond rate is weakly exogenous for the long-run parameters and therefore acts as a driving trend. Using the nonstationarity property of the data, “excess money” is estimated and its effect on the other determinants of the system is investigated. In particular, it is found that “excess money” has no effect on price inflation.
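The autoregressive (error-correction) form referred to above is, in standard notation,
$$\Delta x_t = \alpha\beta' x_{t-1} + \sum_{i=1}^{k-1}\Gamma_i\,\Delta x_{t-i} + \mu + \varepsilon_t,$$
where the columns of $\beta$ span the long-run relations and $\alpha$ contains the adjustment coefficients; the I(2) analysis imposes a further reduced-rank restriction on this system.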

12.
The focus of geographical studies in epidemiology has recently moved towards looking for effects of exposures based on data taken at local levels of aggregation (i.e. small areas). This paper investigates how regression coefficients measuring covariate effects at the point level are modified under aggregation. Changing the level of aggregation can lead to completely different conclusions about exposure–effect relationships, a phenomenon often referred to as ecological bias. With partial knowledge of the within-area distribution of the exposure variable, the notion of maximum entropy can be used to approximate that part of the distribution that is unknown. From the approximation, an expression for the ecological bias is obtained; simulations and an example show that the maximum-entropy approximation is often better than other commonly used approximations.
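The maximum-entropy step rests on the standard result that, among densities satisfying moment constraints $\int g_k(x)\,f(x)\,dx = m_k$, $k=1,\dots,K$, entropy is maximized by the exponential-family form
$$f(x) \propto \exp\!\Bigl(\sum_{k=1}^{K}\lambda_k\,g_k(x)\Bigr),$$
with the multipliers $\lambda_k$ chosen so that the constraints hold; this is the generic device, not the paper's specific within-area approximation.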

13.
The maximum likelihood estimator (MLE) in nonlinear panel data models with fixed effects is widely understood (with a few exceptions) to be biased and inconsistent when T, the length of the panel, is small and fixed. However, there is surprisingly little theoretical or empirical evidence on the behavior of the estimator on which to base this conclusion. The received studies have focused almost exclusively on coefficient estimation in two binary choice models, the probit and logit models. In this note, we use Monte Carlo methods to examine the behavior of the MLE of the fixed effects tobit model. We find that the estimator's behavior is quite unlike that of the estimators of the binary choice models. Among our findings are that the location coefficients in the tobit model, unlike those in the probit and logit models, are unaffected by the “incidental parameters problem.” But a surprising result related to the disturbance variance emerges instead: the finite sample bias appears here rather than in the slopes. This has implications for estimation of marginal effects and asymptotic standard errors, which are also examined in this paper. The effects are also examined for the probit and truncated regression models, extending the range of received results in the first of these beyond the widely cited biases in the coefficient estimators.
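As a sketch of the setting (standard notation, not taken verbatim from the paper), the fixed effects tobit model is
$$y_{it}^{*} = \alpha_i + x_{it}'\beta + \varepsilon_{it}, \quad \varepsilon_{it}\sim N(0,\sigma^2), \qquad y_{it} = \max(0,\,y_{it}^{*}),$$
with one incidental parameter $\alpha_i$ per cross-sectional unit estimated jointly with $(\beta,\sigma)$ by maximum likelihood when T is small and fixed.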

14.
Stated preference choice experiments are routinely used in many areas from marketing to medicine. While results on the optimal choice sets to present for the forced choice setting have been determined in a variety of situations, no results have appeared to date on the optimal choice sets to use when either all choice sets contain a common base alternative or all choice sets contain a “none of these” option. These problems are considered in this paper.

15.
Using divergence measures based on entropy functions, a procedure to test statistical hypotheses is proposed. The test statistics are obtained by replacing the parameters with suitable estimators in the expression of the divergence measure. Asymptotic distributions for these statistics are given in several cases when maximum likelihood estimators are considered, so they can be used to construct confidence intervals and to test statistical hypotheses based on one or more samples. These results can also be applied to multinomial populations. Tests of goodness of fit and tests of homogeneity can be constructed.
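As one concrete instance of the construction (the abstract does not single out a particular divergence): with $k$ multinomial cells, estimated proportions $\hat p$ and a simple null $p_0$, the Kullback–Leibler divergence gives
$$D(\hat p\,\|\,p_0) = \sum_{i=1}^{k}\hat p_i\log\frac{\hat p_i}{p_{0i}}, \qquad 2n\,D(\hat p\,\|\,p_0)\ \xrightarrow{\ d\ }\ \chi^2_{k-1}\ \text{under } H_0,$$
which yields goodness-of-fit tests and confidence regions of the kind described.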

16.
Super-saturated designs in which the number of factors under investigation exceeds the number of experimental runs have been suggested for screening experiments initiated to identify important factors for future study. Most of the designs suggested in the literature are based on natural but ad hoc criteria. The “average s²” criterion introduced by Booth and Cox (Technometrics 4 (1962) 489) is a popular choice. Here, a decision theoretic approach is pursued, leading to an optimality criterion based on misclassification probabilities in a Bayesian model. In certain cases, designs optimal under the average s² criterion are also optimal for the new criterion. Necessary conditions for this to occur are presented. In addition, the new criterion often provides a strict preference between designs tied under the average s² criterion, which is advantageous in numerical search as it reduces the number of local minima.
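For reference, the Booth–Cox criterion is usually written, for an $n\times m$ design matrix $X$ with columns $x_1,\dots,x_m$ and $s_{ij}=x_i'x_j$, as
$$E(s^2) = \binom{m}{2}^{-1}\sum_{1\le i<j\le m} s_{ij}^2,$$
a design being $E(s^2)$-optimal if it minimizes this average of squared off-diagonal inner products.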

17.
In this paper, the estimation of parameters for a generalized inverted exponential distribution based on a progressively first-failure type-II right-censored sample is studied. An expectation–maximization (EM) algorithm is developed to obtain maximum likelihood estimates of the unknown parameters as well as the reliability and hazard functions. Using the missing value principle, the Fisher information matrix has been obtained for constructing asymptotic confidence intervals. An exact interval and an exact confidence region for the parameters are also constructed. Bayesian procedures based on Markov Chain Monte Carlo methods have been developed to approximate the posterior distribution of the parameters of interest and to deduce the corresponding credible intervals. The performances of the maximum likelihood and Bayes estimators are compared in terms of their mean-squared errors through a simulation study. Furthermore, Bayes two-sample point and interval predictors are obtained when the future sample consists of ordinary order statistics. The squared error, linear-exponential and general entropy loss functions have been considered for obtaining the Bayes estimators and predictors. To illustrate the discussed procedures, a set of real data is analyzed.
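For reference, the generalized inverted exponential distribution under study has distribution function
$$F(x;\alpha,\lambda) = 1 - \bigl(1 - e^{-\lambda/x}\bigr)^{\alpha}, \qquad x>0,\ \alpha,\lambda>0,$$
with shape parameter $\alpha$ and scale parameter $\lambda$.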

18.
This paper is concerned with joint tests of non-nested models and simultaneous departures from homoskedasticity, serial independence and normality of the disturbance terms. Locally equivalent alternative models are used to construct joint tests since they provide a convenient way to incorporate more than one type of departure from the classical conditions. The joint tests represent a simple asymptotic solution to the “pre-testing” problem in the context of non-nested linear regression models. Our simulation results indicate that the proposed tests have good finite sample properties.

19.
ABSTRACT

In the literature of information theory, there exist many well-known measures of entropy suitable for entropy optimization principles, with applications in different disciplines of science and technology. The object of this article is to develop a new generalized measure of entropy and to establish the relation between entropy and queueing theory. To fulfill our aim, we have made use of the maximum entropy principle, which provides the most uncertain probability distribution subject to some constraints expressed by mean values.
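A standard illustration of the principle invoked here (not necessarily the exact constraint set used in the article): maximizing $-\sum_n p_n\log p_n$ over distributions on $\{0,1,2,\dots\}$ subject to $\sum_n p_n = 1$ and a prescribed mean queue length $\sum_n n\,p_n = \bar n$ gives the geometric form
$$p_n = \frac{1}{1+\bar n}\left(\frac{\bar n}{1+\bar n}\right)^{\!n},$$
which coincides with the M/M/1 queue-length distribution when $\bar n = \rho/(1-\rho)$.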

20.
ABSTRACT

We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney and Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.
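For reference, the two loss functions mentioned are commonly written (standard forms, with $c\neq 0$ a shape constant) as squared error loss $(\hat\theta-\theta)^2$, whose Bayes estimator is the posterior mean, and general entropy loss $(\hat\theta/\theta)^{c}-c\log(\hat\theta/\theta)-1$, whose Bayes estimator is $\bigl(E[\theta^{-c}\mid\text{data}]\bigr)^{-1/c}$, provided the posterior expectation exists.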
