Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Determining whether per capita output can be characterized by a stochastic trend is complicated by the fact that infrequent breaks in trend can bias standard unit root tests towards nonrejection of the unit root hypothesis. The bulk of the existing literature has focused on the application of unit root tests allowing for structural breaks in the trend function under the trend stationary alternative but not under the unit root null. These tests, however, provide little information regarding the existence and number of trend breaks. Moreover, these tests suffer from serious power and size distortions due to the asymmetric treatment of breaks under the null and alternative hypotheses. This article estimates the number of breaks in trend employing procedures that are robust to the unit root/stationarity properties of the data. Our analysis of the per capita gross domestic product (GDP) for Organization for Economic Cooperation and Development (OECD) countries thereby permits a robust classification of countries according to the “growth shift,” “level shift,” and “linear trend” hypotheses. In contrast to the extant literature, unit root tests conditional on the presence or absence of breaks do not provide evidence against the unit root hypothesis.  相似文献   

2.
In this paper confidence sequences are used to construct sequential procedures for selecting the best of several populations sharing a common variance. These procedures are shown to provide substantial savings, particularly in the expected sample sizes of the inferior populations, over various procedures in the literature. A new “indifference zone” formulation is given for the correct selection probability requirement, and confidence sequences are also applied to construct sequential procedures for this new selection goal.

3.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. In this paper, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis taken over parameter values under the null hypothesis) equals the nominal level. The term “conservative” has obvious implications to confidence intervals (they are “wider” in some sense than their optimum or exact counterparts). Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that this implies that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared to the new method in terms of asymptotic relative efficiency. The new methods are also compared to the optimum methods derived by Land (1971, 1973).  相似文献   
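For context (this is not the simplified conservative procedure of the paper, nor Land's exact method), the sketch below computes a standard normal-theory approximation to a confidence interval for the lognormal mean, often attributed to Cox, from the log-scale sample statistics; the repair-time data are hypothetical.

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, conf=0.95):
    """Approximate CI for E[X] of a lognormal sample (Cox-type method).

    Works on y = log(x): E[X] = exp(mu + sigma^2/2), so an interval is
    built for mu + sigma^2/2 on the log scale and then exponentiated.
    """
    y = np.log(np.asarray(x, dtype=float))
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    point = ybar + s2 / 2.0                          # estimate of log E[X]
    se = np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))   # Cox's standard error
    z = stats.norm.ppf(0.5 + conf / 2.0)
    return np.exp(point - z * se), np.exp(point + z * se)

# Hypothetical repair times (hours) for an MTTR-style demonstration
rng = np.random.default_rng(0)
times = rng.lognormal(mean=1.0, sigma=0.8, size=30)
print(lognormal_mean_ci(times))
```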

4.
The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function to compute maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity and maximum likelihood estimation breaks down. In all applications, however, observed data are really discrete due to the round-off or grouping error of measurements. The “correct likelihood” based on interval censoring can eliminate the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the density-approximation breakdown with specific examples. Although it is usually possible to infer how given data were rounded, when this is not possible, one must choose the width for interval censoring, so we study the effect of the round-off on estimation. We also give sufficient conditions for the joint density to provide the same maximum likelihood estimate as the correct likelihood, as the round-off error goes to zero.  相似文献   
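As a generic illustration of the distinction drawn here (not the article's specific examples), suppose each observation \(x_i\) is recorded only to within a half-width \(\delta\); then the density-approximation likelihood and the interval-censored ("correct") likelihood are

\[
L_{\text{approx}}(\theta)=\prod_{i=1}^{n} f(x_i;\theta),
\qquad
L_{\text{correct}}(\theta)=\prod_{i=1}^{n}\bigl[F(x_i+\delta;\theta)-F(x_i-\delta;\theta)\bigr],
\]

where \(F\) is the distribution function corresponding to the density \(f\). Each factor of the second product is a probability, so \(L_{\text{correct}}\) is bounded above by one and cannot diverge along any path in the parameter space, whereas \(L_{\text{approx}}\) can diverge wherever \(f\) becomes unbounded.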

5.
Interest centres on a group of statisticians, each supplied with the same n sample data points and making formal Bayesian inference with a common likelihood function but differing prior knowledge and utility functions.

Definitions are proposed which quantify, in a commensurable way, the inference processes of “accuracy”, “confidence” and “consensus” for the case of hypothesis inference with a fixed sample size n.

The general significance of comparing the three quantifiers is considered. As n increases the asymptotic behaviour of the quantifiers is evaluated and it is found that the three rates of convergence are of the same order as a function of n. The results are interpreted and some of their implications are discussed.  相似文献   

6.
In most practical situations to which the analysis of variance tests are applied, they do not supply the information that the experimenter aims at. If, for example, in one-way ANOVA the hypothesis is rejected in actual application of the F-test, the resulting conclusion that the true means θ1,…,θk are not all equal would by itself usually be insufficient to satisfy the experimenter. In fact, his problems would begin at this stage. The experimenter may desire to select the “best” population or a subset of the “good” populations; he may like to rank the populations in order of “goodness” or he may like to draw some other inferences about the parameters of interest.

The extensive literature on selection and ranking procedures depends heavily on the use of independence between populations (blocks, treatments, etc.) in the analysis of variance. In practical applications, it is desirable to drop this assumption of independence and to consider cases more general than the normal.

In the present paper, we derive a method to construct optimal (in some sense) selection procedures that select a nonempty subset of the k populations containing the best population as ranked in terms of the θi's, that control the size of the selected subset, and that maximize the minimum average probability of selecting the best. We also consider the usual selection procedures in one-way ANOVA based on the generalized least squares estimates and apply the method to the two-way layout case. Some examples are discussed and some results on comparisons with other procedures are also obtained.

7.

Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.  相似文献   

8.
Observed continuous random variates are effectively “discretized” by the limited accuracy with which they can be recorded. The effects of this on the likelihood function and on maximum likelihood estimation are often overlooked. Some published data for the exponential distribution are examined and the results compared with earlier studies. A note on estimation for the normal distribution is also given.  相似文献   

9.
By replacing the unknown random factors of factor analysis with observed macroeconomic variables, the arbitrage pricing theory (APT) is recast as a multivariate nonlinear regression model with across-equation restrictions. An explicit theoretical justification for the inclusion of an arbitrary, well-diversified market index is given. Using monthly returns on 70 stocks, iterated nonlinear seemingly unrelated regression techniques are employed to obtain joint estimates of asset sensitivities and their associated APT risk “prices.” Without the assumption of normally distributed errors, these estimators are strongly consistent and asymptotically normal. With the additional assumption of normal errors, they are also full-information maximum likelihood estimators. Classical asymptotic nonlinear nested hypothesis tests are supportive of the APT with measured macroeconomic factors.  相似文献   

10.
Many diagnostic tests may be available to identify a particular disease. Diagnostic performance can potentially be improved by combining tests. “Either” and “both” positive strategies for combining tests have been discussed in the literature, where a gain in diagnostic performance is measured by the ratio of the positive (negative) likelihood ratio of the combined test to that of an individual test. Normal-theory and bootstrap confidence intervals are constructed for gains in likelihood ratios. The performance (coverage probability, width) of the two methods is compared via simulation. All confidence intervals perform satisfactorily for large samples, while the bootstrap performs better in smaller samples in terms of coverage and width.
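A minimal sketch of the two combination rules in terms of sensitivities and specificities, assuming (purely for illustration; the article need not make this assumption) that the two tests are conditionally independent given disease status; the operating characteristics below are hypothetical.

```python
def combined_lr(se1, sp1, se2, sp2, rule="both"):
    """Positive/negative likelihood ratios of two combined binary tests,
    assuming conditional independence of the tests given disease status."""
    if rule == "both":          # positive only if both tests are positive
        se = se1 * se2
        sp = 1.0 - (1.0 - sp1) * (1.0 - sp2)
    elif rule == "either":      # positive if at least one test is positive
        se = 1.0 - (1.0 - se1) * (1.0 - se2)
        sp = sp1 * sp2
    else:
        raise ValueError("rule must be 'both' or 'either'")
    plr = se / (1.0 - sp)       # positive likelihood ratio
    nlr = (1.0 - se) / sp       # negative likelihood ratio
    return plr, nlr

# Hypothetical operating characteristics of two individual tests
se1, sp1, se2, sp2 = 0.80, 0.90, 0.70, 0.95
plr_both, nlr_both = combined_lr(se1, sp1, se2, sp2, "both")
plr1, nlr1 = se1 / (1 - sp1), (1 - se1) / sp1
print("gain in PLR over test 1:", plr_both / plr1)
```

A bootstrap interval for such a gain would resample the underlying diseased and non-diseased subjects and recompute this ratio on each resample.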

11.
In this paper a derivation of Akaike's Information Criterion (AIC) is presented to select the number of bins of a histogram given only the data, showing that AIC strikes a balance between the “bias” and “variance” of the histogram estimate. Consistency of the criterion is discussed, an asymptotically optimal histogram bin width for the criterion is derived, and its relationship to penalized likelihood methods is shown. A formula relating the optimal number of bins for a sample and a sub-sample obtained from it is derived. A number of numerical examples are presented.
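A minimal sketch of this type of criterion, assuming equal-width bins and the usual penalized form (multinomial log-likelihood of the histogram density minus the number of free cell probabilities); the exact penalty and bin construction in the paper may differ.

```python
import numpy as np

def aic_bins(x, max_bins=50):
    """Choose the number of equal-width histogram bins by an AIC-type rule:
    maximize (log-likelihood of the histogram density at the data) - (m - 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a, b = x.min(), x.max()
    best_m, best_score = 1, -np.inf
    for m in range(1, max_bins + 1):
        counts, _ = np.histogram(x, bins=m, range=(a, b))
        h = (b - a) / m
        nz = counts[counts > 0]
        loglik = np.sum(nz * np.log(nz / (n * h)))   # log f_hat evaluated at the data
        score = loglik - (m - 1)                     # penalty on m - 1 free cell probabilities
        if score > best_score:
            best_m, best_score = m, score
    return best_m

rng = np.random.default_rng(1)
print(aic_bins(rng.normal(size=500)))
```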

12.
In this article, the Brier score is used to investigate the importance of clustering for the frailty survival model. For this purpose, two versions of the Brier score are constructed, i.e., a “conditional Brier score” and a “marginal Brier score.” Both versions of the Brier score show how the clustering effects and the covariate effects affect the predictive ability of the frailty model separately. Using a Bayesian and a likelihood approach, point estimates and 95% credible/confidence intervals are computed. The estimation properties of both procedures are evaluated in an extensive simulation study for both versions of the Brier score. Further, a validation strategy is developed to calculate an internally validated point estimate and credible/confidence interval. The ensemble of the developments is applied to a dental dataset.  相似文献   
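For orientation, the sketch below computes a bare-bones empirical Brier score for survival predictions at a fixed time point, ignoring censoring (which the frailty setting would require handling, e.g., via inverse-probability-of-censoring weighting); the “conditional” version would plug in cluster-specific, frailty-conditional survival predictions and the “marginal” version predictions with the frailty integrated out. The predictions below are hypothetical.

```python
import numpy as np

def brier_score(event_times, surv_pred, t):
    """Empirical Brier score at time t for uncensored survival data.

    event_times : observed event times, shape (n,)
    surv_pred   : predicted P(T_i > t) for each subject, shape (n,)
    """
    still_alive = (np.asarray(event_times) > t).astype(float)
    return np.mean((still_alive - np.asarray(surv_pred)) ** 2)

# Hypothetical predictions: conditional (cluster-specific frailty plugged in)
# versus marginal (frailty integrated out) for the same subjects.
times = np.array([2.0, 5.0, 1.5, 7.0, 3.0])
cond_pred = np.array([0.30, 0.70, 0.20, 0.85, 0.45])
marg_pred = np.array([0.40, 0.60, 0.35, 0.70, 0.50])
print(brier_score(times, cond_pred, t=4.0), brier_score(times, marg_pred, t=4.0))
```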

13.
We consider small-sample equivalence tests for exponentiality. Statistical inference in this setting is particularly challenging since equivalence testing procedures typically require much larger sample sizes, in comparison with classical “difference tests,” to perform well. We make use of Butler's marginal likelihood for the shape parameter of a gamma distribution in our development of small-sample equivalence tests for exponentiality. We consider two procedures using the principle of confidence interval inclusion, four Bayesian methods, and the uniformly most powerful unbiased (UMPU) test, where a saddlepoint approximation to the intractable distribution of a canonical sufficient statistic is used. We perform small-sample simulation studies to assess the bias of our various tests and show that all of the Bayes posteriors we consider are integrable. Our simulation studies show that the saddlepoint-approximated UMPU method performs remarkably well for small sample sizes and is the only method that consistently exhibits an empirical significance level close to the nominal 5% level.

14.
Teresa Ledwina, Statistics, 2013, 47(1): 105–118
We state some necessary and sufficient conditions for admissibility of tests of simple and composite null hypotheses against “one-sided” alternatives for multivariate exponential distributions with discrete support.

Admissibility of the maximum likelihood test for “one-sided” alternatives and of the χ2 test for the independence hypothesis in r × s contingency tables is deduced, among other results.

15.
Yu Jingwen (余静文) et al., Statistical Research (《统计研究》), 2021, 38(4): 89–102
China's banking sector plays a pivotal role in the financial system and has long been a central component of financial development. Since the start of the 21st century, the structure of China's banking industry, long dominated by state-owned banks, has undergone major change, and banking competition has steadily intensified. Two hypotheses have been advanced concerning the economic effects of banking competition: the “market power hypothesis” and the “information hypothesis.” Against the background of relaxed bank entry regulation and of policies encouraging firms to “go global,” this paper uses matched micro-level data to test these hypotheses empirically. The main findings are as follows. First, banking deregulation helps firms “go global.” Second, this conclusion still holds after propensity score matching is used to address sample selection and instrumental variables are used to address endogeneity. Finally, the reduction in financing costs brought about by banking deregulation is an important channel through which deregulation affects firms' outward expansion, and the results on firms “going global” support the “market power hypothesis” of banking deregulation in the Chinese context. This study provides important evidence on the relationship between banking reform and outward foreign direct investment, tests the “market power hypothesis” and the “information hypothesis” of banking deregulation in the Chinese setting, contributes to a deeper understanding and evaluation of the economic effects of China's banking reform, and carries important implications for advancing the Belt and Road Initiative.

16.
Four basic strands in the disequilibrium literature are identified. Some examples are discussed, and the canonical econometric disequilibrium model and its estimation are dealt with in detail. Specific criticisms of the canonical model, dealing with price and wage rigidity, with the nature of the min condition, and with the price-adjustment equation, are considered, and a variety of modifications is entertained. Tests of the “equilibrium vs. disequilibrium” hypothesis are discussed, as well as several classes of models that may switch between equilibrium and disequilibrium modes. Finally, consideration is given to multimarket disequilibrium models with particular emphasis on the problems of coherence and estimation.
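For reference, the canonical single-market disequilibrium model discussed in this literature is typically written (in generic notation) as

\[
D_t = X_t'\beta + u_t,\qquad
S_t = Z_t'\gamma + v_t,\qquad
Q_t = \min(D_t, S_t),
\]

so that the observed quantity \(Q_t\) is demand or supply, whichever is smaller; a price-adjustment equation such as \(\Delta p_t = \lambda (D_t - S_t)\) is often appended, and it is precisely the min condition and this adjustment rule that the criticisms mentioned above concern.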

17.
Recently, many articles have obtained analytical expressions for the biases of various maximum likelihood estimators, despite their lack of closed-form solution. These bias expressions have provided an attractive alternative to the bootstrap. Unless the bias function is “flat,” however, the expressions are being evaluated at the wrong point(s). We propose an “improved” analytical bias-adjusted estimator, in which the bias expression is evaluated at a more appropriate point (at the bias adjusted estimator itself). Simulations illustrate that the improved analytical bias-adjusted estimator can eliminate significantly more bias than the simple estimator, which has been well established in the literature.  相似文献   
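A minimal sketch of the distinction, using a deliberately hypothetical O(1/n) bias expression b(θ, n) = θ/n: the simple correction evaluates the bias expression at the maximum likelihood estimate, while the improved correction solves the fixed-point equation θ̃ = θ̂ − b(θ̃, n), here by plain iteration.

```python
def simple_correction(theta_hat, bias, n):
    """Classic analytical correction: evaluate the bias expression at the MLE."""
    return theta_hat - bias(theta_hat, n)

def improved_correction(theta_hat, bias, n, tol=1e-10, max_iter=100):
    """'Improved' correction: solve theta = theta_hat - bias(theta, n) by iteration."""
    theta = theta_hat
    for _ in range(max_iter):
        new_theta = theta_hat - bias(theta, n)
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return theta

# Hypothetical O(1/n) bias expression: bias(theta, n) = theta / n
bias = lambda theta, n: theta / n
theta_hat, n = 2.0, 20
print(simple_correction(theta_hat, bias, n), improved_correction(theta_hat, bias, n))
```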

18.

This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are “marginally significant”) correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests.
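The order of magnitude of the two-sided bound can be checked with a simple, non-integrated calculation: for a normal test statistic of z = 1.96 (a two-sided p-value of 0.05), the likelihood ratio of the best-supported point alternative against the null is

\[
\max_{\theta}\frac{\phi(z-\theta)}{\phi(z)}=\frac{\phi(0)}{\phi(1.96)}=e^{1.96^{2}/2}\approx 6.8,
\]

where \(\phi\) is the standard normal density; averaging the numerator over a prior on \(\theta\), as an integrated likelihood ratio does, can only lower this figure, which is consistent with the bound of approximately 7 reported here.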

The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects.  相似文献   

19.
LONG-RUN STRUCTURAL MODELLING (total citations: 3; self-citations: 0; citations by others: 3)
The paper develops a general framework for identification, estimation, and hypothesis testing in cointegrated systems when the cointegrating coefficients are subject to (possibly) non-linear and cross-equation restrictions, obtained from economic theory or other relevant a priori information. It provides a proof of the consistency of the quasi maximum likelihood estimators (QMLE), establishes the relative rates of convergence of the QMLE of the short-run and the long-run parameters, and derives their asymptotic distributions; thus generalizing the results already available in the literature for the linear case. The paper also develops tests of the over-identifying (possibly) non-linear restrictions on the cointegrating vectors. The estimation and hypothesis testing procedures are applied to an Almost Ideal Demand System estimated on U.K. quarterly observations. Unlike many other studies of consumer demand this application does not treat relative prices and real per capita expenditures as exogenously given.  相似文献   

20.
In many areas of application mixed linear models serve as a popular tool for analyzing highly complex data sets. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators, (RE)ML, are commonly pursued. However, it is well-known that these fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to other violations of distributional assumptions. In this article, we propose a new class of robust-efficient estimators for inference in mixed linear models. The new three-step estimation procedure provides truncated generalized least squares and variance components' estimators with hard-rejection weights adaptively computed from the data. More specifically, our data re-weighting mechanism first detects and removes within-subject outliers, then identifies and discards between-subject outliers, and finally it employs maximum likelihood procedures on the “clean” data. Theoretical efficiency and robustness properties of this approach are established.  相似文献   
