Similar Documents
20 similar documents found.
1.
It is well known that heterogeneity between studies in a meta-analysis can be either caused by diversity, for example, variations in populations and interventions, or caused by bias, that is, variations in design quality and conduct of the studies. Heterogeneity that is due to bias is difficult to deal with. On the other hand, heterogeneity that is due to diversity is taken into account by a standard random-effects model. However, such a model generally assumes that heterogeneity does not vary according to study-level variables such as the size of the studies in the meta-analysis and the type of study design used. This paper develops models that allow for this type of variation in heterogeneity and discusses the properties of the resulting methods. The models are fitted using the maximum-likelihood method and by modifying the Paule–Mandel method. Furthermore, a real-world argument is given to support the assumption that the inter-study variance is inversely proportional to study size. Under this assumption, the corresponding random-effects method is shown to be connected with standard fixed-effect meta-analysis in a way that may well appeal to many clinicians. The models and methods that are proposed are applied to data from two large systematic reviews.
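
A minimal numerical sketch of the kind of model the abstract describes: random-effects pooling in which the between-study variance is taken to be inversely proportional to study size, tau_i^2 = phi/n_i, with phi found from a Paule–Mandel-style moment equation. The data, the bisection solver, and the helper names (`solve_phi`, `pooled_estimate`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: random-effects pooling with between-study variance assumed
# inversely proportional to study size, tau_i^2 = phi / n_i. The moment
# equation below is a Paule-Mandel-style heuristic, not the paper's algorithm.
import numpy as np

def pooled_estimate(y, v, phi, n):
    w = 1.0 / (v + phi / n)          # inverse-variance weights under tau_i^2 = phi/n_i
    mu = np.sum(w * y) / np.sum(w)   # weighted mean of the study effects
    return mu, w

def solve_phi(y, v, n, tol=1e-8, max_iter=200):
    """Find phi >= 0 so that the generalized Q statistic equals its expectation k - 1."""
    k = len(y)
    def q_excess(phi):
        mu, w = pooled_estimate(y, v, phi, n)
        return np.sum(w * (y - mu) ** 2) - (k - 1)
    if q_excess(0.0) <= 0:           # no excess heterogeneity: fixed-effect weights suffice
        return 0.0
    lo, hi = 0.0, 1e6
    for _ in range(max_iter):        # bisection on the monotone moment equation
        mid = 0.5 * (lo + hi)
        if q_excess(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# toy data (purely illustrative): effects, within-study variances, study sizes
y = np.array([0.30, 0.10, 0.25, -0.05])
v = np.array([0.02, 0.05, 0.01, 0.08])
n = np.array([500, 200, 900, 120])
phi = solve_phi(y, v, n)
mu, _ = pooled_estimate(y, v, phi, n)
print(f"phi = {phi:.4f}, pooled effect = {mu:.4f}")
```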

2.
It is commonly asserted that the Gibbs sampler is a special case of the Metropolis–Hastings (MH) algorithm. While this statement is true for certain Gibbs samplers, it is not true in general for the version that is taught and used most often, namely, the deterministic scan Gibbs sampler. In this note, I prove that there exist deterministic scan Gibbs samplers that do not exhibit detailed balance and hence cannot be considered MH samplers. The nuances of various Gibbs sampling schemes are discussed.
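
A small sketch, for concreteness, of the object the note is about: a deterministic scan Gibbs sampler that sweeps the two components of a bivariate normal in a fixed order. The target, sweep count, and function name are illustrative; the point is only that the composite sweep kernel is the one whose detailed-balance status is at issue.

```python
# Hedged sketch: deterministic-scan Gibbs sampler for a bivariate normal with
# correlation rho. Each sweep updates x1 | x2 and then x2 | x1 in a fixed order.
import numpy as np

def gibbs_bivariate_normal(rho, n_sweeps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    cond_sd = np.sqrt(1.0 - rho ** 2)        # sd of each full conditional
    draws = np.empty((n_sweeps, 2))
    for t in range(n_sweeps):
        x1 = rng.normal(rho * x2, cond_sd)   # update component 1 given component 2
        x2 = rng.normal(rho * x1, cond_sd)   # then component 2 given the new component 1
        draws[t] = (x1, x2)
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
print("sample correlation:", np.corrcoef(draws.T)[0, 1])
```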

3.
It is shown that the limiting distribution of the augmented Dickey–Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed. In essence, all that is required is that the error process driving the random walk possesses a continuous spectral density that is strictly positive. Furthermore, under the same weak assumptions, the limiting distribution of the ADF test is derived under the alternative of stationarity, and a theoretical explanation is given for the well-known empirical fact that the test's power is a decreasing function of the chosen autoregressive order p. The intuitive reason for the reduced power of the ADF test is that, as p tends to infinity, the p regressors become asymptotically collinear.
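
A hedged illustration of the power remark using `statsmodels`: the ADF test is applied to a simulated stationary AR(1) series (so the unit-root null is false) at several fixed lag orders p. The simulation settings are illustrative, not taken from the paper.

```python
# Hedged sketch: ADF power tends to fall as the autoregressive order p grows.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()   # stationary AR(1), phi = 0.9

for p in (1, 4, 8, 16):
    stat, pval, *_ = adfuller(y, maxlag=p, autolag=None)  # fixed lag order p
    print(f"p = {p:2d}: ADF stat = {stat:6.2f}, p-value = {pval:.3f}")
```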

4.
It is well known that if some observations in a sample from the probability density are not available, then in general the density cannot be estimated. A possible remedy is to use an auxiliary variable that explains the missing mechanism. For this setting a data-driven estimator is proposed that mimics the performance of an oracle that knows all observations in the sample. It is also proved that the estimator adapts to the unknown smoothness of the density and that its mean integrated squared error converges at a minimax rate. A numerical study, together with the analysis of a real data set, shows that the estimator is feasible for small samples.
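
A rough sketch of the general idea of using an auxiliary variable to correct for missingness: an inverse-probability-weighted kernel density estimate under a missing-at-random mechanism driven by an observed auxiliary variable. This is a simple stand-in, not the adaptive oracle-mimicking estimator the abstract proposes; the names and the simulated mechanism are assumptions.

```python
# Hedged sketch: inverse-probability-weighted KDE when some X's are missing
# and the response probability depends only on an observed auxiliary A (MAR).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 2000
a = rng.uniform(size=n)                      # auxiliary variable, always observed
x = rng.normal(loc=a, scale=0.5)             # variable whose density we want
p_obs = 0.2 + 0.7 * a                        # response probability depends on A only
observed = rng.uniform(size=n) < p_obs

# In practice p_obs would be estimated (e.g. logistic regression of `observed`
# on `a`); here the true probabilities are used for clarity.
weights = 1.0 / p_obs[observed]
kde = gaussian_kde(x[observed], weights=weights / weights.sum())

grid = np.linspace(-2, 3, 5)
print(kde(grid))                             # weighted density estimate on a coarse grid
```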

5.
The kappa coefficient is a widely used measure for assessing agreement on a nominal scale. Weighted kappa is an extension of Cohen's kappa that is commonly used for measuring agreement on an ordinal scale. In this article, it is shown that weighted kappa can be computed as a function of unweighted kappas. The latter coefficients are kappa coefficients that correspond to smaller contingency tables that are obtained by merging categories.
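
For orientation, a short sketch computing the quantities the abstract relates, using `sklearn.metrics.cohen_kappa_score`: unweighted kappa, linearly weighted kappa, and the unweighted kappa of a table obtained by merging categories. The ratings and the particular merge are illustrative; the exact identity linking these quantities is the subject of the article itself.

```python
# Hedged sketch: the coefficients involved in the weighted/unweighted kappa relation.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
truth = rng.integers(0, 4, size=500)            # ordinal ratings 0..3 from rater 1
noise = rng.integers(-1, 2, size=500)           # disagreement of at most one category
rater = np.clip(truth + noise, 0, 3)            # ratings from rater 2

print("unweighted kappa:       ", cohen_kappa_score(truth, rater))
print("linearly weighted kappa:", cohen_kappa_score(truth, rater, weights="linear"))

# merge categories {0,1} and {2,3} and recompute an unweighted kappa
merged_truth, merged_rater = truth // 2, rater // 2
print("kappa after merging:    ", cohen_kappa_score(merged_truth, merged_rater))
```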

6.
In this paper, we suggest a similar unit root test statistic for dynamic panel data with fixed effects. The test is based on the LM, or score, principle and is derived under the assumption that the time dimension of the panel is fixed, which is typical in many panel data studies. It is shown that the limiting distribution of the test statistic is standard normal. The similarity of the test with respect to both the initial conditions of the panel and the fixed effects is achieved by allowing for a trend in the model using a parameterisation that has the same interpretation under both the null and alternative hypotheses. This parameterisation can be expected to increase the power of the test statistic. Simulation evidence suggests that the proposed test has empirical size that is very close to the nominal level and considerably more power than other panel unit root tests that assume that the time dimension of the panel is large. As an application of the test, we re-examine the stationarity of real stock prices and dividends using disaggregated panel data over a relatively short period of time. Our results suggest that while real stock prices contain a unit root, real dividends are trend stationary.

7.
The stepwise regression algorithm that is widely used is due to Efroymson. He stated that the F-to-remove value had to be not greater than the F-to-enter value, but did not show that the algorithm could not cycle. Until now nobody appears to have shown this. To prove that the algorithm does converge, an objective function is introduced. It is shown that this objective function decreases or can occasionally remain constant at each step in the algorithm, and hence the algorithm cannot cycle provided that Efroymson's condition is satisfied.
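
A compact sketch, under stated assumptions, of a stepwise selector in the spirit of Efroymson's algorithm: enter the variable with the largest partial F if it exceeds an F-to-enter threshold, then drop any selected variable whose F-to-remove falls below a smaller F-to-remove threshold (the ordering of the two thresholds is the condition the abstract refers to). Thresholds, data, and helper names are illustrative.

```python
# Hedged sketch: forward/backward stepwise selection with F-to-enter/F-to-remove.
import numpy as np

def rss(X, y, cols):
    """Residual sum of squares of OLS of y on the selected columns plus an intercept."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

def partial_f(rss_small, rss_big, df_big):
    """F statistic for the single extra regressor in the larger model."""
    return (rss_small - rss_big) / (rss_big / df_big)

def efroymson_stepwise(X, y, f_enter=4.0, f_remove=3.9):
    n, p = X.shape
    selected = []
    while True:
        changed = False
        # entry step: add the candidate with the largest F-to-enter, if large enough
        candidates = [j for j in range(p) if j not in selected]
        if candidates:
            base = rss(X, y, selected)
            scores = {j: partial_f(base, rss(X, y, selected + [j]),
                                   n - len(selected) - 2) for j in candidates}
            best = max(scores, key=scores.get)
            if scores[best] > f_enter:
                selected.append(best)
                changed = True
        # removal step: drop the weakest selected variable if its F-to-remove is too small
        if selected:
            full = rss(X, y, selected)
            scores = {j: partial_f(rss(X, y, [k for k in selected if k != j]),
                                   full, n - len(selected) - 1) for j in selected}
            worst = min(scores, key=scores.get)
            if scores[worst] < f_remove:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(200)
print("selected columns:", efroymson_stepwise(X, y))
```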

8.
Consider the problem of testing the composite null hypothesis that a random sample X1,…,Xn is from a parent which is a member of a particular continuous parametric family of distributions against an alternative that it is from a separate family of distributions. It is shown here that in many cases a uniformly most powerful similar (UMPS) test exists for this problem, and, moreover, that this test is equivalent to a uniformly most powerful invariant (UMPI) test. It is also seen in the method of proof used that the UMPS test statistic is a function of the statistics U1,…,Un−k obtained by the conditional probability integral transformations (CPIT), and thus that no information is lost by these transformations. It is also shown that these optimal tests have power that is a monotone function of the null hypothesis class of distributions, so that, for example, if one additional parameter of the distribution is assumed known, then the power of the test cannot decrease. It is shown that the statistics U1,…,Un−k are independent of the complete sufficient statistic, and that these statistics have important invariance properties. Two examples are given. The UMPS tests for testing the two-parameter uniform family against the two-parameter exponential family, and for testing one truncation-parameter distribution against another, are derived.

9.
Using manufacturing data for 2002-2011, this paper calculates and compares unit labour costs in manufacturing between China and its main competitor countries, and finds that China's unit labour cost is now higher than that of Southeast Asian countries such as Indonesia, Thailand and Malaysia. Specifically, the unit labour cost of the eastern region exceeded Indonesia's after 2002, surpassed that of the central and western regions and Thailand in 2007, and surpassed Malaysia's in 2011; the unit labour cost of the central and western regions exceeded Indonesia's after 2002, surpassed Thailand's in 2009, and surpassed Malaysia's in 2011, mainly because China's hourly labour cost has risen too quickly. Compared with Indonesia, Thailand, Malaysia and other Southeast Asian countries, China therefore no longer enjoys a labour-cost advantage; together with the disappearance of China's demographic dividend in recent years, labour costs in the central and western regions are now also higher than in these Southeast Asian countries, which explains why the central and western regions have not been able to take over industrial transfer from the eastern region in time.

10.
We report the results of a period change analysis of time series observations for 378 pulsating variable stars. The null hypothesis of no trend in expected periods is tested for each of the stars. The tests are non-parametric in that potential trends are estimated by local linear smoothers. Our testing methodology has some novel features. First, the null distribution of a test statistic is defined to be the distribution that results in repeated sampling from a population of stars. This distribution is estimated by means of a bootstrap algorithm that resamples from the collection of 378 stars. Bootstrapping in this way obviates the problem that the conditional sampling distribution of a statistic, given a particular star, may depend on unknown parameters of that star. Another novel feature of our test statistics is that one-sided cross-validation is used to choose the smoothing parameters of the local linear estimators on which they are based. It is shown that doing so results in tests that are tremendously more powerful than analogous tests that are based on the usual version of cross-validation. The positive false discovery rate method of Storey is used to account for the fact that we simultaneously test 378 hypotheses. We ultimately find that 56 of the 378 stars have changes in mean pulsation period that are significant when controlling the positive false discovery rate at the 5% level.
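
A very reduced sketch of the resampling idea described above: per-star trend statistics whose common null distribution is estimated by resampling whole stars. The trend statistic here is an ordinary standardised regression slope, and the within-star permutation used to impose the no-trend null is an illustrative device, not the authors' local-linear, one-sided-cross-validated construction.

```python
# Hedged sketch: star-level bootstrap of a trend statistic's null distribution.
import numpy as np

rng = np.random.default_rng(10)

def trend_stat(times, periods):
    """Standardised slope (t statistic) of a straight-line fit of period on time."""
    t = times - times.mean()
    slope = (t @ (periods - periods.mean())) / (t @ t)
    resid = periods - periods.mean() - slope * t
    se = np.sqrt((resid @ resid) / (len(t) - 2) / (t @ t))
    return slope / se

# toy catalogue: each "star" is a pair of arrays (observation times, measured periods)
stars = [(np.sort(rng.uniform(0, 100, size=30)),
          1.0 + 0.001 * rng.standard_normal(30)) for _ in range(50)]

observed = np.array([trend_stat(t, p) for t, p in stars])

# resample whole stars with replacement; permuting each resampled star's periods
# imposes the "no trend" null before the statistic is recomputed
n_boot, null_stats = 200, []
for _ in range(n_boot):
    for idx in rng.integers(0, len(stars), size=len(stars)):
        t, p = stars[idx]
        null_stats.append(trend_stat(t, rng.permutation(p)))
null_stats = np.array(null_stats)

# two-sided p-value for each star against the pooled null sample
p_values = np.array([(np.abs(null_stats) >= abs(s)).mean() for s in observed])
print("five smallest p-values:", np.sort(p_values)[:5])
```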

11.
Abstract: Admati and Pfleiderer [1] argue that an increase in trading intensity may come either from informed trading or from liquidity trading. By analysing the relationships among duration, trading volume and volatility in the Chinese stock market, this paper provides evidence for distinguishing informed trading from liquidity trading. In contrast to the conclusions of related studies abroad, our empirical results indicate a nonlinear relationship between volatility and duration: when trading volume is small, increases in trading intensity come mainly from liquidity trading, whereas when trading volume is large, they come mainly from informed trading. Finally, we perform robustness checks on these results; by analysing how intraday volatility patterns affect them, we also find that informed trading in the Chinese stock market typically occurs shortly after the market opens.

12.
The smoothness of Tukey depth contours is a regularity condition often encountered in asymptotic theory and elsewhere. This condition ensures that the Tukey depth fully characterizes the underlying multivariate probability distribution. In this paper we demonstrate that this regularity condition is rarely satisfied. It is shown that even well-behaved probability distributions with symmetrical, smooth and (strictly) quasi-concave densities may have non-smooth Tukey depth contours, and that the smoothness behaviour of depth contours is fairly unpredictable.
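
To make the object concrete, a brute-force sketch of the Tukey (halfspace) depth in two dimensions: the depth of a point is the smallest fraction of sample points lying in a closed halfspace containing it, approximated here by scanning directions. Contours of equal depth are the level sets of this function. Sample and parameters are illustrative only.

```python
# Hedged sketch: approximate halfspace (Tukey) depth in the plane by direction scan.
import numpy as np

def tukey_depth(point, data, n_directions=360):
    angles = np.linspace(0.0, np.pi, n_directions, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])   # unit directions
    proj_data = data @ dirs.T                                   # projections of the sample
    proj_pt = point @ dirs.T                                    # projections of the query point
    below = (proj_data <= proj_pt).mean(axis=0)                 # mass on one side
    above = (proj_data >= proj_pt).mean(axis=0)                 # mass on the other side
    return float(np.minimum(below, above).min())

rng = np.random.default_rng(5)
data = rng.standard_normal((500, 2))
print("depth of the origin:      ", tukey_depth(np.zeros(2), data))
print("depth of a remote point:  ", tukey_depth(np.array([3.0, 3.0]), data))
```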

13.
Rank tests are considered that compare t treatments in repeated measures designs. A statistic is given that contains as special cases several that have been proposed for this problem, including one that corresponds to the randomized block ANOVA statistic applied to the rank-transformed data. Another statistic is proposed, having a null distribution that holds under more general conditions, which is the rank transform of the Hotelling statistic for repeated measures. A statistic of this type is also given for data that are ordered categorical rather than fully ranked. Unlike the Friedman statistic, the statistics discussed in this article utilize a single ranking of the entire sample. Power calculations for an underlying normal distribution indicate that the rank-transformed ANOVA test can be substantially more powerful than the Friedman test.

14.
The problem of improving upon the usual set estimator of a multivariate normal mean has only recently seen significant advances. Improved sets that take advantage of the Stein effect have been constructed. It is shown here that the Stein effect is so powerful that one can construct improved confidence sets that can have zero radius on a set of positive probability. Other, somewhat more sensible, sets which attain arbitrarily small radius are also constructed, and it is argued that one way to eliminate unreasonable confidence sets is through a conditional evaluation.

15.
It is well known that the ratio and product estimators have the limitation that their efficiency cannot exceed that of the linear regression estimator. This paper develops a new approach to ratio estimation that produces a more precise and efficient ratio estimator, one that is superior to the regression estimator in both efficiency and bias. An empirical study is given.
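
For reference, a toy sketch of the two classical estimators the abstract compares, namely the ratio and linear regression estimators of a population mean when the population mean of an auxiliary variable is known. It does not implement the paper's new estimator; all numbers are illustrative.

```python
# Hedged sketch: classical ratio and regression estimators of a population mean.
import numpy as np

rng = np.random.default_rng(2)
N, n = 10_000, 100
x_pop = rng.gamma(shape=2.0, scale=5.0, size=N)          # auxiliary variable
y_pop = 3.0 * x_pop + rng.normal(scale=4.0, size=N)      # study variable
X_bar = x_pop.mean()                                     # known population mean of x

idx = rng.choice(N, size=n, replace=False)               # simple random sample
x, y = x_pop[idx], y_pop[idx]

ratio_est = (y.mean() / x.mean()) * X_bar                # ratio estimator of Y-bar
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)       # OLS slope from the sample
regression_est = y.mean() + b * (X_bar - x.mean())       # regression estimator of Y-bar

print(f"true Y-bar = {y_pop.mean():.2f}, ratio = {ratio_est:.2f}, "
      f"regression = {regression_est:.2f}")
```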

16.
Dimitrov and Khalil (1992) introduced a class of new probability distributions for modeling environmental evolution with periodic behavior. One of the key parameters in these distributions is α, the probability that the event being studied does not occur. In that article the authors derive an estimator for this parameter under a series of conditions. In this article it is shown that the estimator is valid under more general conditions, i.e., some of those assumptions are not necessary. Under the assumption that the elapsed time, measured from the starting point of a period until the first occurrence of the event given that the event occurred in that cycle, is related to α, an approximate maximum likelihood estimator of α is proposed. The large-sample properties of the estimator are discussed. A Monte Carlo study is carried out to support the theoretical results.

17.
Most economists consider that the cases of negative information value that non-Bayesian decision makers seem to exhibit clearly show that these models do not represent rational behaviour. We consider this issue for Choquet Expected Utility maximizers in a simple framework, namely the problem of choosing on which event to bet. First, we find a necessary condition to prevent negative information value, which we call Separative Monotonicity. This is a weaker condition than Savage's Sure-Thing Principle, and it appears that necessity and possibility measures satisfy it and that we can find conditioning rules such that the information value is always positive. In the second part, we question the way information value is usually measured and suggest that negative information values merely result from an inadequate formula. Nevertheless, we suggest imposing what appears to be a weaker requirement, namely that the betting strategy should not be Statistically Dominated. We show that, for classical updating rules applied to belief functions, this requirement is violated. We consider a class of conditioning rules and exhibit a necessary and sufficient condition for the Statistical Dominance criterion to be satisfied in the case of belief functions.

18.
It is well known that, in the continuous case, the probability that two consecutive order statistics are equal is zero, whereas this is not true when the distribution is discrete. It is, perhaps, for this reason that order statistics from discrete distributions have not been investigated in the literature as much as those from continuous distributions. The main purpose of this paper, therefore, is to obtain the probability of ties when the distribution is discrete. It is also shown that, in the discrete case, the Markov property does not hold in general. However, the order statistics from a geometric distribution do form a Markov chain.
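
A quick simulation sketch of the contrast the abstract draws: the probability of a tie between consecutive order statistics, estimated for a continuous (uniform) sample, where it is zero, and for a discrete (geometric) sample, where it is strictly positive. Sample size and parameters are arbitrary assumptions.

```python
# Hedged sketch: probability of ties among consecutive order statistics,
# continuous versus discrete parent distributions.
import numpy as np

rng = np.random.default_rng(4)

def prob_any_consecutive_tie(sampler, n=5, reps=20_000):
    ties = 0
    for _ in range(reps):
        s = np.sort(sampler(n))
        if np.any(s[1:] == s[:-1]):        # some pair of consecutive order statistics equal
            ties += 1
    return ties / reps

continuous = lambda n: rng.uniform(size=n)         # continuous parent: ties have probability 0
discrete = lambda n: rng.geometric(p=0.3, size=n)  # discrete parent: ties occur with positive probability

print("continuous (uniform): ", prob_any_consecutive_tie(continuous))
print("discrete (geometric): ", prob_any_consecutive_tie(discrete))
```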

19.
Consider a two-by-two factorial experiment with more than one replicate. Suppose that we have uncertain prior information that the two-factor interaction is zero. We describe new simultaneous frequentist confidence intervals for the four population cell means, with simultaneous confidence coefficient 1 − α, that utilize this prior information in the following sense. These simultaneous confidence intervals define a cube with expected volume that (a) is relatively small when the two-factor interaction is zero and (b) has maximum value that is not too large. Also, these intervals coincide with the standard simultaneous confidence intervals obtained by Tukey’s method, with simultaneous confidence coefficient 1 − α, when the data strongly contradict the prior information that the two-factor interaction is zero. We illustrate the application of these new simultaneous confidence intervals to a real data set.

20.
Further remarks on statistical issues concerning the agricultural and non-agricultural population   (Cited by 7: 0 self-citations, 7 by others)
The times are advancing and the economy is developing; ideas must change and thinking must improve, the relations of production must adapt to the development of the productive forces, and all rules and regulations must meet the requirements of social development and be gradually perfected through continuous improvement. All of this, moreover, must be carried out under the guidance of Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory and the important thought of the "Three Represents". The author has worked for many years in statistics, in practical economic work and in teaching, and deeply …
