Similar documents
1.
It is well known that heterogeneity between studies in a meta-analysis can be either caused by diversity, for example, variations in populations and interventions, or caused by bias, that is, variations in design quality and conduct of the studies. Heterogeneity that is due to bias is difficult to deal with. On the other hand, heterogeneity that is due to diversity is taken into account by a standard random-effects model. However, such a model generally assumes that heterogeneity does not vary according to study-level variables such as the size of the studies in the meta-analysis and the type of study design used. This paper develops models that allow for this type of variation in heterogeneity and discusses the properties of the resulting methods. The models are fitted using the maximum-likelihood method and by modifying the Paule–Mandel method. Furthermore, a real-world argument is given to support the assumption that the inter-study variance is inversely proportional to study size. Under this assumption, the corresponding random-effects method is shown to be connected with standard fixed-effect meta-analysis in a way that may well appeal to many clinicians. The models and methods that are proposed are applied to data from two large systematic reviews.
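As a rough illustration of the assumption highlighted in this abstract, the sketch below (with entirely hypothetical study inputs and an illustrative proportionality constant φ) pools effect estimates under a random-effects model in which the between-study variance for study i is taken to be φ/nᵢ. When the within-study variances are also roughly proportional to 1/nᵢ, the resulting weights are proportional to study size, which is the connection with fixed-effect meta-analysis mentioned above; the paper's maximum-likelihood and modified Paule–Mandel fitting is not reproduced here.

```python
import numpy as np

# Hypothetical study-level inputs (illustrative only, not from the reviews in the paper).
y = np.array([0.30, 0.10, 0.25, 0.05])      # study effect estimates
n = np.array([50, 200, 120, 400])            # study sizes
v = np.array([0.040, 0.010, 0.017, 0.005])   # within-study variances (roughly prop. to 1/n)

def pooled_estimate(y, v, tau2):
    """Inverse-variance pooled effect with (possibly study-specific) between-study variance."""
    w = 1.0 / (v + tau2)
    est = np.sum(w * y) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Standard random-effects model: one common tau^2 for all studies (illustrative value).
print(pooled_estimate(y, v, tau2=0.01))

# Assumption discussed in the abstract: between-study variance inversely proportional
# to study size, tau_i^2 = phi / n_i (phi is an illustrative value here).
phi = 2.0
print(pooled_estimate(y, v, tau2=phi / n))

# Under tau_i^2 = phi/n_i and v_i roughly proportional to 1/n_i, the relative study
# weights are proportional to n_i, i.e. the same as in a fixed-effect analysis.
```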

2.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with the Ordinal Optimization procedure to select a subset that overlaps with the set of the actual best m% systems with high probability. Then we use Optimal Computing Budget Allocation to allocate the available computing budget in a way that maximizes the Probability of Correct Selection. This is followed by a Subset Selection procedure to get a smaller subset that contains the best system among those selected before. Finally, the Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical tests involving all these procedures show the results for selecting a good stochastic system with high probability and a minimum number of simulation samples when the number of alternatives is large. The results also show that the proposed approach is able to identify a good system in a very short simulation time.
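The abstract combines several selection procedures; the sketch below illustrates only the Optimal Computing Budget Allocation step, using one standard asymptotic OCBA allocation rule with hypothetical first-stage sample statistics. It is an assumption-laden sketch rather than the authors' full sequential procedure.

```python
import numpy as np

def ocba_allocation(means, stds, total_budget, minimize=True):
    """One standard asymptotic OCBA allocation rule: distribute `total_budget`
    replications across systems so as to favour the probability of correctly
    selecting the best system, given first-stage sample means and std devs."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    k = len(means)
    b = int(np.argmin(means)) if minimize else int(np.argmax(means))
    delta = means - means[b]                  # distances from the current best
    nonbest = [i for i in range(k) if i != b]
    ref = nonbest[0]                          # express ratios relative to one non-best system
    ratios = np.ones(k)
    for i in nonbest:
        ratios[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
    # Allocation for the current best system.
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios[nonbest] / stds[nonbest]) ** 2))
    alloc = total_budget * ratios / ratios.sum()
    return np.round(alloc).astype(int)

# Hypothetical sample statistics from an initial batch of replications per system.
sample_means = [5.2, 4.8, 6.1, 5.0, 5.9]
sample_stds  = [1.0, 1.2, 0.8, 1.1, 0.9]
print(ocba_allocation(sample_means, sample_stds, total_budget=1000))
```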

3.
Most economists consider that the cases of negative information value that non-Bayesian decision makers seem to exhibit clearly show that these models do not represent rational behaviour. We consider this issue for Choquet Expected Utility maximizers in a simple framework, namely the problem of choosing on which event to bet. First, we find a necessary condition to prevent negative information value, which we call Separative Monotonicity. This is a weaker condition than Savage's Sure Thing Principle, and it appears that necessity and possibility measures satisfy it and that we can find conditioning rules such that the information value is always positive. In a second part, we question the way information value is usually measured and suggest that negative information values merely result from an inadequate formula. We then suggest imposing what appears to be a weaker requirement, namely that the betting strategy should not be Statistically Dominated. We show that this requirement is violated for classical updating rules applied to belief functions. We consider a class of conditioning rules and exhibit a necessary and sufficient condition for the Statistical Dominance criterion to be satisfied in the case of belief functions.

4.
This article suggests an alternative to the ratio estimator for estimating the total size of a subdomain of a population. The application that served as the genesis for this work is from auditing. The problem is to estimate the total of sales transactions that are not tax exempt from an audit sample of the population of nontaxed sales transactions. A superpopulation approach, which models the unit's probability of belonging to the subdomain as a function of its size, leads to a family of estimators. The simplest member of this family is one in which that function is specified to be a constant. The optimal estimator for this model performs markedly better than the ratio estimator when the assumption is true and often performs better when it is not, though in that case it is biased. Stratification is shown to reduce this bias and at the same time make the ratio estimator more similar to the optimal estimator. A simulation experiment shows that the theoretical advantages hold in a real audit population.

5.
Quadratic forms capture multivariate information in a single number, making them useful, for example, in hypothesis testing. When a quadratic form is large and hence interesting, it might be informative to partition the quadratic form into contributions of individual variables. In this paper it is argued that meaningful partitions can be formed, though the precise partition that is determined will depend on the criterion used to select it. An intuitively reasonable criterion is proposed and the partition to which it leads is determined. The partition is based on a transformation that maximises the sum of the correlations between individual variables and the variables to which they transform under a constraint. Properties of the partition, including optimality properties, are examined. The contributions of individual variables to a quadratic form are less clear‐cut when variables are collinear, and forming new variables through rotation can lead to greater transparency. The transformation is adapted so that it has an invariance property under such rotation, whereby the assessed contributions are unchanged for variables that the rotation does not affect directly. Application of the partition to Hotelling's one‐ and two‐sample test statistics, Mahalanobis distance and discriminant analysis is described and illustrated through examples. It is shown that bootstrap confidence intervals for the contributions of individual variables to a partition are readily obtained.

6.
7.
Multiple linear regression techniques are applied to determine the relative batting and bowling strengths and a common home advantage for teams playing both innings of international one-day cricket and the first innings of a test-match. It is established that in both forms of the game Australia and South Africa were rated substantially above the other teams. It is also shown that home teams generally enjoyed a significant advantage. Using the relative batting and bowling strengths of teams, together with parameters that are associated with common home advantage, winning the toss and the establishment of a first-innings lead, multinomial logistic regression techniques are applied to explore further how these factors critically affect outcomes of test-matches. It is established that in test cricket a team's first-innings batting and bowling strength, first-innings lead, batting order and home advantage are strong predictors of a winning match outcome. Contrary to popular opinion, it is found that the team batting second in a test enjoys a significant advantage. Notably, the relative superiority of teams during the fourth innings of a test-match, but not the third innings, is a strong predictor of a winning outcome. There is no evidence to suggest that teams generally gained a winning advantage as a result of winning the toss.

8.
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
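The deceleration mechanism described in the second part of this abstract can be illustrated numerically. The sketch below, with made-up mixing weights and Gompertz parameters, computes the population hazard of a three-component Gompertz mixture; it does not implement the paper's half-logistic-transform estimation or local polynomial modelling.

```python
import numpy as np

def gompertz_hazard(t, a, b):
    return a * np.exp(b * t)

def gompertz_survival(t, a, b):
    return np.exp(-(a / b) * (np.exp(b * t) - 1.0))

def mixture_hazard(t, weights, params):
    """Population hazard of a mixture of Gompertz sub-populations:
    sub-population hazards weighted by prevalence and survival to t."""
    num = sum(w * gompertz_survival(t, a, b) * gompertz_hazard(t, a, b)
              for w, (a, b) in zip(weights, params))
    den = sum(w * gompertz_survival(t, a, b)
              for w, (a, b) in zip(weights, params))
    return num / den

# Made-up mixing weights and (a, b) parameters for three sub-populations
# (frail, average, robust); t is age in years above 60.
weights = [0.2, 0.5, 0.3]
params = [(0.05, 0.14), (0.01, 0.12), (0.002, 0.11)]

for t in range(0, 51, 5):
    print(60 + t, round(float(np.log(mixture_hazard(t, weights, params))), 3))
# Depending on the extent of heterogeneity, the slope of the log hazard can flatten
# over some age interval (deceleration) and then steepen again as the frailer
# sub-populations die out.
```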

9.
Birnbaum's proof that C and M imply L would lose its force if it were shown that in some situations M is not acceptable. Godambe (1979) has shown that Birnbaum's M is not as obvious or intuitive as the concept that a 'mere relabelling' of sample points should make no difference to the inference that can appropriately be drawn from a particular outcome of a given experiment. Akaike (1982) has shown that in certain situations M amounts to the assertion that a relabelling of sample points involving a false reporting of the outcome of an experiment should make no difference to the inference drawn from a particular outcome of a given experiment. It is shown in this paper that in the situation discussed by Akaike, even if M were to be considered acceptable, it is only a modified conditionality principle C′ together with M that can formally imply L; Birnbaum's conditionality principle C and M do not imply L.

10.
This paper considers the problem of making statistical inferences about a parameter when a narrow interval centred at a given value of the parameter is considered special, which is interpreted as meaning that there is a substantial degree of prior belief that the true value of the parameter lies in this interval. A clear justification of the practical importance of this problem is provided. The main difficulty with the standard Bayesian solution to this problem is discussed and, as a result, a pseudo-Bayesian solution is put forward based on determining lower limits for the posterior probability of the parameter lying in the special interval by means of a sensitivity analysis. Since it is not assumed that prior beliefs necessarily need to be expressed in terms of prior probabilities, nor that post-data probabilities must be Bayesian posterior probabilities, hybrid methods of inference are also proposed that are based on specific ways of measuring and interpreting the classical concept of significance. The various methods that are outlined are compared and contrasted at both a foundational level, and from a practical viewpoint by applying them to real data from meta-analyses that appeared in a well-known medical article.

11.
In England, so-called 'league tables' based on examination results and test scores are published annually, ostensibly to inform parental choice of secondary schools. A crucial limitation of these tables is that the most recent published information is based on the current performance of a cohort of pupils who entered secondary schools several years earlier, whereas for choosing a school it is the future performance of the current cohort that is of interest. We show that there is substantial uncertainty in predicting such future performance and that incorporating this uncertainty leads to a situation where only a handful of schools' future performances can be separated from both the overall mean and from one another with an acceptable degree of precision. This suggests that school league tables, including value-added tables, have very little to offer as guides to school choice.

12.
This article proposes a new data‐based prior distribution for the error variance in a Gaussian linear regression model, when the model is used for Bayesian variable selection and model averaging. For a given subset of variables in the model, this prior has a mode that is an unbiased estimator of the error variance but is suitably dispersed to make it uninformative relative to the marginal likelihood. The advantage of this empirical Bayes prior for the error variance is that it is centred and dispersed sensibly and avoids the arbitrary specification of hyperparameters. The performance of the new prior is compared to that of a prior proposed previously in the literature using several simulated examples and two loss functions. For each example our paper also reports results for the model that orthogonalizes the predictor variables before performing subset selection. A real example is also investigated. The empirical results suggest that for both the simulated and real data, the performance of the estimators based on the prior proposed in our article compares favourably with that of a prior used previously in the literature.

13.
It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model to estimate the malaria transition probabilities considering individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the considered period of time is small enough to avoid two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on loss to follow-up at the end of the longitudinal study. To simulate the unknown number of individuals with positive and negative states of malaria at the end of the study and lost to follow-up, two latent variables were introduced in the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm did not show problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative to estimate probabilities of transitions from one state of health to the other in studies with low adherence to follow-up.
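A greatly simplified sketch of the data-augmentation idea is given below: the unknown end states of individuals lost to follow-up are treated as a latent count and sampled inside a Gibbs loop together with a beta-distributed transition probability. The counts, the prior, and the restriction to a single starting state are illustrative assumptions, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative counts for individuals who started in the negative state:
x_obs = 30        # observed transitions negative -> positive
y_obs = 150       # observed as remaining negative
m_lost = 40       # lost to follow-up, end state unknown

n_iter, burn_in = 5000, 1000
p = 0.5                               # transition probability, starting value
draws = []
for it in range(n_iter):
    # Latent step: allocate the lost-to-follow-up individuals to the two end states.
    z = rng.binomial(m_lost, p)       # latent number of transitions among the lost
    # Conjugate update under a Beta(1, 1) prior on the transition probability.
    p = rng.beta(1 + x_obs + z, 1 + y_obs + (m_lost - z))
    if it >= burn_in:
        draws.append(p)

draws = np.array(draws)
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))   # posterior mean and 95% interval
```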

14.
The power of a clinical trial is partly dependent upon its sample size. With continuous data, the sample size needed to attain a desired power is a function of the within-group standard deviation. An estimate of this standard deviation can be obtained during the trial itself based upon interim data; the estimate is then used to re-estimate the sample size. Gould and Shih proposed a method, based on the EM algorithm, which they claim produces a maximum likelihood estimate of the within-group standard deviation while preserving the blind, and that the estimate is quite satisfactory. However, others have claimed that the method can produce non-unique and/or severe underestimates of the true within-group standard deviation. Here the method is thoroughly examined to resolve the conflicting claims and, via simulation, to assess its validity and the properties of its estimates. The results show that the apparent non-uniqueness of the method's estimate is due to an apparently innocuous alteration that Gould and Shih made to the EM algorithm. When this alteration is removed, the method is valid in that it produces the maximum likelihood estimate of the within-group standard deviation (and also of the within-group means). However, the estimate is negatively biased and has a large standard deviation. The simulations show that with a standardized difference of 1 or less, which is typical in most clinical trials, the standard deviation from the combined samples ignoring the groups is a better estimator, despite its obvious positive bias.
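For orientation, the sketch below runs a plain EM algorithm for a 50/50 mixture of two normals with a common variance, which is the kind of blinded estimation problem examined in this abstract. It follows the unaltered EM update rather than reproducing Gould and Shih's exact procedure or the alteration discussed in the paper, and the data are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated blinded data: two groups of 50 pooled without treatment labels (illustrative).
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(0.5, 1.0, 50)])

def blinded_em(x, n_iter=200):
    """Plain EM for a 50/50 mixture of two normals with a common variance sigma^2."""
    mu1, mu2 = x.min(), x.max()          # crude starting values
    sigma2 = x.var()
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from group 1
        # (the equal mixing proportions cancel in the ratio).
        d1 = np.exp(-(x - mu1) ** 2 / (2 * sigma2))
        d2 = np.exp(-(x - mu2) ** 2 / (2 * sigma2))
        r = d1 / (d1 + d2)
        # M-step: update the group means and the common within-group variance.
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        sigma2 = np.sum(r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2) / len(x)
    return mu1, mu2, np.sqrt(sigma2)

print(blinded_em(x))   # estimated group means and within-group standard deviation
```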

15.
In this paper, we suggest a similar unit root test statistic for dynamic panel data with fixed effects. The test is based on the LM, or score, principle and is derived under the assumption that the time dimension of the panel is fixed, which is typical in many panel data studies. It is shown that the limiting distribution of the test statistic is standard normal. The similarity of the test with respect to both the initial conditions of the panel and the fixed effects is achieved by allowing for a trend in the model using a parameterisation that has the same interpretation under both the null and alternative hypotheses. This parameterisation can be expected to increase the power of the test statistic. Simulation evidence suggests that the proposed test has empirical size that is very close to the nominal level and considerably more power than other panel unit root tests that assume that the time dimension of the panel is large. As an application of the test, we re-examine the stationarity of real stock prices and dividends using disaggregated panel data over a relatively short period of time. Our results suggest that while real stock prices contain a unit root, real dividends are trend stationary.

16.
It is well known that if some observations in a sample from the probability density are not available, then in general the density cannot be estimated. A possible remedy is to use an auxiliary variable that explains the missing mechanism. For this setting a data-driven estimator is proposed that mimics the performance of an oracle that knows all observations from the sample. It is also proved that the estimator adapts to unknown smoothness of the density and its mean integrated squared error converges with a minimax rate. A numerical study, together with the analysis of real data, shows that the estimator is feasible for small samples.
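The sketch below is not the paper's adaptive series estimator; it only illustrates the general remedy mentioned in the abstract, re-weighting the observed data by an estimated probability of being observed given the auxiliary variable, here with a crude binned estimate of that probability and a weighted kernel density estimate. All data and tuning choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated complete data and an auxiliary variable that drives missingness (illustrative).
n = 2000
x = rng.normal(0.0, 1.0, n)                  # variable of interest
a = x + rng.normal(0.0, 1.0, n)              # auxiliary variable, always observed
p_obs = 1.0 / (1.0 + np.exp(-a))             # probability of observing x, depends on a only
observed = rng.random(n) < p_obs

# Crude binned estimate of the missingness mechanism P(observed | a), clipped away from 0.
bins = np.quantile(a, np.linspace(0, 1, 21))
idx = np.clip(np.digitize(a, bins) - 1, 0, 19)
p_bin = np.array([observed[idx == j].mean() for j in range(20)])
p_hat = np.clip(p_bin, 0.05, 1.0)[idx]

def ipw_kde(grid, x_obs, w, h=0.25):
    """Inverse-probability-weighted Gaussian kernel density estimate."""
    z = (grid[:, None] - x_obs[None, :]) / h
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (k * w[None, :]).sum(axis=1) / (w.sum() * h)

w = 1.0 / p_hat[observed]                    # weights for the observed x values
grid = np.linspace(-3, 3, 7)
print(np.round(ipw_kde(grid, x[observed], w), 3))
```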

17.
This paper demonstrates how to plan a contingent valuation experiment to assess the value of ecologically produced clothes. First, an appropriate statistical model (the trinomial spike model) that describes the probability that a randomly selected individual will accept any positive bid, and if so, will accept the bid A, is defined. Secondly, an optimization criterion that is a function of the variances of the parameter estimators is chosen. However, the variances of the parameter estimators in this model depend on the true parameter values. Pilot study data are therefore used to obtain estimates of the parameter values and a locally optimal design is found. Because this design is only optimal given that the estimated parameter values are correct, a design that minimizes the maximum of the criterion function over a plausible parameter region (i.e. a minimax design) is then found.

18.
It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observation of p < 0.05 is sufficient justification to claim that you have made a discovery. It is hopeless to expect users to change their reliance on p-values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being adopted widely, it will have to be easy to understand and to calculate. One such proposal is based on calculation of the false positive risk (FPR). It is suggested that p-values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p-value. This number could be the minimum FPR (that calculated on the assumption of a prior probability of 0.5, the largest value that can be assumed in the absence of hard prior data). Alternatively, one could specify the prior probability that it would be necessary to believe in order to achieve an FPR of, say, 0.05.
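One widely quoted simplified form of the false positive risk, based on the "p-less-than" interpretation, is FPR = α(1 − π)/(α(1 − π) + power × π), where π is the prior probability of a real effect; the "p-equals" likelihood-ratio calculation used in some versions of this proposal generally gives larger values, so the sketch below is only indicative. It also inverts the formula to give the prior probability needed for a target FPR, as described at the end of the abstract.

```python
def false_positive_risk(p_value, power, prior):
    """Simplified 'p-less-than' false positive risk:
    FPR = alpha*(1 - prior) / (alpha*(1 - prior) + power*prior),
    treating the observed p-value as the significance threshold alpha."""
    a = p_value
    return a * (1 - prior) / (a * (1 - prior) + power * prior)

def prior_needed(p_value, power, target_fpr):
    """Prior probability of a real effect needed to achieve `target_fpr`
    (obtained by inverting the formula above)."""
    a, f = p_value, target_fpr
    return a * (1 - f) / (power * f + a * (1 - f))

# Example: p = 0.05, power = 0.8.
print(false_positive_risk(0.05, 0.8, prior=0.5))   # minimum FPR with prior 0.5, about 0.059
print(prior_needed(0.05, 0.8, target_fpr=0.05))    # prior needed for FPR = 0.05, about 0.54
```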

19.
In this paper, we propose a method to model the relationship between degradation and failure time for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to induce failure experimentally and a tampered failure rate model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates of the model parameters are obtained through the expectation–maximization algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte-Carlo simulation. Finally, a real-world example is analysed to illustrate the application of the proposed methods.

20.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a 'positive outcome'. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two‐sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given.
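The definition of assurance lends itself to a simple Monte Carlo check. The sketch below, for a two-arm trial with a normally distributed outcome, known standard deviation and a normal prior on the treatment effect (all values illustrative), averages the conditional power over draws from the prior; the closed-form results derived in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def power_two_arm(delta, sigma, n_per_arm, alpha=0.05):
    """Power of a two-sided two-sample z-test with known sigma."""
    z = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * np.sqrt(2.0 / n_per_arm))
    return norm.cdf(ncp - z) + norm.cdf(-ncp - z)

def assurance(prior_mean, prior_sd, sigma, n_per_arm, alpha=0.05, n_sim=100_000, seed=0):
    """Assurance: the power averaged over a normal prior for the true treatment effect."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_sim)
    return power_two_arm(delta, sigma, n_per_arm, alpha).mean()

# Illustrative numbers: outcome sd 1, prior effect N(0.3, 0.2^2), 100 patients per arm.
print(power_two_arm(0.3, 1.0, 100))    # conditional power at the assumed effect
print(assurance(0.3, 0.2, 1.0, 100))   # unconditional probability of a significant result
```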
