Similar Documents
Found 20 similar documents (search time: 93 ms)
1.
Berkson (1980) conjectured that minimum χ² was a procedure superior to maximum likelihood, especially with regard to mean squared error. To explore his conjecture, we analyze his (1955) bioassay problem related to logistic regression. We consider not only the criterion of mean squared error for the comparison of these estimators, but also alternative criteria such as concentration functions and Pitman's measure of closeness. The choice of these latter criteria is motivated by Rao's (1981) considerations of the shortcomings of mean squared error. We also include several Rao-Blackwellized versions of the minimum logit χ² estimator for the purpose of these comparisons.

2.
When the primary outcome is hard to collect, a surrogate endpoint is typically used as a substitute. However, even when a treatment has a positive average causal effect (ACE) on a surrogate endpoint, which also has a positive ACE on the primary outcome, it is still possible that the treatment has a negative ACE on the primary outcome. Such a phenomenon is called the surrogate paradox and greatly challenges the use of surrogates. In this paper, we provide criteria to exclude the surrogate paradox. Our criteria are optimal in the sense that they are sufficient and “almost necessary” to exclude the paradox: If the conditions are satisfied, the surrogate paradox is guaranteed to be absent, whereas if the conditions fail, there exists a data-generating process with surrogate paradox that can generate the same observed data. That is, our criteria capture all the observed information to exclude the surrogate paradox.

3.
Fisher's linear discriminant function can be used to classify an individual who has been sampled from one of two multivariate normal populations. In the following, this function is viewed as the posterior log-odds that the individual came from one population rather than the other, given his data vector; it is assumed that the population means and common covariance matrix are unknown. The vector of discriminant coefficients β (p×1) is the gradient of the posterior log-odds, and certain of its linear functions are directional derivatives which have a practical meaning. Accordingly, we treat the problem of estimating several linear functions of β. The usual estimators of these functions are scaled versions of the unbiased estimators. In this paper, these estimators are dominated by explicit alternatives under a quadratic loss function. We reduce the problem of estimating β to that of estimating the inverse covariance matrix.

4.
Simpson's Paradox in Contingency Table Analysis   (Total citations: 1; self-citations: 0; citations by others: 1)
For categorical data, the contingency table is undoubtedly one of the best statistical tools, but contingency table analysis can also give rise to Simpson's paradox. In theory, the paradox can be resolved by changing the experimental design, but most data in social research are observational and cannot be controlled through experiments. Simpson's paradox is therefore less a "paradox" than a reflection of the nonlinear character of categorical data: it is the result of compressing what is "incompressible", and it reflects the difference in statistical information that arises when a contingency table is collapsed from a high dimension to a low one; in essence, it is a dimension-reduction problem in Euclidean space.
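To make the collapsing effect concrete, here is a small numerical sketch in Python (the numbers are hypothetical, not from the paper): a treatment with the higher success rate in every stratum can have the lower rate once the table is collapsed over the stratum dimension.

```python
# Illustrative example of Simpson's paradox in a 2x2x2 contingency table.
# (successes, trials) by stratum; all numbers are hypothetical.
treat = {"mild": (81, 87), "severe": (192, 263)}
control = {"mild": (234, 270), "severe": (55, 80)}

def rate(successes, trials):
    return successes / trials

# Within each stratum the treatment has the higher success rate...
for s in ("mild", "severe"):
    assert rate(*treat[s]) > rate(*control[s])

# ...yet after collapsing over the stratum dimension the ordering
# reverses -- the "incompressibility" the abstract describes.
t_all = rate(sum(v[0] for v in treat.values()), sum(v[1] for v in treat.values()))
c_all = rate(sum(v[0] for v in control.values()), sum(v[1] for v in control.values()))
print(t_all < c_all)  # True
```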

5.
"This paper examines attempts to collect data on a politically controversial topic, race and ethnicity, in the British Census of Population in the post-war period. It discusses an indirect, proxy method of inferring race or ethnicity by asking for the country of birth of the respondent and of his parents, and a direct question where the respondent is asked to identify his racial or ethnic group. Different versions of the direct question are examined, as is the 1979 Census test, which resulted in considerable public resistance to the question. Following the exclusion of the direct question from the 1981 Census, the subject was reviewed by the Parliamentary Home Affairs Committee, the results of whose report--including practical suggestions as to question wording--are discussed."

6.
Noting that several rule discovery algorithms in data mining can produce a large number of irrelevant or obvious rules from data, there has been substantial research in data mining that addressed the issue of what makes rules truly 'interesting'. This resulted in the development of a number of interestingness measures and algorithms that find all interesting rules from data. However, these approaches have the drawback that many of the discovered rules, while supposed to be interesting by definition, may actually (1) be obvious in that they logically follow from other discovered rules or (2) be expected given some of the other discovered rules and some simple distributional assumptions. In this paper we argue that this is a paradox since rules that are supposed to be interesting, in reality are uninteresting for the above reason. We show that this paradox exists for various popular interestingness measures and present an abstract characterization of an approach to alleviate the paradox. We finally discuss existing work in data mining that addresses this issue and show how these approaches can be viewed with respect to the characterization presented here.

7.
In this paper, we consider a constructive representation of skewed distributions proposed by Ferreira and Steel (J Am Stat Assoc 101:823–829, 2006) and present its basic properties. We study five versions of skew-normal distributions in this general setting. An appropriate empirical model for a skewed distribution is introduced. In the data analysis, we compare this empirical model with the other four versions of skew-normal distributions via some reasonable criteria. It is shown that the proposed empirical model has a better fit for density estimation.

8.
Simpson's paradox is a challenging topic to teach in an introductory statistics course. To motivate students to understand this paradox both intuitively and statistically, this article introduces several new ways to teach Simpson's paradox. We design a paper toss activity between instructors and students in class to engage students in the learning process. We show that Simpson's paradox widely exists in basketball statistics, and thus instructors may consider looking for Simpson's paradox in their own school basketball teams as examples to motivate students’ interest. A new probabilistic explanation of Simpson's paradox is provided, which helps foster students’ statistical understanding. Supplementary materials for this article are available online.

9.
Summary.  When a treatment has a positive average causal effect (ACE) on an intermediate variable or surrogate end point which in turn has a positive ACE on a true end point, the treatment may have a negative ACE on the true end point due to the presence of unobserved confounders, which is called the surrogate paradox. A criterion for surrogate end points based on ACEs has recently been proposed to avoid the surrogate paradox. For a continuous or ordinal discrete end point, the distributional causal effect (DCE) may be a more appropriate measure for a causal effect than the ACE. We discuss criteria for surrogate end points based on DCEs. We show that commonly used models, such as generalized linear models and Cox's proportional hazard models, can make the sign of the DCE of the treatment on the true end point determinable by the sign of the DCE of the treatment on the surrogate even if the models include unobserved confounders. Furthermore, for a general distribution without any assumption of parametric models, we give a sufficient condition for a distributionally consistent surrogate and prove that it is almost necessary.

10.
Drug-combination studies have become increasingly popular in oncology. One of the critical concerns in phase I drug-combination trials is the uncertainty in toxicity evaluation. Most of the existing phase I designs aim to identify the maximum tolerated dose (MTD) by reducing the two-dimensional searching space to one dimension via a prespecified model or splitting the two-dimensional space into multiple one-dimensional subspaces based on the partially known toxicity order. Nevertheless, both strategies often lead to complicated trials which may either be sensitive to model assumptions or induce longer trial durations due to subtrial split. We develop two versions of dynamic ordering design (DOD) for dose finding in drug-combination trials, where the dose-finding problem is cast in the Bayesian model selection framework. The toxicity order of dose combinations is continuously updated via a two-dimensional pool-adjacent-violators algorithm, and then the dose assignment for each incoming cohort is selected based on the optimal model under the dynamic toxicity order. We conduct extensive simulation studies to evaluate the performance of DOD in comparison with four other commonly used designs under various scenarios. Simulation results show that the two versions of DOD possess competitive performances in terms of correct MTD selection as well as safety, and we apply both versions of DOD to two real oncology trials for illustration.
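The DOD design updates the toxicity order with a two-dimensional pool-adjacent-violators step. As a flavor of that idea, here is a sketch of the standard one-dimensional pool-adjacent-violators algorithm (PAVA) for isotonic regression; the two-dimensional version used in the paper is more involved, and this code is purely illustrative.

```python
# One-dimensional pool-adjacent-violators algorithm (PAVA): the
# least-squares non-decreasing fit to a sequence of estimates,
# e.g. raw toxicity-rate estimates across increasing doses.
def pava(y, w=None):
    """Weighted least-squares isotonic (non-decreasing) fit to y."""
    n = len(y)
    w = w or [1.0] * n
    # Maintain blocks of (mean, weight, size); merge adjacent blocks
    # while the monotonicity constraint is violated.
    means, weights, sizes = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); sizes.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, s2 = means.pop(), weights.pop(), sizes.pop()
            m1, w1, s1 = means.pop(), weights.pop(), sizes.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt); sizes.append(s1 + s2)
    # Expand the pooled block means back to one value per input point.
    fit = []
    for m, s in zip(means, sizes):
        fit.extend([m] * s)
    return fit

print(pava([0.1, 0.3, 0.2, 0.6, 0.4]))  # a non-decreasing smoothing
```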

11.
In this paper, we review and propose several interval estimators for estimating the difference of means of two skewed populations. The estimators include the ordinary t, two versions proposed by Welch [17] and Satterthwaite [15], three versions proposed by Zhou and Dinh [18], Johnson [9], Hall [8], empirical likelihood (EL), a bootstrap version of EL, the median t proposed by Baklizi and Kibria [2], and a bootstrap version of the median t. A Monte Carlo simulation study has been conducted to compare the performance of the proposed interval estimators. Some real-life health-related data are considered to illustrate the application of the paper. Based on our findings, some potentially good interval estimators for estimating the mean difference of two populations are recommended for researchers.

12.
The Bertrand paradox is that, whereas we can define in a unique way a point uniformly at random in the interior of a circle, uniformly random chords can be given a variety of competing specifications. This is generalized to spheres, and the distributions of the uniformly random line sections (chords) and plane sections (disks) are tabulated. This includes the large class which are constructed as uniformly random chords of uniformly random disk sections.
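The ambiguity described above is easy to see numerically. The following Monte Carlo sketch (illustrative, not from the paper) compares three standard constructions of a "uniformly random chord" of the unit circle and estimates, for each, the probability that the chord is longer than √3, the side of the inscribed equilateral triangle; the three constructions give three different answers (1/3, 1/2, and 1/4).

```python
# Three classical "uniformly random chord" constructions for the unit
# circle, each returning one simulated chord length.
import math
import random

random.seed(0)
N = 200_000
L = math.sqrt(3)  # side of the inscribed equilateral triangle

def endpoints():
    # Two independent uniform points on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def radial_midpoint():
    # Chord midpoint at a uniform distance along a random radius.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def disk_midpoint():
    # Chord midpoint uniform in the disk (r = sqrt(U) gives uniformity).
    r = math.sqrt(random.uniform(0, 1))
    return 2 * math.sqrt(1 - r * r)

for f, expected in [(endpoints, 1/3), (radial_midpoint, 1/2), (disk_midpoint, 1/4)]:
    p = sum(f() > L for _ in range(N)) / N
    print(f"{f.__name__}: {p:.3f} (theory {expected:.3f})")
```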

13.
The paper develops a general framework for the formulation of generic uniform laws of large numbers. In particular, we introduce a basic generic uniform law of large numbers that contains recent uniform laws of large numbers by Andrews [2] and Hoadley [9] as special cases. We also develop a truncation approach that makes it possible to obtain uniform laws of large numbers for the functions under consideration from uniform laws of large numbers for truncated versions of those functions. The point of the truncation approach is that uniform laws of large numbers for the truncated versions are typically easier to obtain. By combining the basic uniform law of large numbers and the truncation approach we also derive generalizations of recent uniform laws of large numbers introduced in Pötscher and Prucha [15, 16].

14.
Financial deepening in China's western region does not seem to have effectively promoted economic growth there, and empirical analysis using cointegration theory, an error-correction model, and Granger causality tests confirms that a "paradox" indeed exists between financial deepening and economic growth in the west. The paradox arises because financial deepening in the western region has failed to accumulate capital effectively and because investment efficiency in the region is low. Only by actively advancing reform of the western financial system, improving the operational efficiency of the western financial industry, increasing fiscal and financial support for the western region, upgrading its industrial structure, and vigorously developing the non-state economy can financial deepening effectively promote economic growth in the region.

15.
This paper is concerned with the well-known Jeffreys–Lindley paradox. In a Bayesian setup, the so-called paradox arises when a point null hypothesis is tested and an objective prior is sought for the alternative hypothesis. In particular, the posterior probability of the null hypothesis tends to one when the uncertainty, i.e., the prior variance, for the parameter value goes to infinity. We argue that the appropriate way to deal with the paradox is to use simple mathematics, and that any philosophical argument is to be regarded as irrelevant.
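The limiting behaviour can be reproduced with a few lines of arithmetic. In this sketch (with illustrative assumptions not taken from the paper: a normal mean with known variance, H1: μ ~ N(0, τ²), equal prior odds), the posterior probability of the point null climbs toward one as the prior variance τ² grows, even though the z-statistic is held fixed at a conventionally significant 2.5.

```python
# Jeffreys-Lindley paradox, numerically: P(H0 | data) for a point null
# on a normal mean, as the prior variance of the alternative grows.
import math

def post_null(z, n, tau2, sigma2=1.0, prior_null=0.5):
    """P(H0 | data) for H0: mu = 0 vs H1: mu ~ N(0, tau2),
    where the observed mean is xbar = z * sigma / sqrt(n)."""
    s2 = sigma2 / n                      # sampling variance of xbar
    xbar = z * math.sqrt(s2)
    # Marginal (normal) densities of xbar under H0 and H1.
    m0 = math.exp(-xbar**2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    v1 = s2 + tau2
    m1 = math.exp(-xbar**2 / (2 * v1)) / math.sqrt(2 * math.pi * v1)
    return prior_null * m0 / (prior_null * m0 + (1 - prior_null) * m1)

# z = 2.5 rejects H0 at the 5% level, yet the posterior probability
# of H0 increases toward one as tau2 grows.
for tau2 in (1.0, 1e2, 1e4, 1e6):
    print(tau2, round(post_null(z=2.5, n=100, tau2=tau2), 4))
```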

16.
The Stein effect, that one could improve frequentist risk by combining “independent” problems, has long been an intriguing paradox in statistics. We briefly review the Bayesian view of the paradox, and indicate that previous justifications of the Stein effect, through concerns of “Bayesian robustness,” were misleading. In the course of doing so, several existing robust Bayesian and Stein-effect estimators are compared for a variety of situations.

17.
The author shows how geostatistical data that contain measurement errors can be analyzed objectively by a Bayesian approach using Gaussian random fields. He proposes a reference prior and two versions of Jeffreys' prior for the model parameters. He studies the propriety and the existence of moments for the resulting posteriors. He also establishes the existence of the mean and variance of the predictive distributions based on these default priors. His reference prior derives from a representation of the integrated likelihood that is particularly convenient for computation and analysis. He further shows that these default priors are not very sensitive to some aspects of the design and model, and that they have good frequentist properties. Finally, he uses a data set of carbon/nitrogen ratios from an agricultural field to illustrate his approach.

18.
For the most common one-sample and two-sample tests in the gamma distribution we derive the log likelihood ratio tests and the improved versions obtained by a Bartlett adjustment. For most of these tests an exact test exists and we give the saddlepoint approximation to the latter. The tests are compared with previously published tests and a small simulation study is included.

19.
After excluding model misuse, the phenomenon in economic research whereby different econometric models applied to the same question yield different, or even opposite, conclusions is termed the "paradox of econometric model diversity". The causes of this problem are analyzed: chiefly the diversity of key variables, the diversity of functional-form specifications, and the diversity of econometric models. Meta-analysis is proposed as a solution, and it is pointed out that, when applying meta-analysis, one should strive for the accuracy of each individual model, employ as many appropriate models as possible, and note that only results of the same kind can be pooled in a meta-analysis. New problems introduced by meta-analysis are also discussed, such as higher demands on the research team's econometric expertise, increased cost, longer research time, and longer papers; these, however, are normal phenomena in the development of econometrics and do not undermine the scientific validity of using meta-analysis to resolve the "paradox of econometric model diversity".

20.

We address the testing problem of proportional hazards in the two-sample survival setting allowing right censoring, i.e., we check whether the famous Cox model is underlying. Although there are many test proposals for this problem, only a few papers suggest how to improve the performance for small sample sizes. In this paper, we do exactly this by carrying out our test as a permutation as well as a wild bootstrap test. The asymptotic properties of our test, namely asymptotic exactness under the null and consistency, can be transferred to both resampling versions. Various simulations for small sample sizes reveal an actual improvement of the empirical size and a reasonable power performance when using the resampling versions. Moreover, the resampling tests perform better than the existing tests of Gill and Schumacher and of Grambsch and Therneau. The tests' practical applicability is illustrated by discussing real data examples.


