Similar Articles
20 similar articles found (search time: 10 ms)
1.
A militarized interstate dispute (MID) involves military conflict between states with diplomatic ties and exists because two or more states have failed to resolve their differences through diplomatic channels. Jones et al. (1996) characterize an MID as the threat, display or use of military force short of war. They analyze over 2000 disputes spanning two centuries across the globe and conclude that disputes tend to be persistent once established. In this paper, I find that the passage of time can be a favorable factor in dispute resolution, and thus historical mechanisms for dispute resolution favor ending, not extending, militarized disputes. I emphasize the use of non-parametric procedures first to estimate the hazard function and then to estimate the benefits of negotiated settlements.
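As a rough illustration of the non-parametric hazard-estimation step mentioned above, here is a minimal Nelson–Aalen cumulative-hazard sketch. The dispute durations and censoring flags below are invented for illustration, not the Jones et al. data:

```python
# Nelson-Aalen estimate of the cumulative hazard from (duration, event) pairs.
# The data below are made-up dispute durations, purely for illustration.
def nelson_aalen(durations, events):
    """Return the observed ending times and the cumulative hazard at each."""
    # Sort by time; at ties, process endings before censored observations.
    pairs = sorted(zip(durations, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(pairs)
    times, cum_hazard = [], []
    h = 0.0
    for t, ended in pairs:
        if ended:                 # an observed dispute ending
            h += 1.0 / n_at_risk  # hazard increment: events / number at risk
            times.append(t)
            cum_hazard.append(h)
        n_at_risk -= 1            # one fewer dispute still at risk
    return times, cum_hazard

durations = [2, 3, 3, 5, 8, 12, 15]  # years until resolution (or censoring)
events    = [1, 1, 0, 1, 1, 0, 1]    # 1 = resolved, 0 = still ongoing
times, H = nelson_aalen(durations, events)
```

A flat cumulative hazard over long durations would indicate persistence; increments that stay large at long durations would favor the paper's resolution-over-time finding.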

2.
《Significance》2004,1(3):121-121
At my first successful interview for a lecturing post, a learned consultant asked: "What is the purpose of teaching medical students statistics?" He obviously doubted the necessity, but had to sit on the appointments panel since the medical faculty had stumped up the money. I replied rather sanctimoniously: "So that patients receive better care." It may not have been Descartes, but it got me the job, and I have never stopped believing it.

3.
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach, combined with both a non-informative and an informative prior, to provide a random sample from the posterior distribution of the median of the unknown distribution. I first demonstrate the approach on a within-patient bioequivalence design and then show how to extend it to a parallel group design.
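To make the substitution-likelihood idea concrete, here is a toy sketch for the median of an unknown distribution (simulated data, my own illustration rather than the paper's bioequivalence example). If a candidate value m is the true median, the count of observations below m is Binomial(n, 1/2), which gives an approximate likelihood to combine with a prior:

```python
import math
import random

def substitution_loglik(m, data):
    """Jeffreys' substitution log-likelihood for the median: if m is the
    true median, the count of observations below m is Binomial(n, 1/2)."""
    n = len(data)
    k = sum(x < m for x in data)
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1) + n * math.log(0.5))

random.seed(1)
data = [random.lognormvariate(0.0, 1.0) for _ in range(50)]

# Grid approximation to the posterior of the median under a flat prior.
grid = [0.02 * i for i in range(1, 201)]   # candidate medians 0.02 .. 4.0
weights = [math.exp(substitution_loglik(m, data)) for m in grid]
total = sum(weights)
posterior = [w / total for w in weights]

# Approximate random sample from the posterior of the median.
sample = random.choices(grid, weights=posterior, k=1000)
```

An informative prior would simply replace the flat weights before normalizing; the posterior mass concentrates around the sample median, as the substitution likelihood intends.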

4.
This study examines whether real interest rates exhibit changes in persistence for a panel of Organisation for Economic Co-operation and Development (OECD) countries. The findings show that for long-term real interest rates there are changes in persistence from I(0) to I(1). For short-term real interest rates, the results display no changes in persistence, while under cross-sectional dependence there is only weak evidence of changes in persistence from I(1) to I(0). The evidence of changes in persistence when the direction is treated as unknown is even weaker.

5.
Anthony Edwards wrote this cautionary tale for genetics students at Stanford University whom he was teaching in 1965. It has not previously been published. "Its appearance now is due to my having been asked whether a copy from the papers of the Nobel Laureate Joshua Lederberg might be put on the web by the US National Library of Medicine", he says. "Lederberg was Professor of Genetics at Stanford at the time and I must have given him a copy. More remarkably, he thought it worth keeping." It concerns what is known as Simpson's paradox.

6.
ABSTRACT

In this article I review six textbooks commonly set in university undergraduate nonparametric statistics courses. The books are evaluated in terms of how key statistical concepts are presented, their use of software, their exercises, and their location on a theory–applications axis and an algorithms–principles axis. The placement of books on these axes provides a novel guide for instructors looking for the book that best fits their approach to teaching nonparametric statistics.

7.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than do the comparators, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.

8.
Three situations are cited when caution is needed in using statistical computing packages: (a) when analyzing data and having insufficient statistical knowledge to completely understand the output; (b) when teaching the use of packages in a statistics course, to the exclusion of teaching statistics; and (c) when using packages in subject-matter teaching, without teaching the statistical methods underlying the packages.

9.
Multivariate control charts are powerful and simple visual tools for monitoring the quality of a process. This multivariate monitoring is carried out by considering simultaneously several correlated quality characteristics and by determining whether these characteristics are in control or out of control. In this paper, we propose a robust methodology using multivariate quality control charts for subgroups based on generalized Birnbaum–Saunders distributions and an adapted Hotelling statistic. This methodology is constructed for Phases I and II of control charts. We estimate the corresponding parameters with the maximum likelihood method and use parametric bootstrapping to obtain the distribution of the adapted Hotelling statistic. In addition, we consider the Mahalanobis distance to detect multivariate outliers and use it to assess the adequacy of the distributional assumption. A Monte Carlo simulation study is conducted to evaluate the proposed methodology and to compare it with a standard methodology. This study reports the good performance of our methodology. An illustration with real-world air quality data of Santiago, Chile, is provided. This illustration shows that the methodology is useful for providing early alerts of episodes of extreme air pollution, thus preventing adverse effects on human health.
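For orientation, the classical known-parameter Hotelling statistic underlying such charts can be sketched as follows. The in-control mean vector and covariance matrix below are made-up values, and the paper's adapted statistic with Birnbaum–Saunders distributions and bootstrapped control limits is more involved than this standard form:

```python
import numpy as np

def hotelling_t2(subgroup, mu0, sigma0):
    """Classical Hotelling T^2 for a subgroup mean against an in-control
    mean vector mu0 and covariance matrix sigma0 (known-parameter form)."""
    x = np.asarray(subgroup, dtype=float)
    n = x.shape[0]
    d = x.mean(axis=0) - mu0              # deviation of subgroup mean
    return float(n * d @ np.linalg.inv(sigma0) @ d)

mu0 = np.array([10.0, 5.0])                   # made-up in-control mean
sigma0 = np.array([[1.0, 0.3], [0.3, 0.5]])   # made-up in-control covariance

# Subgroup whose mean equals mu0 exactly, then the same subgroup shifted.
in_control = np.array([[10.2, 5.10], [9.8, 4.90], [10.1, 5.00],
                       [9.9, 5.05], [10.0, 4.95]])
t2_in = hotelling_t2(in_control, mu0, sigma0)
t2_out = hotelling_t2(in_control + [3.0, 0.0], mu0, sigma0)
```

A point is signaled as out of control when its T² exceeds the chart's upper control limit; in the paper that limit comes from the bootstrapped distribution of the adapted statistic rather than the usual chi-squared/F reference.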

10.
In this paper, a generalization of the two-parameter partial credit model (2PL-PCM), and of two special cases, the partial credit model (PCM) and the rating scale model (RSM), is presented for hierarchically structured data. After showing how the 2PL-PCM, like other item response theory (IRT) models, may be read as a generalized linear mixed model (GLMM) with two aggregation levels, I present an extension to measuring the latent trait of individuals aggregated in groups. The use of this multilevel IRT model is illustrated with the evaluation of university teaching by the students following the courses. The aim is to rank courses on the basis of student satisfaction, so as to give teachers, and those responsible for organizing study programmes, information that takes into account the opinions of the direct target group of university teaching (that is, the students), in the context of improving the courses on offer.

11.
China is currently promoting the development strategy of "governing the country by law and by virtue", so integrating moral education into the teaching of every discipline has great practical significance. Focusing on how to integrate moral education into statistics teaching, this paper sets out the feasibility and necessity of doing so, describes the main channels for such integration, and discusses the problems that urgently need to be solved.

12.
Non-proportional hazards (NPH) have been observed in many immuno-oncology clinical trials. Weighted log-rank tests (WLRT) with suitable weights can be used to improve the power of detecting the difference between survival curves in the presence of NPH. However, it is not easy to choose a proper WLRT in practice. A versatile max-combo test was proposed to balance robustness and efficiency, and has received increasing attention recently. Survival trials often warrant interim analyses because of their high cost and long durations. The integration and implementation of max-combo tests in interim analyses often require extensive simulation studies. In this report, we propose a simulation-free approach for group sequential designs with the max-combo test in survival trials. Simulation results show that the proposed method successfully controls the type I error rate and offers excellent accuracy and flexibility in estimating sample sizes, with a light computational burden. Notably, our method is strongly robust to various model misspecifications and has been implemented in an R package.

13.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence of the validity of a surrogate endpoint. The method involves calculation of a confidence interval for the ratio. In this paper, I use computer simulations to compare the performance of Fieller's method with that of the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method in coverage properties and in statistical power of the validation test. In addition, the formula for predicting statistical power seems to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of using surrogate endpoints in clinical trials; however, it is likely to be a useful tool in epidemiology for identifying intermediate endpoints.
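The two interval constructions being compared can be sketched directly from a point estimate pair and its covariance. The numbers below are illustrative (with the high numerator–denominator correlation the abstract mentions), not Freedman's validation data:

```python
import math

def delta_ci(a, b, v11, v22, v12, z=1.96):
    """Delta-method CI for the ratio a/b given variances and covariance."""
    r = a / b
    se = math.sqrt(v11 - 2 * r * v12 + r * r * v22) / abs(b)
    return r - z * se, r + z * se

def fieller_ci(a, b, v11, v22, v12, z=1.96):
    """Fieller CI: the t satisfying (a - t*b)^2 <= z^2 (v11 - 2 t v12 + t^2 v22),
    i.e. the roots of a quadratic in t."""
    A = b * b - z * z * v22
    B = -2 * (a * b - z * z * v12)
    C = a * a - z * z * v11
    disc = B * B - 4 * A * C
    if A <= 0 or disc < 0:
        return None        # interval is unbounded or empty; report failure
    root = math.sqrt(disc)
    return (-B - root) / (2 * A), (-B + root) / (2 * A)

# Illustrative estimates: ratio estimate 0.8, strongly correlated components.
a, b, v11, v22, v12 = 0.8, 1.0, 0.01, 0.01, 0.008
d_lo, d_hi = delta_ci(a, b, v11, v22, v12)
f_lo, f_hi = fieller_ci(a, b, v11, v22, v12)
```

With these numbers the two intervals nearly coincide; the methods diverge (and Fieller's exactness matters) when the denominator is noisy relative to its size.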

14.
In this paper we evaluate the performance of three methods for testing the existence of a unit root in a time series when the models under the null hypothesis display no autocorrelation in the error term. In such cases, simple versions of the Dickey–Fuller test are more appropriate than the well-known augmented Dickey–Fuller or Phillips–Perron tests. Through Monte Carlo simulations we show that, apart from a few cases, the actual type I error and power of the unit-root tests are very close to their nominal levels. Additionally, when the random walk null hypothesis is true, gradually increasing the sample size we observe that the p-values for the drift in the unrestricted model fluctuate at low levels with small variance, and the Durbin–Watson (DW) statistic approaches 2 in both the unrestricted and restricted models. If, however, the null hypothesis of a random walk is false, then with larger samples the DW statistic in the restricted model starts to deviate from 2, while in the unrestricted model it continues to approach 2. It is also shown that the probability of not rejecting the hypothesis that the errors are uncorrelated, when they are indeed uncorrelated, is higher when the DW test is applied at the 1% nominal level of significance.
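A toy version of this kind of simulation exercise, checking the slope and DW behaviour of the Dickey–Fuller regression under a true random walk (pure-Python OLS, not the authors' code):

```python
import random

def ols_ar1(y):
    """Regress y_t on a constant and y_{t-1}; return intercept, slope, residuals."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = mz - slope * mx
    resid = [zi - intercept - slope * xi for xi, zi in zip(x, z)]
    return intercept, slope, resid

def durbin_watson(resid):
    """DW statistic: near 2 when residuals are uncorrelated."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    return num / sum(e * e for e in resid)

random.seed(0)
y = [0.0]
for _ in range(2000):          # simulate a random walk: the unit root is true
    y.append(y[-1] + random.gauss(0, 1))

_, slope, resid = ols_ar1(y)
dw = durbin_watson(resid)
```

Under the random walk null the fitted slope sits just below 1 and DW stays near 2, matching the large-sample behaviour the abstract describes for the unrestricted model.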

15.
Abstract

Profile monitoring is applied when the quality of a product or a process can be determined by the relationship between a response variable and one or more independent variables. In most Phase II monitoring approaches, it is assumed that the process parameters are known. However, this assumption is not valid in many real-world applications. In fact, the process parameters should be estimated based on the in-control Phase I samples. In this study, the effect of parameter estimation on the performance of four Phase II control charts for monitoring multivariate multiple linear profiles is evaluated. In addition, since the accuracy of the parameter estimation has a significant impact on the performance of Phase II control charts, a new cluster-based approach is developed to address this effect. Moreover, we evaluate and compare the performance of the proposed approach with a previous approach in terms of two metrics, the average of the average run length and its standard deviation, which account for practitioner-to-practitioner variability. In the proposed approach, it is not necessary to know the distribution of the chart statistic. Therefore, in addition to ease of use, the proposed approach can be applied to other types of profiles. The superior performance of the proposed method compared to the competing one is shown in terms of both metrics. Based on the results obtained, our method yields less biased, smaller-variance Phase I estimates than the competing approach.

16.
In a variety of settings, it is desirable to display a collection of likelihoods over a common interval. One approach is simply to superimpose the likelihood curves. However, where there are more than a handful of curves, such displays are extremely difficult to decipher. An alternative is to display a point estimate with a confidence interval, corresponding to each likelihood. However, these may be inadequate when the likelihood is not approximately normal, as can occur with small sample sizes or nonlinear models. A second dimension is needed to gauge the relative plausibility of different parameter values. We introduce the raindrop plot, a shaded figure over the range of parameter values having log-likelihood greater than some cutoff, with height varying proportional to the difference between the log-likelihood and the cutoff. In the case of a normal likelihood, this produces a reflected parabola so that deviations from normality can be easily detected. An analogue of the raindrop plot can also be used to display estimated random effect distributions, posterior distributions, and predictive distributions.
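The raindrop construction is easy to sketch numerically: the shape's height at each parameter value is the log-likelihood's excess over the cutoff, truncated at zero. For a normal log-likelihood this gives the reflected parabola described above (the estimate, standard error, and cutoff here are arbitrary choices, not values from the paper):

```python
def raindrop_heights(grid, loglik, cutoff):
    """Height of the raindrop at each grid point: the log-likelihood's
    excess over the cutoff where positive, and zero elsewhere."""
    return [max(loglik(t) - cutoff, 0.0) for t in grid]

est, se = 1.0, 0.25          # arbitrary normal likelihood: mean and SE

def loglik(t):
    """Normal log-likelihood (up to a constant) centered at est."""
    return -0.5 * ((t - est) / se) ** 2

cutoff = -2.0                # drop everything more than 2 log-units below the max
grid = [est - 1.5 + 0.01 * i for i in range(301)]
h = raindrop_heights(grid, loglik, cutoff)
```

Plotting `h` mirrored about the axis over `grid` yields one raindrop; stacking such shapes for many likelihoods gives the display, and any asymmetry in a shape flags a non-normal likelihood at a glance.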

17.
Students of statistics should be taught the ideas and methods that are widely used in practice and that will help them understand the world of statistics. Today, this means teaching them about Bayesian methods. In this article, I present ideas on teaching an undergraduate Bayesian course that uses Markov chain Monte Carlo and that can be a second course or, for strong students, a first course in statistics.

18.
In distance discriminant theory, the distance between classes is usually defined as the distance between class centroids when classifying new samples. If a new sample is to be assigned to one of the classes produced by hierarchical clustering, applying the centroid-distance discriminant rule can cause misclassification, because the between-class distance it uses may be inconsistent with the one used in the original clustering. This paper supplements the distance discriminant method for classifications based on hierarchical clustering by introducing the eight definitions of between-class distance from hierarchical clustering into the discriminant method, so that the between-class distance used in discrimination agrees with that used in clustering. An example analysis demonstrates that this improves the reliability of distance discrimination.

19.
Compared with most of the existing phase I designs, the recently proposed calibration-free odds (CFO) design has been demonstrated to be robust, model-free, and easy to use in practice. However, the original CFO design cannot handle late-onset toxicities, which are commonly encountered in phase I oncology dose-finding trials with targeted agents or immunotherapies. To account for late-onset outcomes, we extend the CFO design to its time-to-event (TITE) version, which inherits the calibration-free and model-free properties. One salient feature of CFO-type designs is to adopt game theory by competing three doses at a time, including the current dose and the two neighboring doses, while interval-based designs use only the data at the current dose and are thus less efficient. We conduct comprehensive numerical studies for the TITE-CFO design under both fixed and randomly generated scenarios. TITE-CFO shows robust and efficient performance compared with interval-based and model-based counterparts. In conclusion, the TITE-CFO design provides a robust, efficient, and easy-to-use alternative for phase I trials when the toxicity outcome is late-onset.

20.
In recent years the focus of research in survey sampling has changed to include a number of nontraditional topics such as nonsampling errors. In addition, the availability of data from large-scale sample surveys, along with computers and software to analyze the data, has changed the tools needed by survey sampling statisticians. It has also produced a diverse group of secondary data users who wish to learn how to analyze data from a complex survey. Thus it is time to reassess what we should be teaching students about survey sampling. This article brings together a panel of experts on survey sampling and teaching to discuss their views on what should be taught in survey sampling classes and how it should be taught.

