Similar Articles
20 similar articles found.
1.
The time it takes to recruit patients into a clinical trial has a major impact on whether a drug development programme completes on time. Here Byron Jones explains how simple statistical models can be very useful in predicting the time to complete recruitment.
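The kind of simple model Jones describes can be sketched as a Monte Carlo exercise: if recruitment across all centres is pooled into one Poisson process, the time to reach a target enrolment is a sum of exponential waiting gaps. All numbers below (centre count, rates, target) are hypothetical.

```python
import random
import statistics

def days_to_recruit(target, total_rate, rng):
    """Days until `target` patients arrive, modelling recruitment across
    all centres as a pooled Poisson process with rate `total_rate`
    patients/day: the waiting time is a sum of exponential gaps."""
    return sum(rng.expovariate(total_rate) for _ in range(target))

# Hypothetical trial: 20 centres each recruiting 0.25 patients/day on
# average, with a 300-patient target.
rng = random.Random(7)
sims = [days_to_recruit(300, 20 * 0.25, rng) for _ in range(2000)]
print(round(statistics.fmean(sims), 1))  # close to 300 / 5 = 60 days
```

The simulated distribution also gives prediction intervals for the completion date, which is typically what trial planners need.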

2.
The 2016 statement by the American Statistical Association on the use and misuse of p-values was a timely assertion that statistical concepts should be properly used in science. Some researchers, especially economists, who adopt significance testing and p-values to report their results, may feel confused by the statement, leading to misinterpretations. In this study, we aim to re-examine the accuracy of the p-value and introduce an alternative way of testing hypotheses. We conduct a simulation study to investigate the reliability of the p-value. Apart from investigating the performance of the p-value, we also introduce some existing approaches, minimum Bayes factors and belief functions, as replacements for the p-value. Results from the simulation study confirm that the p-value is unreliable in some cases and that the proposed approaches appear useful as substitute tools for statistical inference. Moreover, our results show that the plausibility approach is more accurate for making decisions about the null hypothesis than the traditionally used p-value when the null hypothesis is true. However, the minimum Bayes factors of Edwards et al. [Bayesian statistical inference for psychological research. Psychol. Rev. 70(3) (1963), pp. 193–242], Vovk [A logic of probability, with application to the foundations of statistics. J. Royal Statistical Soc. Series B (Methodological) 55 (1993), pp. 317–351] and Sellke et al. [Calibration of p values for testing precise null hypotheses. Am. Stat. 55(1) (2001), pp. 62–71] provide more reliable results compared to all other methods when the null hypothesis is false.

KEYWORDS: ban of p-value, minimum Bayes factors, belief functions
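As a concrete illustration of one calibration the abstract cites, the minimum Bayes factor of Sellke et al. (2001) maps a p-value to a lower bound on the Bayes factor in favour of the null; a minimal sketch:

```python
import math

def sellke_mbf(p):
    """Minimum Bayes factor calibration of Sellke et al. (2001):
    MBF(p) = -e * p * ln(p) for p < 1/e, and 1 otherwise.
    Smaller values indicate stronger evidence against the null."""
    if p >= 1 / math.e:
        return 1.0
    return -math.e * p * math.log(p)

# A p-value of 0.05 maps to a minimum Bayes factor of roughly 0.41:
# much weaker evidence against the null than "1 in 20" suggests.
print(round(sellke_mbf(0.05), 3))
```

This is the standard gap between a p-value and the evidential strength it is often taken to convey, which motivates the substitutes studied in the paper.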

3.
4.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, these may be fitted easily as a weighted Poisson regression. Point processes lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are a function of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models, and when fitted via penalised likelihood, conditions for the LASSO penalty that ensure consistency in estimation and the oracle property.  We discuss challenges extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
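The proposed criterion can be sketched as the usual BIC formula with the observed point count m replacing the classical sample size n; the function name and numbers below are illustrative, not from the paper.

```python
import math

def point_process_bic(log_lik, n_params, n_points):
    """BIC for a fitted Poisson point process model, with the observed
    number of point events m playing the role of the sample size n in
    the classical penalty term k * log(n)."""
    return -2.0 * log_lik + n_params * math.log(n_points)

# Hypothetical fitted model: log-likelihood -150.2, 3 parameters,
# m = 80 observed point events.
print(round(point_process_bic(-150.2, 3, 80), 2))
```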

5.
The purpose of this article is to explain cross-validation and describe its use in regression. Because replicability analyses are not typically employed in studies, this is a topic with which many researchers may not be familiar. As a result, researchers may not understand how to conduct cross-validation in order to evaluate the replicability of their data. This article not only explains the purpose of cross-validation, but also uses the widely available Holzinger and Swineford (1939) dataset as a heuristic example to concretely demonstrate its use. By incorporating multiple tables and examples of SPSS syntax and output, the reader is provided with additional visual examples in order to further clarify the steps involved in conducting cross-validation. A brief discussion of the limitations of cross-validation is also included. After reading this article, the reader should have a clear understanding of cross-validation, including when it is appropriate to use and how it can be used to evaluate replicability in regression.

6.
A robust test of a parameter in the presence of nuisance parameters was proposed by Wang (1981). The test procedure is a robust extension of the optimal C(α) tests. A numerical method for computing the solution of the orthogonality condition required by the test procedure is provided. An example on testing normal scale in the presence of outliers is worked out to illustrate the construction of the robust test.

7.
Have supermarkets driven out the small shopkeeper? Are high-streets dying? Has supermarket power doomed the convenience store? Do local planning decisions restrict high-street competition? Can only the giants survive? Frederick Wheeler explains how the Competition Commission tries to give buyers a choice.

8.
The author examines whether the unexpectedly high number of births recorded in Poland in 1982 and 1983 is evidence of a change in fertility patterns. It is suggested that the increase in the gross reproduction rate that occurred was due to lower standards of living and fewer opportunities to acquire material possessions or travel abroad as an alternative to having children. Some of the increase may also be due to new pro-natalist measures such as prolongation of paid leave of absence for mothers. The author suggests that the increase in fertility is temporary and that fertility will soon decline to its former level.

9.
We contribute to the discussion of an article in which Andrea Cerioli, Marco Riani, Anthony Atkinson and Aldo Corbellini review the advantages of analyzing multivariate data by monitoring how the estimated model parameters change as the estimation parameters vary. The focus is on robust methods and their sensitivity to the nominal efficiency and breakdown point. While congratulating the authors on their clear and stimulating exposition, we contribute to the discussion with an overview of our experience in applying monitoring in our application domain.

10.
11.
These are comments on the invited paper “The power of monitoring: How to make the most of a contaminated multivariate sample” by Andrea Cerioli, Marco Riani, Anthony Atkinson and Aldo Corbellini.

12.
13.
In this paper we investigate the expectation of a ratio of random variables and try to find conditions under which this expectation equals the ratio of the expectations.
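A quick simulation illustrates both the generic inequality and one sufficient condition for equality: if the ratio X/Y is independent of Y (for instance X = Z·Y with Z independent of Y), then E[X/Y] = E[X]/E[Y]. Distributions and sample sizes below are illustrative.

```python
import random
import statistics

rng = random.Random(42)
n = 200_000

# Case 1: X and Y independent. Since 1/y is convex, Jensen's inequality
# makes E[X/Y] = E[X] * E[1/Y] strictly larger than E[X] / E[Y].
ys = [rng.uniform(1, 3) for _ in range(n)]
xs = [rng.uniform(1, 3) for _ in range(n)]
lhs = statistics.fmean(x / y for x, y in zip(xs, ys))
rhs = statistics.fmean(xs) / statistics.fmean(ys)

# Case 2: X = Z * Y with Z independent of Y, so the ratio X/Y is
# independent of Y and E[X/Y] = E[Z] = E[X] / E[Y].
zs = [rng.uniform(1, 3) for _ in range(n)]
xs2 = [z * y for z, y in zip(zs, ys)]
lhs2 = statistics.fmean(x / y for x, y in zip(xs2, ys))
rhs2 = statistics.fmean(xs2) / statistics.fmean(ys)

print(round(lhs, 3), round(rhs, 3))    # clearly unequal
print(round(lhs2, 3), round(rhs2, 3))  # approximately equal
```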

14.
We present a statistical methodology for fitting time-varying rankings, by estimating the strength parameters of the Plackett–Luce multiple comparisons model at regularly spaced times for each ranked item. We use the little-known method of barycentric rational interpolation to interpolate between the strength parameters so that a competitor's strength can be evaluated at any time. We chose the time-varying strengths to evolve deterministically rather than stochastically, a preference that we reason often has merit. There are many statistical and computational problems to overcome in fitting anything beyond ‘toy’ data sets. The methodological innovations here include a method for maximizing a likelihood function with many parameters, approximations for modelling tied data, and an approach to eliminating secular drift of the estimated ‘strengths’. The methodology has obvious applications to fields such as marketing, although we demonstrate our approach by analysing a large data set of golf tournament results, in search of an answer to the question ‘who is the greatest golfer of all time?’
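At any fixed time point, the Plackett–Luce likelihood of a ranking has a simple sequential-choice form, which the paper's fitting procedure maximises over many such terms. A minimal sketch (strengths and items hypothetical):

```python
import math

def plackett_luce_loglik(strengths, ranking):
    """Log-likelihood of one observed ranking under the Plackett-Luce
    model: the item placed k-th is chosen from the not-yet-placed items
    with probability proportional to its strength."""
    ll = 0.0
    remaining = list(ranking)
    for item in ranking:
        ll += math.log(strengths[item] / sum(strengths[i] for i in remaining))
        remaining.remove(item)
    return ll

# Hypothetical strengths for three competitors at one time point.
strengths = {"A": 4.0, "B": 2.0, "C": 1.0}
print(round(plackett_luce_loglik(strengths, ["A", "B", "C"]), 4))
```

Rankings that place stronger items higher receive larger log-likelihoods, which is what drives estimation of the time-varying strengths.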

15.
In this article, we analyze the performance of five estimation methods for the long memory parameter d. The goal of our article is to construct a wavelet estimate for the fractional differencing parameter in nonstationary long memory processes that dominates the well-known estimate of Shimotsu and Phillips (2005). The simulation results show that the wavelet estimation method of Lee (2005), combined with several tapering techniques, performs better in most cases of nonstationary long memory. The comparison is based on the empirical root mean squared error of each estimate.

16.
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the size of the jumps in the price, and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence of a highly active jump process.
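The contrast the paper studies can be seen in a toy simulation: realised variance absorbs the squared size of a price jump, while bipower variation (one of the jump-robust multipower measures) largely ignores it. All tuning numbers below are hypothetical.

```python
import math
import random

def realised_variance(returns):
    """Sum of squared high-frequency returns; converges to the
    integrated variance plus the sum of squared jumps."""
    return sum(r * r for r in returns)

def bipower_variation(returns):
    """Realised bipower variation, (pi/2) * sum |r_i| |r_{i-1}|; a
    multipower measure that is robust to finite-activity jumps."""
    return (math.pi / 2) * sum(abs(a) * abs(b)
                               for a, b in zip(returns[1:], returns[:-1]))

# Hypothetical day: 10,000 Gaussian returns with daily volatility 1%,
# plus a single 5% price jump in the middle of the day.
rng = random.Random(0)
n = 10_000
returns = [rng.gauss(0.0, 0.01 / math.sqrt(n)) for _ in range(n)]
returns[n // 2] += 0.05
rv = realised_variance(returns)  # inflated by the squared jump (~0.0025)
bv = bipower_variation(returns)  # stays near sigma^2 = 1e-4
```

Comparing rv with bv (or their difference) is the basic device behind jump tests built on these measures.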

17.
The Earth is full of life. If life evolved here, why not elsewhere? The Universe is a big place and our galaxy has many stars with planets. So are we alone? What is out there? And how do we know? Mark Burchell looks at the probability of life beyond our planet.

18.
The exact mean-squared error (MSE) of estimators of the variance in nonparametric regression based on quadratic forms is investigated. In particular, two classes of estimators are compared: Hall, Kay and Titterington's optimal difference-based estimators and a class of ordinary difference-based estimators which generalize methods proposed by Rice and Gasser, Sroka and Jennen-Steinmetz. For small sample sizes the MSE of the first estimator is essentially increased by the magnitude of the integrated first two squared derivatives of the regression function. It is shown that in many situations ordinary difference-based estimators are more appropriate for estimating the variance, because they control the bias much better and hence have a much better overall performance. It is also demonstrated that Rice's estimator does not always behave well. Data-driven guidelines are given to select the estimator with the smallest MSE.
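Rice's estimator, the simplest member of the ordinary difference-based class discussed here, can be sketched as follows (synthetic data and numbers are illustrative):

```python
import random

def rice_variance(ys):
    """Rice's first-order difference-based variance estimator for
    nonparametric regression y_i = m(x_i) + e_i: if m is smooth, first
    differences are dominated by the noise, so
    sigma^2 ~ sum (y_{i+1} - y_i)^2 / (2 * (n - 1))."""
    n = len(ys)
    return sum((ys[i + 1] - ys[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

# Smooth trend plus Gaussian noise with standard deviation 0.1.
rng = random.Random(3)
n = 5_000
ys = [(i / n) ** 2 + rng.gauss(0.0, 0.1) for i in range(n)]
print(round(rice_variance(ys), 3))  # near the true variance 0.01
```

The bias term the abstract mentions comes from the squared increments of the trend m, which is why a steep or rough regression function can make such estimators behave poorly.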

19.
In the 1960s, W. B. Rosen conducted some remarkable experiments on unidirectional fibrous composites that gave seminal insights into their failure under increasing tensile load. These insights led him to a grid system where the nodes in the grid were ineffective-length fibers and to model the composite as what he called a chain-of-bundles model (i.e., a series system of parallel subsystems of horizontal nodes that he referred to as bundles), where the chain fails when one of the bundles fails. A load-sharing rule was used to quantify how the load is borne among the nodes. Here, Rosen's experiments are analyzed to determine the shape of a bundle. The analysis suggests that the bundles are not horizontal collections of nodes but rather small rectangular grid systems of nodes where the load-sharing between nodes is local in its form. In addition, a Gibbs measure representation for the joint distribution of binary random variables is given. This is used to show how the system reliability for a reliability structure can be obtained from the partition function for the Gibbs measure and to illustrate how to assess the risk of failure of a bundle in the chain-of-bundles model.

20.
It is shown that the limiting distribution of the augmented Dickey–Fuller (ADF) test under the null hypothesis of a unit root is valid under a very general set of assumptions that goes far beyond the linear AR(∞) process assumption typically imposed. In essence, all that is required is that the error process driving the random walk possesses a continuous spectral density that is strictly positive. Furthermore, under the same weak assumptions, the limiting distribution of the ADF test is derived under the alternative of stationarity, and a theoretical explanation is given for the well-known empirical fact that the test's power is a decreasing function of the chosen autoregressive order p. The intuitive reason for the reduced power of the ADF test is that, as p tends to infinity, the p regressors become asymptotically collinear.
