Similar Documents
20 similar documents found.
1.
ABSTRACT

In this paper we consider correlation-type tests based on plotting points which are modifications of the simultaneous closeness probability plotting points recently introduced in the literature. In particular, we consider a maximal correlation test and a minimal correlation test. Furthermore, we provide two methods to carry out each test: one method uses plotting points which are data dependent, while the other uses plotting points which are not. Some numerical properties of the associated correlation statistics are provided for various distributions, as well as a comprehensive power study to assess their performance in comparison to correlation-type tests based on more traditional plotting points. Two illustrative examples are also provided to demonstrate the tests. Finally, we make some observations and provide ideas for future work.

2.
The quantile–quantile plot is widely used to check normality. The plot depends on the plotting positions. Many commonly used plotting positions do not depend on the sample values. We propose an adaptive plotting position that depends on the relative distances of the two neighbouring sample values. The correlation coefficient obtained from the adaptive plotting position is used to test normality. The test using the adaptive plotting position is better than the Shapiro–Wilk W test for small samples, and has larger power than tests based on Hazen's and Blom's plotting positions for symmetric alternatives with tails shorter than the normal and for skewed alternatives when n is 20 or larger. The Brown–Hettmansperger T* test is designed to detect bad tail behaviour, so it has little power against symmetric alternatives with tails shorter than the normal, but it is generally better than the other tests when the kurtosis β2 is greater than 3.25.

3.
Probability paper was used as early as 1896, and was mentioned in the literature more than 30 times before 1950, mainly by hydrologists, most of whom used the plotting position (i-0.5)/n proposed by Hazen (1914). Gumbel (1942a) considered the modal position (i-1)/(n-1) and the mean position i/(n+1) [the latter proposed by Weibull (1939a,b)], and chose the latter. Lebedev (1952) and others proposed the use of (i-0.3)/(n+0.4), which is approximately the median position advocated by Johnson (1951). Blom (1958) suggested (i-α)/(n-2α+1), where α is a constant (usually 0 ≤ α ≤ 1); this includes all of the above plotting positions as special cases. Moreover, by proper choice of α, one can approximate F[E(x_i)], the position proposed by Kimball (1946), for any distribution of interest. Gumbel (1954) stated five postulates which plotting positions should satisfy. Chernoff & Lieberman (1954) discussed the optimum choice of plotting positions in various situations. It is clear that the optimum plotting position depends on the use that is to be made of the results and may also depend on the underlying distribution. The author endeavors to formulate recommendations as to the best choice in various situations.
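To make the formulas above concrete, here is a minimal Python sketch (not from any of the cited papers; the sample size and the Blom constant α = 0.375 are illustrative choices) that computes the classical plotting positions side by side:

```python
# A minimal sketch computing the classical plotting positions discussed above.
import numpy as np

def plotting_positions(n, alpha=0.375):
    """Return the classical plotting positions for ranks i = 1..n."""
    i = np.arange(1, n + 1)
    return {
        "Hazen (i-0.5)/n":               (i - 0.5) / n,
        "Weibull i/(n+1)":               i / (n + 1.0),
        "median approx (i-0.3)/(n+0.4)": (i - 0.3) / (n + 0.4),
        "Blom (i-a)/(n-2a+1), a=0.375":  (i - alpha) / (n - 2 * alpha + 1),
    }

for name, p in plotting_positions(5).items():
    print(f"{name:32s} {np.round(p, 3)}")
```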

4.
ABSTRACT

This work presents advanced computational aspects of a new method for changepoint detection on spatio-temporal point process data. We summarize the methodology, based on building a Bayesian hierarchical model for the data and declaring prior conjectures on the number and positions of the changepoints, and show how to make decisions regarding the acceptance of potential changepoints. The focus of this work is on choosing an approach that detects the correct changepoint and delivers smooth, reliable estimates in a feasible computational time; we propose Bayesian P-splines as a suitable tool for managing spatial variation, from both a computational and a model-fitting performance perspective. The main computational challenges are outlined, and a solution involving parallel computing in R is proposed and tested in a simulation study. An application is also presented on a data set of seismic events in Italy over the last 20 years.

5.
In this article, it is explicitly demonstrated that the probability of non-exceedance of the mth value among n order-ranked events equals m/(n + 1). Consequently, the plotting position in extreme value analysis should be considered not as an estimate, but as equal to m/(n + 1), regardless of the parent distribution and the application. The many other suggested plotting formulas, and the numerical methods used to determine them, should thus be abandoned. The article is intended to mark the end of the century-long controversy over plotting positions.
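The claim is easy to check by simulation, since F(X_(m)) for the mth order statistic has mean m/(n + 1) whatever the parent distribution. A minimal sketch (the exponential parent, sample size, and replication count are assumptions chosen for illustration):

```python
# Monte Carlo check that E[F(X_(m))] = m/(n+1), here with an Exp(1) parent.
import numpy as np

rng = np.random.default_rng(0)
n, m, reps = 10, 3, 200_000
x = np.sort(rng.exponential(size=(reps, n)), axis=1)  # rows of order statistics
u = 1.0 - np.exp(-x[:, m - 1])                        # F(X_(m)) for Exp(1)
print(u.mean(), m / (n + 1))                          # both approx 0.2727
```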

6.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position p_i = (i - 0.375)/(n + 0.25) yields a competitive regression test of fit for normality.
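A minimal sketch of such a correlation-coefficient test of normality (the simulation setup, sample size, and 5% level are my own choices; only the plotting position p_i = (i - 0.375)/(n + 0.25) comes from the abstract):

```python
# Correlation-coefficient test of normality with an empirically simulated null.
import numpy as np
from scipy import stats

def corr_stat(x):
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    q = stats.norm.ppf(p)                       # theoretical normal quantiles
    return np.corrcoef(np.sort(x), q)[0, 1]

rng = np.random.default_rng(1)
n, reps = 20, 10_000
null = np.array([corr_stat(rng.normal(size=n)) for _ in range(reps)])
crit = np.quantile(null, 0.05)                  # reject normality if r < crit
print(round(crit, 4))
```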

7.
Abstract

As a selector, have you ever wondered if the resource you requested was ordered? As an acquisitions staff member, are you struggling with keeping track of your order requests from various channels? As a manager, are you finding it challenging to monitor staff work? CORAL, an open source electronic resource management system, proved to be one solution to these concerns for North Carolina State University (NCSU) Libraries. This article discusses how to manage workflows in CORAL and outlines an NCSU initiative to evolve this tool through collaboration across departments and across the CORAL community.

8.
The aim of our paper is to elaborate a theoretical methodology, based on the Malliavin calculus, to calculate the conditional expectation E(P_t(X_t) | X_s) for s ≤ t, where the only state variable follows a J-process [Jerbi Y. A new closed-form solution as an extension of the Black–Scholes formula allowing smile curve plotting. Quant Finance. 2013; Online First Article. doi:10.1080/14697688.2012.762458]. The theoretical results are applied to American option pricing, extending the work of Bally et al. [Pricing and hedging American options by Monte Carlo methods using a Malliavin calculus approach. Monte Carlo Methods Appl. 2005;11(2):97–133]; the J-process (with additional parameters λ and θ) is itself an extension of the Wiener process. The introduction of these parameters induces skewness and kurtosis effects, i.e. a smile curve, allowing the model to fit the reality of the financial market. In the work cited above, Jerbi showed that the use of the J-process is equivalent to the use of a stochastic volatility model based on the Wiener process, as in Heston's model. The present work consists of extending this result to American options. We study the influence of the parameters λ and θ on the American option price and find empirical results consistent with option theory.

9.
Probability plots allow us to determine whether a set of sample observations is distributed according to a theoretical distribution. Plotting positions are fundamental elements in statistics and, in particular, for the construction of probability plots. In this paper, a new plotting position for constructing different probability plots, such as the Q–Q plot, P–P plot and S–P plot, is proposed. The proposed definition is based on the median of the ith order statistic of the theoretical distribution considered. The main feature of this plotting position formula is that it is independent of the theoretical distribution selected. Moreover, the procedure developed is 'almost' exact: it reaches any desired accuracy without a high cost in time, which avoids resorting to the approximations proposed by other authors.
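Since the ith uniform order statistic follows a Beta(i, n−i+1) distribution, a median-based plotting position can be computed to essentially arbitrary accuracy. A minimal sketch (my own illustration, not the paper's procedure; the comparison line uses the common (i − 0.3175)/(n + 0.365) approximation as a plausible reference point):

```python
# Exact median plotting positions via the Beta distribution of uniform
# order statistics, compared with a common closed-form approximation.
import numpy as np
from scipy import stats

def median_positions(n):
    i = np.arange(1, n + 1)
    return stats.beta.median(i, n - i + 1)   # exact medians of U_(i)

n = 10
print(np.round(median_positions(n), 4))
print(np.round((np.arange(1, n + 1) - 0.3175) / (n + 0.365), 4))
```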

10.
ABSTRACT

To reduce the output variance, variance-based importance analysis provides an efficient route: reduce the variance of the 'important' inputs. But as the variance of those 'important' inputs is reduced, the input importances change, and reducing the variance of those inputs alone is no longer the most efficient strategy; the analyst then needs to consider reducing the variance of other inputs as well. This work provides a graphical solution that lets the analyst decide how to reduce the input variances so as to achieve the targeted reduction of the output variance efficiently. Furthermore, with the importance sampling-based approach, the graphical solution can be obtained from only a single group of samples, which greatly decreases the computational cost.
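For context, a minimal sketch of a standard first-order variance-based importance (Sobol') estimator on a toy model; this is the generic pick-freeze estimator, not the paper's importance-sampling or graphical scheme, and the model and sample sizes are assumptions:

```python
# Pick-freeze (Saltelli-type) estimator of first-order Sobol' indices.
import numpy as np

def model(x):                        # hypothetical test model
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(2)
N, d = 100_000, 3
A, B = rng.normal(size=(N, d)), rng.normal(size=(N, d))
yA, yB = model(A), model(B)
var_y = yA.var()
for j in range(d):
    AB = B.copy()
    AB[:, j] = A[:, j]               # freeze input j at A's values
    S_j = np.mean(yA * (model(AB) - yB)) / var_y
    print(f"S_{j+1} ~ {S_j:.3f}")    # roughly 0.19, 0.76, 0.0 here
```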

11.
This paper deals with the estimation of conditional quantiles in varying coefficient models by estimating the coefficients. Varying coefficient models are among the popular models that have been proposed to alleviate the curse of dimensionality. Previous work on varying coefficient models deals with conditional means, directly or indirectly. However, quantiles can be defined without moment conditions, and plotting several conditional quantiles gives more understanding of the data than plotting the conditional mean alone. In particular, we estimate the conditional median by estimating the varying coefficients via local L1 regression.
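A minimal sketch of local L1 (median) regression for a varying coefficient model y = a(u) + b(u)x + e; the data-generating model, kernel, bandwidth, and optimizer are all assumptions for illustration, not the paper's estimator:

```python
# Local L1 estimation of varying coefficients at selected points u0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 400
u = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = np.sin(2 * np.pi * u) * x + rng.standard_t(df=3, size=n)  # b(u)=sin(2*pi*u)

def local_l1(u0, h=0.1):
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)       # Gaussian kernel weights
    obj = lambda th: np.sum(w * np.abs(y - th[0] - th[1] * x))
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x  # (a_hat, b_hat)

for u0 in (0.25, 0.5, 0.75):
    a_hat, b_hat = local_l1(u0)
    print(f"u0={u0}: b_hat={b_hat:+.2f}, true b={np.sin(2*np.pi*u0):+.2f}")
```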

12.
13.
14.
ABSTRACT

A number of factors can make it difficult to consistently and accurately identify individual researchers and their scholarly activities. One solution is to use unique author identifiers. Many author identifiers are available from inside and outside the library community. This column will, in lay terms, compare Scopus Author ID, ResearcherID, ORCID identifier, and ISNI (International Standard Name Identifier)—some of the more commonly used author identifiers—and explore the advantages and disadvantages of each. It will examine the differences among the four identifiers and their relationships to one another.

15.
ABSTRACT

Efforts to address a reproducibility crisis have generated several valid proposals for improving the quality of scientific research. We argue there is also a need to address the separate but related issues of relevance and responsiveness. To address relevance, researchers must produce what decision makers actually need to inform investments and public policy—that is, the probability that a claim is true or the probability distribution of an effect size given the data. The term responsiveness refers to the irregularity and delay with which issues about the quality of research are brought to light. Instead of relying on the good fortune that some motivated researchers will periodically conduct efforts to reveal potential shortcomings of published research, we could establish a continuous quality-control process for scientific research itself. Quality metrics could be designed through the application of this statistical process control to the research enterprise. We argue that one quality-control metric—the probability that a research hypothesis is true—is required to address at least relevance, and may also be part of the solution for improving responsiveness and reproducibility. This article proposes a "straw man" solution which could be the basis for implementing these improvements. As part of this solution, we propose one way to "bootstrap" priors. The processes required for improving reproducibility and relevance can also be part of a comprehensive statistical quality control for science itself, built on continuously monitored metrics of the scientific performance of a field of research.
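As a simple illustration of such a metric, the post-study probability that a hypothesis is true follows from Bayes' rule given a prior, the statistical power, and the significance level. A minimal sketch with assumed numbers (this is the standard positive-predictive-value calculation, not the article's "bootstrapped" priors):

```python
# P(H true | significant result) via Bayes' rule, with assumed inputs.
def prob_hypothesis_true(prior, power, alpha):
    return prior * power / (prior * power + (1 - prior) * alpha)

print(prob_hypothesis_true(prior=0.1, power=0.8, alpha=0.05))  # ~0.64
```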

16.
ABSTRACT

In this paper, a numerical solution technique for stochastic partial differential equations in reliability engineering is presented. The method is based on finite difference discretization of the governing equations of the Markovian reliability model. In realistic situations, the repair rates and failure rates of an engineering system are variable, and such variable repair and failure rates are difficult to account for in reliability modeling. The novelty of this work is a numerical method that easily takes such variability into consideration and gives an accurate prediction of the reliability measures of engineering systems.
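A minimal sketch of the idea for a two-state (working/failed) Markov model with time-varying rates, solved by a forward Euler finite difference scheme; the rate functions, step size, and horizon are assumptions, not the paper's system:

```python
# Forward Euler solution of the Kolmogorov equations for availability
# with a time-varying failure rate lam(t) and repair rate mu(t).
import numpy as np

lam = lambda t: 0.01 * (1 + 0.5 * np.sin(t / 10))   # hypothetical variable rate
mu = lambda t: 0.1                                   # hypothetical repair rate

dt, T = 0.01, 200.0
p = np.array([1.0, 0.0])            # start in the working state
for k in range(int(T / dt)):
    t = k * dt
    dp0 = -lam(t) * p[0] + mu(t) * p[1]
    p = p + dt * np.array([dp0, -dp0])   # two states, so dp1 = -dp0
print("availability at T:", round(p[0], 4))   # near mu/(mu+lam) at steady state
```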

17.

A goodness-of-fit technique for random samples from the exponential distribution, based on the sample Lorenz curve, is adapted for use in the exponential order statistic (EOS) model. In the EOS model, only those observations in a random sample of unknown size N from the exponential distribution that are less than some known stopping time T are observable. The model is known as the Jelinski–Moranda model in software reliability, where it is used to estimate the number of bugs in software during development. Distributional results are derived for the distance between the sample Lorenz curve and the population Lorenz curve so that it can be used as a goodness-of-fit test statistic. Simulations show that the test has good power against several alternative distributions. Simulations also indicate that in some cases model misspecification leads to poor parameter estimation. A plotting procedure provides a means of graphically assessing fit.
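For the exponential distribution the population Lorenz curve has the closed form L(p) = p + (1 − p)log(1 − p), so a distance between sample and population curves is easy to compute. A minimal sketch (complete samples and my own maximum-distance statistic; the EOS truncation and the paper's distributional results are not reproduced):

```python
# Distance between the sample Lorenz curve and the exponential
# population Lorenz curve L(p) = p + (1-p)log(1-p).
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.exponential(size=200))
n = x.size
p = np.arange(1, n + 1) / n
L_sample = np.cumsum(x) / x.sum()
L_pop = p.copy()
L_pop[:-1] += (1 - p[:-1]) * np.log1p(-p[:-1])   # (1-p)log(1-p) -> 0 at p = 1
print("max Lorenz-curve distance:", round(np.max(np.abs(L_sample - L_pop)), 4))
```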

18.
Abstract

This work deals with the problem of Bayesian estimation of the transition probabilities associated with a multistate Markov chain. The model is based on Jeffreys' noninformative prior. The Bayesian estimator is approximated by means of MCMC techniques. A numerical simulation study is carried out to compare the Bayesian estimator with the maximum likelihood estimator.
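Under Jeffreys' prior, each row of the transition matrix gets a Dirichlet(1/2, ..., 1/2) prior, which is conjugate to the observed transition counts, so the posterior can also be sampled directly. A minimal sketch (direct conjugate sampling rather than the paper's MCMC; the observed chain below is simulated stand-in data):

```python
# Dirichlet posterior for Markov transition probabilities under Jeffreys' prior.
import numpy as np

rng = np.random.default_rng(5)
states = 3
chain = rng.integers(0, states, size=500)        # stand-in observed chain
counts = np.zeros((states, states))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1

for i in range(states):
    draws = rng.dirichlet(counts[i] + 0.5, size=5000)   # posterior row samples
    print(f"row {i}: posterior mean {np.round(draws.mean(axis=0), 3)}")
```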

19.

The linear mixed-effects model (Verbeke and Molenberghs, 2000) has become a standard tool for the analysis of continuous hierarchical data such as, for example, repeated measures or data from meta-analyses. However, in certain situations the model does pose insurmountable computational problems. Precisely this has been the experience of Buyse et al. (2000a), who proposed an estimation- and prediction-based approach for evaluating surrogate endpoints. Their approach requires fitting linear mixed models to data from several clinical trials. In doing so, these authors built on the earlier, single-trial based, work by Prentice (1989), Freedman et al. (1992), and Buyse and Molenberghs (1998). While Buyse et al. (2000a) claim their approach has a number of advantages over the classical single-trial methods, a solution needs to be found for the computational complexity of the corresponding linear mixed model. In this paper, we propose and study a number of possible simplifications. This is done by means of a simulation study and by applying the various strategies to data from three clinical studies: Pharmacological Therapy for Macular Degeneration Study Group (1977), Ovarian Cancer Meta-analysis Project (1991) and Corfu-A Study Group (1995).

20.
ABSTRACT

Calibration, also called inverse regression, is a classical problem which arises often in a regression setup under fixed design. The aim of this article is to propose a stochastic method which gives an estimated solution to a linear calibration problem. We establish exponential inequalities of Bernstein–Fréchet type for the probability of the distance between the approximate solutions and the exact one. Furthermore, we build a confidence domain for the aforementioned exact solution. To check the validity of our results, a numerical example is proposed.
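For orientation, the classical point estimate in linear calibration simply inverts the fitted regression line. A minimal sketch (the design, noise level, and observed response are assumptions; the paper's stochastic method and its Bernstein–Fréchet confidence domain are not reproduced):

```python
# Classical linear calibration: fit y = a + b*x, then invert for a new y0.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 30)                     # fixed design
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
b, a = np.polyfit(x, y, 1)                     # slope, intercept
y0 = 4.5                                       # newly observed response
x0_hat = (y0 - a) / b                          # inverse-regression estimate
print(round(x0_hat, 3))                        # true x0 = 5
```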
