Similar Articles
20 similar articles found (search time: 0 ms)
1.
Summary. A review of methods suggested in the literature for sequential detection of changes in public health surveillance data is presented. Many researchers have noted the need for prospective methods. In recent years there has been increased interest in this type of problem in both the statistical and the epidemiological literature. However, most of the vast literature on public health monitoring deals with retrospective methods, especially spatial methods; evaluations with respect to the statistical properties of interest for prospective surveillance are rare. The special aspects of prospective statistical surveillance and different ways of evaluating such methods are described. Attention is given both to methods that include only the time domain and to methods for detection where observations have a spatial structure. For surveillance of a change in a Poisson process, the likelihood ratio method and the Shiryaev–Roberts method are derived.
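The Shiryaev–Roberts recursion for a Poisson process is simple to sketch. The following is a minimal illustration, not the paper's derivation: it assumes known pre- and post-change intensities lam0 and lam1 and a user-chosen alarm threshold.

```python
import numpy as np

def shiryaev_roberts_poisson(counts, lam0, lam1, threshold):
    """Sequential detection of a shift in Poisson intensity from lam0 to lam1.

    Returns the first index at which the Shiryaev-Roberts statistic
    R_n = (1 + R_{n-1}) * LR_n exceeds the threshold, or None.
    """
    r = 0.0
    for n, x in enumerate(counts):
        # Poisson likelihood ratio f_{lam1}(x) / f_{lam0}(x)
        lr = (lam1 / lam0) ** x * np.exp(lam0 - lam1)
        r = (1.0 + r) * lr
        if r > threshold:
            return n
    return None

rng = np.random.default_rng(0)
# In-control counts at rate 2, then a change to rate 5 at time 50
counts = np.concatenate([rng.poisson(2, 50), rng.poisson(5, 50)])
alarm = shiryaev_roberts_poisson(counts, lam0=2.0, lam1=5.0, threshold=100.0)
```

The recursion restarts the accumulated evidence after each observation, which is what makes the procedure prospective rather than retrospective.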

2.
The extreme value theory is very popular in applied sciences including finance, economics, hydrology and many other disciplines. In univariate extreme value theory, we model the data by a suitable distribution from the general max-domain of attraction characterized by its tail index; there are three broad classes of tails: the Pareto type, the Weibull type and the Gumbel type. The simplest and most common estimator of the tail index is the Hill estimator, which works only for Pareto type tails and has a high bias; it is also highly non-robust in the presence of outliers with respect to the assumed model. There have been some recent attempts to produce asymptotically unbiased or robust alternatives to the Hill estimator; however, each of the robust alternatives works for only one type of tail. This paper proposes a new general estimator of the tail index that is robust and has smaller bias under all three tail types than the existing robust estimators. This essentially produces a robust generalization of the estimator proposed by Matthys and Beirlant (Stat Sin 13:853–880, 2003) under the same model approximation, through a suitable exponential regression framework using the density power divergence. The robustness properties of the estimator are derived in the paper, along with an extensive simulation study. A method for bias correction is also proposed, with application to some real data examples.
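For reference, the classical Hill estimator that the paper improves upon can be sketched in a few lines; the sample below is a strict Pareto draw, for which the Hill estimator is well behaved (the paper's robust density power divergence estimator is not reproduced here).

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma for Pareto-type tails,
    based on the k largest order statistics."""
    xs = np.sort(x)
    n = len(xs)
    # mean of log X_{(n-i+1)} - log X_{(n-k)} over the top k values
    logs = np.log(xs[n - k:]) - np.log(xs[n - k - 1])
    return logs.mean()

rng = np.random.default_rng(1)
# Strict Pareto sample via inverse transform: tail index gamma = 1/alpha = 0.5
alpha = 2.0
u = rng.uniform(size=5000)
sample = u ** (-1.0 / alpha)
gamma_hat = hill_estimator(sample, k=200)
```

Under a strict Pareto model the estimate concentrates near gamma = 0.5; for deviations from the Pareto form, or for contaminated data, the bias and non-robustness the abstract describes become visible.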

3.
4.
5.

In this paper a new process capability index is proposed, which is based on the proportion of conformance of the process and has several appealing features. This index is simple in its assessment and interpretation and is applicable to normally or non-normally distributed processes. Likewise, its value can be assessed for continuous or discrete processes, it can be used under either unilateral or bilateral tolerances and the assessment of confidence limits for its true value is not very involved, under specific distributional assumptions. Point estimators and confidence limits for this index are investigated, assuming two very common continuous distributions (normal and exponential).
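As an illustration of the underlying quantity, the proportion of conformance under a normal process model can be computed directly; this sketch shows only p = P(LSL <= X <= USL), not the paper's index or its confidence limits.

```python
import math

def conformance_proportion_normal(mu, sigma, lsl, usl):
    """Proportion of conformance p = P(LSL <= X <= USL) for a normal process."""
    def phi(z):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((usl - mu) / sigma) - phi((lsl - mu) / sigma)

# A centred process with 3-sigma tolerances on each side
p = conformance_proportion_normal(mu=10.0, sigma=1.0, lsl=7.0, usl=13.0)
```

For a centred 3-sigma process this gives the familiar p of about 0.9973; under other distributions (e.g. exponential, as in the abstract) the same proportion is computed from that distribution's CDF instead.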

6.
Process capability indices (PCIs) are extensively used in the manufacturing industries to confirm whether manufactured products meet their specifications. PCIs can be used to judge process precision, process accuracy, and process performance, so developing sampling plans based on PCIs is natural, and such plans are very useful for maintaining and improving product quality. In view of this, we propose a variables sampling system based on the process capability index Cpmk, which takes into account both process yield and process loss, for the case where the quality characteristic under study has double specification limits. The proposed sampling system is effective in compliance testing. The advantages of this system over existing sampling plans are also discussed. To determine the optimal parameters, tables are constructed by formulating the problem as a nonlinear program in which the average sample number is minimized subject to the producer's and consumer's risks.
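The index Cpmk itself has the standard closed form min(USL - mu, mu - LSL) / (3 * sqrt(sigma^2 + (mu - T)^2)). A minimal sketch of that formula (not the sampling system, whose tables require nonlinear optimization):

```python
import math

def cpmk(mu, sigma, lsl, usl, target):
    """Third-generation capability index C_pmk: penalises both process
    spread and deviation of the mean from the target T."""
    tau = math.sqrt(sigma ** 2 + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / (3.0 * tau)

# On-target process: C_pmk reduces to the familiar C_pk
value = cpmk(mu=10.0, sigma=1.0, lsl=7.0, usl=13.0, target=10.0)
```

Shifting the mean away from the target lowers the index through both the numerator and the loss term in the denominator, which is exactly why Cpmk reflects process loss as well as yield.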

7.
Summary In this paper likelihood is characterized as an index which measures how well a model fits a sample. Some properties required of an index of fit are introduced and discussed, stressing how they describe aspects inherent to the idea of fit. Finally we prove that, if an index of fit is maximal when the model reaches the distribution of the sample, then such an index is an increasing continuous transform of ∏_i p_i^{q_i}, where the p_i's are the theoretical relative frequencies provided by the model and the q_i's are the actual relative frequencies of the sample.
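The characterization can be checked numerically: taking the fit index as log ∏_i p_i^{q_i} = Σ_i q_i log p_i, Gibbs' inequality guarantees the maximum at p = q. A small sketch, assuming three categories for illustration:

```python
import math
import random

def fit_index(p, q):
    """log of prod_i p_i^{q_i}, i.e. sum_i q_i * log(p_i)."""
    return sum(qi * math.log(pi) for pi, qi in zip(p, q))

random.seed(0)
q = [0.2, 0.5, 0.3]          # observed relative frequencies
best = fit_index(q, q)       # model equal to the sample distribution
# Random alternative models never exceed the fit at p = q (Gibbs' inequality)
for _ in range(100):
    w = [random.random() + 1e-9 for _ in q]
    p = [wi / sum(w) for wi in w]
    assert fit_index(p, q) <= best + 1e-12
```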

8.
9.
A review is provided of the concept of confidence distributions. Material covered includes fundamentals, extensions, applications of confidence distributions and available computer software. We expect this review to serve as a source of reference and to encourage further research on confidence distributions.

10.
Air pollution is one of the most important global environmental issues. The Taiwan Environmental Protection Agency (EPA) currently uses an Air Quality Index (AQI) to measure and monitor national air quality. The main objective of this study is to assess hourly air quality each month in Taichung City, Taiwan, from 2014 to 2016, based on the nonconformance probability of the AQI. The nonconformance probability is defined as the probability that a characteristic of interest falls outside an acceptance region. A lower confidence bound for the nonconformance probability is applied to test whether the AQI value exceeds a warning threshold, so that the government can issue warnings according to this statistical inference. An unbalanced two-way random effects model is presented for fitting the AQI values. We evaluate three lower confidence bound construction methods, namely a t-based, an adjusted t-based and a generalized pivotal quantity (GPQ) based method, through a detailed simulation study. Finally, a hybrid of the t-based and adjusted t-based estimators is recommended for practical use. KEYWORDS: conformance proportion, generalized confidence interval, generalized pivotal quantity, non-inferiority test, Student's t-test

11.
Capability indices that quantify process potential and process performance are practical tools for successful quality improvement activities and quality program implementation. Most existing methods for assessing process capability were derived from the traditional frequentist point of view. This paper considers the problem of estimating and testing process capability based on the third-generation capability index Cpmk from the Bayesian point of view. We first derive the posterior probability p that the process under investigation is capable. The one-sided credible interval, a Bayesian analog of the classical lower confidence interval, can then be obtained to assess process performance. To investigate the effectiveness of the derived results, a series of simulations was undertaken. The results indicate that the performance of the proposed Bayesian approach depends strongly on the value of ξ = (μ − T)/σ. It performs very well, with accurate coverage rates, when μ is sufficiently far from T; in those cases performance remains acceptable even when the sample size n is as small as 25.

12.
13.
Forecast methods for realized volatility are reviewed. Basic theoretical and empirical features of realized volatility, as well as versions of realized volatility estimators, are briefly investigated. Major forecast models capturing the empirical features of persistence and asymmetry are discussed, among which the heterogeneous autoregressive (HAR) model is one of the most basic in the recent literature. Forecast methods addressing the issues of jumps, breaks, implied volatility and market microstructure noise are reviewed. Forecasting the realized covariance matrix is also considered.
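A minimal sketch of the HAR regression on a synthetic persistent series: the daily/weekly/monthly horizons of 1, 5 and 22 days are the conventional choices, but the data here are simulated, not actual realized volatilities.

```python
import numpy as np

def har_design(rv):
    """Daily, weekly (5-day) and monthly (22-day) averages of realized
    volatility, aligned so that row t predicts rv[t + 1]."""
    n = len(rv)
    rows, y = [], []
    for t in range(21, n - 1):
        d = rv[t]                      # daily component
        w = rv[t - 4:t + 1].mean()     # weekly average
        m = rv[t - 21:t + 1].mean()    # monthly average
        rows.append([1.0, d, w, m])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(2)
# Persistent synthetic "realized volatility" series (AR(1) in logs)
log_rv = np.zeros(600)
for t in range(1, 600):
    log_rv[t] = 0.95 * log_rv[t - 1] + 0.1 * rng.standard_normal()
rv = np.exp(log_rv)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of the HAR regression
```

The three overlapping averages are what let this simple OLS regression mimic long-memory persistence, which is the empirical feature the abstract highlights.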

14.
Summary One of the fundamental problems of mathematical statistics is the estimation of the sampling characteristics of a random variable, a problem that is increasingly solved using bootstrap methods. Often these involve Monte Carlo simulation, which may be costly and time-consuming in certain problems. Various methods for reducing the simulation cost of bootstrap computations have been proposed, most of them applicable to simple random samples. Here we review the literature on efficient resampling methods, make comparisons, and try to assess the best method for a particular problem.
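One classical efficiency device in this literature is the balanced bootstrap, in which every observation appears equally often across the B resamples; for a linear statistic such as the mean, the Monte Carlo bias estimate is then exactly zero. A minimal sketch:

```python
import numpy as np

def balanced_bootstrap(data, b, stat, rng):
    """Balanced bootstrap: each observation appears exactly b times in
    total across the b resamples, reducing Monte Carlo variation in
    bias estimation compared with ordinary resampling."""
    n = len(data)
    # permute n*b repeated indices, then cut into b resamples of size n
    idx = rng.permutation(np.repeat(np.arange(n), b)).reshape(b, n)
    return np.array([stat(data[row]) for row in idx])

rng = np.random.default_rng(3)
data = rng.normal(size=50)
reps = balanced_bootstrap(data, b=200, stat=np.mean, rng=rng)
# With a linear statistic the average of the balanced replicates
# equals the sample statistic exactly, so the bias estimate is zero
bias_est = reps.mean() - data.mean()
```

The saving comes for free: the permute-and-reshape construction costs no more than ordinary resampling yet removes one source of simulation noise entirely.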

15.
The field of nonparametric function estimation has broadened its appeal in recent years with an array of new tools for statistical analysis. In particular, theoretical and applied research on wavelets has had a noticeable influence on statistical topics such as nonparametric regression, nonparametric density estimation, nonparametric discrimination and many other related topics. This survey article attempts to synthesize a broad variety of work on wavelets in statistics and includes some recent developments in nonparametric curve estimation that have been omitted from review articles and books on the subject. After a short introduction to wavelet theory, wavelets are treated in the familiar context of estimation of "smooth" functions. Both "linear" and "nonlinear" wavelet estimation methods are discussed, and cross-validation methods for choosing the smoothing parameters are addressed. Finally, some areas of related research are mentioned, such as hypothesis testing, model selection, hazard rate estimation for censored data, and nonparametric change-point problems. The closing section formulates some promising research directions relating to wavelets in statistics.
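A one-level Haar shrinkage estimator illustrates the "nonlinear" wavelet methods discussed: transform, soft-threshold the detail coefficients, invert. This sketch uses a fixed threshold rather than a cross-validated one.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, invert. Length of x must be even."""
    s2 = np.sqrt(2.0)
    a = (x[0::2] + x[1::2]) / s2          # approximation coefficients
    d = (x[0::2] - x[1::2]) / s2          # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / s2
    y[1::2] = (a - d) / s2
    return y

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * t)
noisy = signal + 0.3 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thresh=0.5)
```

For a smooth signal the detail coefficients are mostly noise, so thresholding them reduces mean squared error; a full method would recurse over several resolution levels and choose the threshold from the data.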

16.
The notion of a sufficient statistic—a statistic that summarizes in itself all the relevant information in the sample x about the universal parameter ω—is acclaimed as one of the most significant discoveries of Sir Ronald A. Fisher. It is however not well-recognized that the related notion of a partially sufficient statistic—a statistic that isolates and exhausts all the relevant and usable information in the sample about a sub-parameter θ=θ(ω)—can be very elusive if the question is posed in sample space terms. In this review article, the author tries to unravel the mystery that surrounds the notion of partial sufficiency. For mathematical details on some of the issues raised here one may refer to Basu (1977).

17.
The financial stress index (FSI) is an important risk management tool for quantifying financial vulnerabilities. This paper proposes a new framework based on a hybrid classifier model that integrates rough set theory (RST), the FSI, support vector regression (SVR) and a control chart to identify stressed periods. First, the RST method is applied to select variables; the outputs are used as input data for the FSI–SVR computation. Empirical analysis is conducted on the monthly FSI of the Federal Reserve Bank of Saint Louis from January 1992 to June 2011. A comparison is performed between the FSI based on principal component analysis and the FSI–SVR. A control chart based on the FSI–SVR and extreme value theory is proposed to identify extremely stressed periods. Our approach identified distinct stressed periods, including the internet bubble, the subprime crisis and actual financial stress episodes, along with the calmest periods, agreeing with those given in Federal Reserve System reports.

18.
To seek the nonlinear structure hidden in high-dimensional data points, a transformation related to the projection pursuit method and a projection index were proposed by Li (1989, 1990). In this paper, we present a consistent estimator of the supremum of the projection index based on the sliced inverse regression technique. This estimator also suggests a method for obtaining, approximately, the most interesting projection in the general case.
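A minimal sketch of the sliced inverse regression step (not the paper's supremum estimator): standardize the predictors, slice on the response, and eigen-decompose the weighted covariance of the slice means. The single-index model below is an illustrative assumption.

```python
import numpy as np

def sir_directions(x, y, n_slices=10):
    """Sliced inverse regression: standardize x, slice on y, take slice
    means, and eigen-decompose their weighted covariance. The leading
    eigenvectors (mapped back to x-scale) estimate the e.d.r. directions."""
    n, p = x.shape
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    root_inv = np.linalg.inv(np.linalg.cholesky(cov)).T  # whitening matrix
    z = (x - mu) @ root_inv
    order = np.argsort(y)
    m = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        zbar = z[chunk].mean(axis=0)
        m += (len(chunk) / n) * np.outer(zbar, zbar)
    vals, vecs = np.linalg.eigh(m)
    # back-transform to x-scale and order by decreasing eigenvalue
    dirs = (root_inv @ vecs)[:, ::-1]
    return vals[::-1], dirs

rng = np.random.default_rng(5)
x = rng.standard_normal((2000, 5))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (x @ beta) ** 3 + 0.1 * rng.standard_normal(2000)
vals, dirs = sir_directions(x, y)
d1 = dirs[:, 0] / np.linalg.norm(dirs[:, 0])
```

With a monotone single-index response, the leading estimated direction d1 aligns closely with the true projection beta, which is the kind of "most interesting projection" the abstract refers to.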

19.
SUMMARY A two-sample version of the non-parametric index of tracking for longitudinal data introduced by Foulkes and Davis is described. The index is based on a multivariate U-statistic, and provides a measure of the stochastic ordering of the underlying growth curves of the samples. The utility of the U-statistic approach is explored with two applications related to growth curves and repeated measures analyses.
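For intuition, a univariate two-sample U-statistic with the Mann–Whitney kernel measures the same kind of stochastic ordering; the Foulkes–Davis index is its multivariate, longitudinal extension, which is not reproduced here.

```python
import numpy as np

def tracking_index(x, y):
    """Two-sample U-statistic U = (1/mn) sum_i sum_j 1{x_i < y_j}: the
    probability that a random observation from the second sample exceeds
    one from the first. U = 0.5 indicates no stochastic ordering."""
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return np.mean(x < y)

rng = np.random.default_rng(6)
a = rng.normal(0.0, 1.0, 300)
b = rng.normal(1.0, 1.0, 300)  # stochastically larger sample
u = tracking_index(a, b)
```

Values of u well above 0.5 indicate that the second sample's distribution dominates; in the longitudinal setting the kernel instead compares whole growth-curve vectors.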

20.
Claims that the parameters of an econometric model are invariant under changes in either policy rules or expectations processes entail super exogeneity and encompassing implications. Super exogeneity is always potentially refutable, and when both implications are involved, the Lucas critique is also refutable. We review the methodological background; the applicability of the Lucas critique; super exogeneity tests; the encompassing implications of feedback and feedforward models; and the role of incomplete information. The approach is applied to money demand in the USA to examine constancy, exogeneity, and encompassing, and reveals the Lucas critique to be inapplicable to the model under analysis.
