Similar Literature
20 similar documents retrieved.
1.
We consider delays that occur in the reporting of events such as cases of a reportable disease or insurance claims. Estimation of the number of events that have occurred but not yet been reported (OBNR events) is then important. Current methods of doing this do not allow for random temporal fluctuations in reporting delays, and consequently, confidence or prediction limits on OBNR events tend to be too narrow. We develop an approach that uses recent reporting data and incorporates random effects, thus leading to more reasonable and robust predictions.
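The basic idea behind OBNR prediction — estimate the reporting-delay distribution from periods that are already fully reported and use it to scale up recent, incomplete counts — can be sketched as below. This is a deliberately simplified, deterministic illustration (no random effects, unlike the paper's model), with made-up data and hypothetical variable names.

```python
# A minimal sketch (not the authors' random-effects model): estimate a reporting-delay
# distribution from fully observed periods and use it to predict events that have
# Occurred But are Not yet Reported (OBNR) in recent periods. All numbers are illustrative.
import numpy as np

# counts[t, d] = events occurring in period t reported with delay d (NaN = not yet observable)
counts = np.array([
    [50, 30, 15, 5],
    [55, 28, 12, 6],
    [60, 33, 14, np.nan],
    [58, 31, np.nan, np.nan],
    [62, np.nan, np.nan, np.nan],
], dtype=float)

max_delay = counts.shape[1] - 1
complete = counts[: counts.shape[0] - max_delay]       # rows with every delay observed
delay_dist = complete.sum(axis=0) / complete.sum()     # estimated delay distribution

for t in range(counts.shape[0]):
    observed = counts[t, ~np.isnan(counts[t])]
    frac_reported = delay_dist[: observed.size].sum()  # share of events reportable so far
    total_hat = observed.sum() / frac_reported         # inflate to a predicted total
    print(f"period {t}: reported={observed.sum():.0f}, "
          f"predicted OBNR={total_hat - observed.sum():.1f}")
```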

2.
A measurement error model is a regression model with (substantial) measurement errors in the variables. Disregarding these measurement errors in estimating the regression parameters results in asymptotically biased estimators. Several methods have been proposed to eliminate, or at least to reduce, this bias, and the relative efficiency and robustness of these methods have been compared. The paper gives an account of these endeavors. In another context, when data are of a categorical nature, classification errors play a role similar to that of measurement errors in continuous data. The paper also reviews some recent advances in this field. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) within the frame of the Sonderforschungsbereich SFB 386. We thank two anonymous referees for their helpful comments.
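The asymptotic bias from ignoring measurement error is easy to see numerically: in simple linear regression the naive slope is attenuated by the reliability ratio. The following small simulation (ours, not from the paper) illustrates this, assuming the error variance is known for the correction.

```python
# A minimal illustration of attenuation bias: the naive OLS slope shrinks toward zero
# by the reliability ratio var(x) / (var(x) + var(measurement error)).
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(0, 1, n)                  # true covariate
w = x + rng.normal(0, 1, n)              # observed covariate with measurement error (variance 1)
y = beta * x + rng.normal(0, 1, n)

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
print("naive OLS slope:", naive_slope)                              # ~ beta * 1/(1+1) = 1.0
# correction assumes the measurement-error variance (here 1.0) is known
print("attenuation-corrected:", naive_slope * np.var(w) / (np.var(w) - 1.0))
```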

3.
This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to vector autoregressions, as well as the non-linear estimation based on a second-order accurate model solution. These methods are applied to data generated from correctly specified and misspecified linearized DSGE models and a DSGE model that was solved with a second-order perturbation method.

4.
Finite mixtures of multivariate skew t (MST) distributions have proven to be useful in modelling heterogeneous data with asymmetric and heavy tail behaviour. Recently, they have been exploited as an effective tool for modelling flow cytometric data. A number of algorithms for the computation of the maximum likelihood (ML) estimates for the model parameters of mixtures of MST distributions have been put forward in recent years. These implementations use various characterizations of the MST distribution, which are similar but not identical. While exact implementation of the expectation-maximization (EM) algorithm can be achieved for ‘restricted’ characterizations of the component skew t-distributions, Monte Carlo (MC) methods have been used to fit the ‘unrestricted’ models. In this paper, we review several recent fitting algorithms for finite mixtures of multivariate skew t-distributions, at the same time clarifying some of the connections between the various existing proposals. In particular, recent results have shown that the EM algorithm can be implemented exactly for faster computation of ML estimates for mixtures with unrestricted MST components. The gain in computational time is effected by noting that the semi-infinite integrals on the E-step of the EM algorithm can be put in the form of moments of the truncated multivariate non-central t-distribution, similar to the restricted case, which subsequently can be expressed in terms of the non-truncated form of the central t-distribution function for which fast algorithms are available. We present comparisons to illustrate the relative performance of the restricted and unrestricted models, and demonstrate the usefulness of the recently proposed methodology for the unrestricted MST mixture through applications to three real datasets.

5.
The use of covariates in block designs is necessary when the covariates cannot be controlled like the blocking factor in the experiment. In this paper, we consider the situation where there is some flexibility for selection in the values of the covariates. The choice of values of the covariates for a given block design attaining minimum variance for estimation of each of the parameters has attracted attention in recent times. Optimum covariate designs in simple set-ups such as completely randomised design (CRD), randomised block design (RBD) and some series of balanced incomplete block design (BIBD) have already been considered. In this paper, optimum covariate designs have been considered for the more complex set-ups of different partially balanced incomplete block (PBIB) designs, which are popular among practitioners. The optimum covariate designs depend much on the methods of construction of the basic PBIB designs. Different combinatorial arrangements and tools such as orthogonal arrays, Hadamard matrices and different kinds of products of matrices viz. Khatri–Rao product, Kronecker product have been conveniently used to construct optimum covariate designs with as many covariates as possible.
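The combinatorial building blocks mentioned here are readily available in standard software. The snippet below (purely illustrative; it does not construct an actual optimum covariate design) shows a Hadamard matrix and the Kronecker and Khatri–Rao products of two small candidate matrices.

```python
# Hadamard matrices and Kronecker / Khatri-Rao products, as used as construction tools.
import numpy as np
from scipy.linalg import hadamard, khatri_rao

H4 = hadamard(4)                 # 4x4 Hadamard matrix with +/-1 entries; H4 @ H4.T = 4*I
A = np.array([[1, -1], [1, 1]])
B = np.array([[1, 1], [-1, 1]])

kron = np.kron(A, B)             # Kronecker product, shape (4, 4)
kr = khatri_rao(A, B)            # column-wise Kronecker (Khatri-Rao) product, shape (4, 2)

print(H4 @ H4.T)                 # orthogonality check: 4 * identity
print(kron.shape, kr.shape)
```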

6.
In recent years there have been notable advances in the methodology for analyzing seasonal time series. This paper summarizes some recent research on seasonal adjustment problems and procedures. Included are signal-extraction methods based on autoregressive integrated moving average (ARIMA) models, improvements in X–11, revisions in preliminary seasonal factors, regression and other model-based methods, robust methods, seasonal model identification, aggregation, interrelating seasonally adjusted series, and causal approaches to seasonal adjustment.

7.
Ever since Professor Bancroft developed inference procedures using preliminary tests, there has been a great deal of research in this area by authors across the world, as evidenced by two papers that widely reviewed publications on preliminary test-based statistical methods. The use of preliminary tests to resolve doubts about model parameters has gained momentum, as it has proven to be effective and more powerful than classical methods. Unfortunately, there has been a downward trend in research related to preliminary tests, as can be seen from the few recent publications. The benefits of preliminary test-based statistical methods evidently did not reach Six Sigma practitioners, since the Six Sigma concept had only just taken off and was still in a premature state. In this paper, efforts have been made to present a review of the publications on preliminary test-based statistical methods. Though studies on preliminary test-based methods have been done in various areas of statistics, such as the theory of estimation, hypothesis testing, analysis of variance, regression analysis, and reliability, to mention a few, only a few important methods are presented here for the benefit of readers, particularly Six Sigma quality practitioners, to understand the concept. In this regard, the define, measure, analyze, improve and control (DMAIC) methodology of Six Sigma is presented, with the analyze phase linked to preliminary test-based statistical methods. Examples are also given to illustrate the procedures.
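To make the idea concrete, here is a minimal sketch of a classic preliminary-test estimator (a textbook construction, not the Six Sigma workflow of the paper): estimate a normal mean, but first test whether a prior guessed value is tenable and, if so, use it instead of the sample mean. The function name and data are ours.

```python
# Preliminary-test estimator of a mean: pretest H0: mu = mu0, then pick an estimator.
import numpy as np
from scipy import stats

def pretest_mean(x, mu0, alpha=0.05):
    """Return mu0 if a one-sample t-test does not reject H0: mu = mu0, else the sample mean."""
    p_value = stats.ttest_1samp(x, mu0).pvalue
    return mu0 if p_value > alpha else np.mean(x)

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.2, scale=2.0, size=30)
print(pretest_mean(sample, mu0=10.0))   # uses mu0 when the data do not contradict it
```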

8.
A review of methods suggested in the literature for sequential detection of changes in public health surveillance data is presented. Many researchers have noted the need for prospective methods. In recent years there has been an increased interest in both the statistical and the epidemiological literature concerning this type of problem. However, most of the vast literature in public health monitoring deals with retrospective methods, especially spatial methods. Evaluations with respect to the statistical properties of interest for prospective surveillance are rare. The special aspects of prospective statistical surveillance and different ways of evaluating such methods are described. Attention is given to methods that include only the time domain as well as methods for detection where observations have a spatial structure. In the case of surveillance of a change in a Poisson process the likelihood ratio method and the Shiryaev–Roberts method are derived.
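The Shiryaev–Roberts statistic mentioned above has a simple recursive form. The sketch below applies it to a sequence of Poisson counts; it is a simplified illustration in which both the pre- and post-change rates are assumed known, which is a stronger assumption than the surveillance settings reviewed in the paper.

```python
# Shiryaev-Roberts change detection for Poisson counts with known rates lambda0, lambda1.
import numpy as np

def shiryaev_roberts(counts, lam0, lam1, threshold):
    """Return the first index at which the Shiryaev-Roberts statistic exceeds threshold."""
    r = 0.0
    for n, x in enumerate(counts):
        lr = np.exp(lam0 - lam1) * (lam1 / lam0) ** x   # Poisson likelihood ratio for one count
        r = (1.0 + r) * lr                              # SR recursion
        if r > threshold:
            return n
    return None

rng = np.random.default_rng(2)
data = np.concatenate([rng.poisson(2.0, 50), rng.poisson(4.0, 50)])  # rate doubles at t = 50
print("alarm at index:", shiryaev_roberts(data, lam0=2.0, lam1=4.0, threshold=100.0))
```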

9.
Penalized methods for variable selection, such as the Smoothly Clipped Absolute Deviation (SCAD) penalty, have been increasingly applied to aid variable selection in regression analysis. Much of the literature has focused on parametric models, while a few recent studies have shifted the focus and developed applications for the popular semi-parametric, or distribution-free, generalized estimating equations (GEEs) and weighted GEE (WGEE). However, although the WGEE is composed of one main module and one missing-data module, available methods only focus on the main module, with no variable selection for the missing-data module. In this paper, we develop a new approach to further extend the existing methods to enable variable selection for both modules. The approach is illustrated by both real and simulated study data.
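For reference, the SCAD penalty itself has a simple closed form. The sketch below codes just the penalty function (not the GEE/WGEE selection procedure of the paper), using the conventional choice a = 3.7 recommended by Fan and Li.

```python
# The SCAD penalty p_lambda(|beta|), vectorized over a coefficient array.
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly Clipped Absolute Deviation penalty evaluated at |beta|."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    (a + 1) * lam**2 / 2))

print(scad_penalty(np.array([0.1, 0.5, 2.0, 10.0]), lam=0.5))
```

Unlike the lasso, the penalty is constant beyond a·λ, so large coefficients are not shrunk, which is the source of SCAD's oracle-type behaviour.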

10.
Missing data, and the bias they can cause, are an almost ever‐present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood‐based, mixed‐effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood‐based, mixed‐effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis.
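For readers unfamiliar with LOCF, the mechanical operation is just a forward fill of each subject's last observed value across visits, as in this tiny made-up example.

```python
# What "last observation carried forward" does to visit-by-visit outcomes (toy data).
import numpy as np
import pandas as pd

visits = pd.DataFrame(
    {"week0": [5.0, 6.0], "week4": [4.0, np.nan], "week8": [np.nan, np.nan]},
    index=["subject_1", "subject_2"],
)
locf = visits.ffill(axis=1)   # carry each subject's last observed value forward
print(locf)
```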

11.
There have been many backtesting methods proposed for value at risk (VaR), yet they have rarely been applied in practice. Here, we provide a comprehensive review of the recent backtesting methods for VaR. This review could encourage applications and also the development of further backtesting methods.
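As an example of the kind of method such a review covers, the sketch below implements Kupiec's unconditional-coverage (proportion-of-failures) likelihood-ratio test, with p the nominal exceedance probability and x the observed number of VaR exceedances; it is a standard test, included here only for illustration.

```python
# Kupiec proportion-of-failures backtest: LR test of H0: exceedance probability = p.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs, n_exceed, p):
    """Return the LR statistic and p-value for the unconditional coverage test."""
    x, n = n_exceed, n_obs
    pi_hat = x / n
    log_l0 = (n - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)

print(kupiec_pof(n_obs=250, n_exceed=8, p=0.01))   # 8 exceedances in a year of daily 99% VaR
```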

12.
Fan J, Lv J. Statistica Sinica 2010, 20(1): 101–148.
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions about the limits of dimensionality such methods can handle, the role of penalty functions, and the resulting statistical properties rapidly drive advances in the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods.
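Independence screening, mentioned at the end of the abstract, is easy to sketch: rank predictors by their marginal correlation with the response and keep only the top few before applying a penalized method to the reduced set. The example below is ours and purely illustrative.

```python
# Sure-independence-screening-style marginal correlation ranking for p >> n data.
import numpy as np

def sis(X, y, d):
    """Return indices of the d columns of X with largest |marginal correlation| with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(-np.abs(corr))[:d]

rng = np.random.default_rng(3)
n, p = 200, 2000                                # far more predictors than observations
X = rng.normal(size=(n, p))
y = 3 * X[:, 5] - 2 * X[:, 17] + rng.normal(size=n)
print(sis(X, y, d=10))                          # columns 5 and 17 should appear near the top
```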

13.
Nearest Neighbor Adjusted Best Linear Unbiased Prediction
Statistical inference for linear models has classically focused on either estimation or hypothesis testing of linear combinations of fixed effects or of variance components for random effects. A third form of inference—prediction of linear combinations of fixed and random effects—has important advantages over conventional estimators in many applications. None of these approaches will result in accurate inference if the data contain strong, unaccounted for local gradients, such as spatial trends in field-plot data. Nearest neighbor methods to adjust for such trends have been widely discussed in recent literature. So far, however, these methods have been developed exclusively for classical estimation and hypothesis testing. In this article a method of obtaining nearest neighbor adjusted (NNA) predictors, along the lines of “best linear unbiased prediction,” or BLUP, is developed. A simulation study comparing “NNABLUP” to conventional NNA methods and to non-NNA alternatives suggests considerable potential for improved efficiency.
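Standard BLUP — the starting point for the nearest-neighbor-adjusted version developed in the article — can be obtained from Henderson's mixed-model equations. The sketch below is a plain BLUP illustration (not the NNA method), with the variance ratio sigma_e^2 / sigma_u^2 treated as known and all data simulated.

```python
# Henderson's mixed-model equations: joint solution for fixed effects and BLUPs.
import numpy as np

def blup(X, Z, y, ratio):
    """Solve for fixed effects beta and random-effect predictions u, given ratio = s2_e/s2_u."""
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + ratio * np.eye(Z.shape[1])]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[: X.shape[1]], sol[X.shape[1]:]           # (beta_hat, u_blup)

rng = np.random.default_rng(4)
n, q = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one fixed covariate
Z = np.eye(q)[rng.integers(0, q, n)]                     # indicator design for q random effects
y = X @ np.array([1.0, 2.0]) + Z @ rng.normal(0, 1, q) + rng.normal(0, 0.5, n)
beta_hat, u_hat = blup(X, Z, y, ratio=0.25)              # ratio = 0.5**2 / 1.0**2
print(beta_hat, u_hat)
```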

14.
An aspect of cluster analysis which has been widely studied in recent years is the weighting and selection of variables. Procedures have been proposed which are able to identify the cluster structure present in a data matrix when that structure is confined to a subset of variables. Other methods assess the relative importance of each variable as revealed by a suitably chosen weight. But when a cluster structure is present in more than one subset of variables and is different from one subset to another, those solutions as well as standard clustering algorithms can lead to misleading results. Some very recent methodologies for finding consensus classifications of the same set of units can be useful also for the identification of cluster structures in a data matrix, but each one seems to be only partly satisfactory for the purpose at hand. Therefore a new more specific procedure is proposed and illustrated by analyzing two real data sets; its performance is evaluated by means of a simulation experiment.

15.
A number of recent studies have looked at the coverage probabilities of various common parametric methods of interval estimation of the median effective dose (ED50) for a logistic dose-response curve. There has been comparatively little work done on more extreme effective doses. In this paper, the interval estimation of the 90% effective dose (ED90) will be of principal interest. We provide a comparison of four parametric methods of interval construction with four methods based on bootstrap resampling.
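One of the general bootstrap strategies compared in such studies — refit the logistic curve on resampled subjects and take percentile limits for the dose at which the fitted curve reaches 90% response — can be sketched as follows. The data are simulated and the details (percentile interval, 500 replicates) are illustrative choices, not the paper's specific methods.

```python
# Percentile-bootstrap interval for the ED90 of a logistic dose-response curve.
import numpy as np
import statsmodels.api as sm

def ed_p(params, p=0.9):
    """Dose at which the fitted logistic curve reaches response probability p."""
    intercept, slope = params
    return (np.log(p / (1 - p)) - intercept) / slope

rng = np.random.default_rng(5)
dose = np.repeat(np.linspace(0.0, 4.0, 9), 20)
prob = 1 / (1 + np.exp(-(-3.0 + 2.0 * dose)))            # true ED90 is about 2.6
resp = rng.binomial(1, prob)

fit = sm.Logit(resp, sm.add_constant(dose)).fit(disp=0)
boot = []
for _ in range(500):                                     # nonparametric bootstrap over subjects
    idx = rng.integers(0, dose.size, dose.size)
    b = sm.Logit(resp[idx], sm.add_constant(dose[idx])).fit(disp=0)
    boot.append(ed_p(b.params))
print("ED90 estimate:", ed_p(fit.params))
print("95% bootstrap interval:", np.percentile(boot, [2.5, 97.5]))
```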

16.
It is now commonly recognized that firm production data sets are affected by some level of random perturbation and that, consequently, production frontiers have a stochastic nature. Mathematical programming methods, traditionally employed for frontier evaluation, are therefore regarded as liable to mistake noise for technical (in)efficiency. Recent literature is accordingly oriented towards a statistical view: frontiers are constructed by enveloping data that have been preliminarily filtered of noise. In this paper a nonparametric smoother for filtering panel production data is presented. We pursue a recent approach of Kneip and Simar (1996) and frame it in a more general formulation, a particular setting of which constitutes our specific proposal. The major feature of the method is that noise reduction and outlier detection are handled separately: (i) a high-order local polynomial fit is used as the smoother; and (ii) data are weighted by robustness scores. An extensive numerical study on some common production models yields encouraging results in a comparison with Kneip and Simar's filter.
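The general ingredients — a local polynomial smoother combined with iterative robustness weighting of the data — are familiar from LOWESS, which the one-dimensional sketch below uses as an analogue; this is not the authors' panel-data filter, and the data are made up.

```python
# Local polynomial smoothing with robustness iterations (LOWESS) on noisy data with outliers.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 200)
y = np.log1p(x) + rng.normal(0, 0.05, x.size)   # noisy concave "frontier-like" shape
y[::25] += 1.0                                  # a few gross outliers

smooth = lowess(y, x, frac=0.3, it=3)           # it=3 robustifying (downweighting) iterations
print(smooth[:5])                               # columns: sorted x, smoothed y
```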

17.
The use of covariates in block designs is necessary when the experimental errors cannot be controlled using only the qualitative factors. The choice of values of the covariates for a given set-up attaining minimum variance for estimation of the regression parameters has attracted attention in recent times. In this paper, optimum covariate designs (OCDs) have been considered for the set-up of balanced treatment incomplete block (BTIB) designs, which form an important class of test-control designs. It is seen that the OCDs depend much on the methods of construction of the basic BTIB designs. The series of BTIB designs considered in this paper are mainly those described by Bechhofer and Tamhane (1981) and Das et al. (2005). Different combinatorial arrangements and tools such as Hadamard matrices and different kinds of products of matrices, viz. the Khatri–Rao product and the Kronecker product, have been conveniently used to construct OCDs with as many covariates as possible.

18.
Surveys on stigmatized characteristics lead to non-response problems if conducted with classical (direct) methods developed for non-sensitive issues; appropriate survey methodology is therefore needed to obtain reliable responses on incriminating issues. The randomized response model is one of the methods attracting the attention of survey practitioners for dealing with non-response, because it protects the privacy of individuals and thereby elicits truthful responses. The present work proposes a new two-stage randomized response model to get rid of misleading responses or non-response due to the stigmatized nature of the attribute under study. The proposed randomized response model yields an unbiased estimator of the population proportion possessing the sensitive attribute. The properties of the resultant estimator have been studied and empirical comparisons are performed to show its dominance over existing estimators. Suitable recommendations have been put forward for survey practitioners.
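To illustrate the mechanism (not the paper's two-stage model), here is the classic Warner randomized response design: each respondent answers the sensitive statement with probability P and its complement with probability 1 − P, so a "yes" no longer reveals the respondent's true status, yet the population proportion remains estimable.

```python
# Warner (1965) randomized response: simulation and the unbiased estimator of pi.
import numpy as np

def warner_estimate(yes_proportion, P):
    """Unbiased estimate of the sensitive-attribute proportion (requires P != 0.5)."""
    return (yes_proportion - (1 - P)) / (2 * P - 1)

rng = np.random.default_rng(7)
n, P, pi_true = 5000, 0.7, 0.2
truth = rng.random(n) < pi_true                   # who actually carries the attribute
asked_sensitive = rng.random(n) < P               # outcome of each respondent's randomizer
yes = np.where(asked_sensitive, truth, ~truth)    # answer to whichever statement was drawn
print(warner_estimate(yes.mean(), P))             # close to 0.2, without observing `truth`
```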

19.
Different methodologies for fault diagnosis in multivariate quality control have been proposed in recent years. These methods work in the space of the original measured variables and have performed reasonably well when there is a reduced number of mildly correlated quality and/or process variables with a well-conditioned covariance matrix. These approaches have been introduced by emphasizing their merits or drawbacks, generally on an individual basis, so it is not clear to the practitioner which method is best to use. This article provides a comprehensive study of the performance of diverse methodological approaches when tested on a large number of distinct simulated scenarios. Our primary aim is to highlight key weaknesses and strengths in these methods, as well as to clarify their relationships and the requirements for their implementation in practice.

20.
Generalized method of moments (GMM) estimation has become an important unifying framework for inference in econometrics in the last 20 years. It can be thought of as encompassing almost all of the common estimation methods, such as maximum likelihood, ordinary least squares, instrumental variables, and two-stage least squares, and nowadays is an important part of all advanced econometrics textbooks. The GMM approach links nicely to economic theory where orthogonality conditions that can serve as such moment functions often arise from optimizing behavior of agents. Much work has been done on these methods since the seminal article by Hansen, and much remains in progress. This article discusses some of the developments since Hansen's original work. In particular, it focuses on some of the recent work on empirical likelihood–type estimators, which circumvent the need for a first step in which the optimal weight matrix is estimated and have attractive information theoretic interpretations.
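The "first step in which the optimal weight matrix is estimated" refers to the standard two-step GMM recipe, sketched below for a linear instrumental-variables model with moment conditions E[z_i (y_i − x_i′β)] = 0; this is the textbook estimator, included only to make the two-step structure concrete, and all data are simulated.

```python
# Two-step GMM for linear IV: first-step (2SLS) weight, then the optimal weight from residuals.
import numpy as np

def gmm_iv(y, X, Z):
    """Two-step GMM estimate of beta in y = X beta + u with instrument matrix Z."""
    def gmm(W):
        A = X.T @ Z @ W @ Z.T
        return np.linalg.solve(A @ X, A @ y)
    b1 = gmm(np.linalg.inv(Z.T @ Z))                       # step 1: 2SLS weight matrix
    u = y - X @ b1
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / len(y)     # estimate of E[u^2 z z']
    return gmm(np.linalg.inv(S))                           # step 2: optimal weight matrix

rng = np.random.default_rng(8)
n = 2000
z = rng.normal(size=(n, 2))                                # two instruments
v = rng.normal(size=n)
x = z @ np.array([1.0, -0.5]) + v                          # regressor correlated with the error via v
y = 1.5 * x + v + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(gmm_iv(y, X, Z))                                     # approximately [0, 1.5]
```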
