1.
2.
3.
4.
Long memory versus structural breaks: An overview
We discuss the growing literature on the misspecification of structural breaks, or more general trends, as long-range dependence. We consider tests for structural breaks in the long-memory regression model, as well as the behaviour of estimators of the memory parameter when structural breaks or trends are present in the data but long memory is not. Methods for distinguishing between the two phenomena are proposed.
The financial support of Volkswagenstiftung is gratefully acknowledged.
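Not the paper's procedure, but a minimal numerical illustration of the phenomenon it surveys, assuming the standard log-periodogram (GPH) estimator of the memory parameter d: a short-memory series with a single mean shift yields a spuriously positive estimate of d. The bandwidth choice m = sqrt(n) and all names are illustrative.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                     # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n   # first m Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)      # periodogram ordinates
    X = np.log(4 * np.sin(lam / 2) ** 2)        # GPH regressor
    slope = np.polyfit(X, np.log(I), 1)[0]      # OLS slope estimates -d
    return -slope

rng = np.random.default_rng(0)
n = 1000
iid = rng.standard_normal(n)                    # short memory, no break
shift = np.r_[np.zeros(n // 2), 0.8 * np.ones(n // 2)]
print(gph_estimate(iid))          # close to 0
print(gph_estimate(iid + shift))  # clearly positive: spurious long memory
```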
5.
An overview of risk-adjusted charts
O. Grigg, V. Farewell, Journal of the Royal Statistical Society: Series A (Statistics in Society), 2004, 167(3): 523-539
Summary. The paper provides an overview of risk-adjusted charts, with examples based on two data sets: the first consisting of outcomes following cardiac surgery and patient factors contributing to the Parsonnet score; the second being age–sex-adjusted death-rates per year under a single general practitioner. Charts presented include the cumulative sum (CUSUM) chart, the resetting sequential probability ratio test, the sets method and the Shewhart chart. Comparisons between the charts are made. Estimation of the process parameter and two-sided charts are also discussed. The CUSUM is found to be the least efficient, under the average run length (ARL) criterion, of the resetting sequential probability ratio test class of charts, but the ARL criterion is thought not to be sensible for comparisons within that class. An empirical comparison of the sets method and CUSUM, for binary data, shows that the sets method is more efficient when the in-control ARL is small and more efficient for a slightly larger range of in-control ARLs when the change in parameter being tested for is larger. The Shewhart p-chart is found to be less efficient than the CUSUM even when the change in parameter being tested for is large.
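A minimal sketch of one chart from this family, assuming the standard log-likelihood-ratio form of the risk-adjusted CUSUM (as in Steiner et al., 2000): each patient contributes a weight that depends on the predicted risk p_i, testing the in-control odds against odds inflated by a factor `odds_ratio`. The threshold h = 4.5 and odds_ratio = 2 are illustrative choices, not values from the paper.

```python
import numpy as np

def risk_adjusted_cusum(y, p, odds_ratio=2.0, h=4.5):
    """Upper risk-adjusted CUSUM for binary outcomes (0 = survive, 1 = die).

    y : sequence of outcomes, in order of surgery
    p : patient-specific predicted risks, e.g. from a Parsonnet-score model
    Tests in-control odds against odds inflated by `odds_ratio`.
    """
    s, path, signals = 0.0, [], []
    for i, (yi, pi) in enumerate(zip(y, p)):
        denom = 1.0 - pi + odds_ratio * pi
        # log-likelihood-ratio weight for this patient
        w = np.log(odds_ratio / denom) if yi else -np.log(denom)
        s = max(0.0, s + w)
        path.append(s)
        if s >= h:
            signals.append(i)   # chart signals; reset and keep monitoring
            s = 0.0
    return np.array(path), signals
```

Under this weighting, a death among low-risk patients moves the chart up far more than a death among high-risk patients, which is the point of risk adjustment.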
6.
Journal of Statistical Computation and Simulation, 2012, 82(1): 79-100
The proper combination of parametric and nonparametric regression procedures can improve upon the shortcomings of each when used individually. Considered is the situation where the researcher has an idea of which parametric model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two other methods of model-robust regression are developed and compared in this context. The model-robust procedures each involve the proportional mixing of a parametric fit to the data and a nonparametric fit to either the data or residuals. The emphasis of this work is on fitting in the small-sample situation, where nonparametric regression alone has well-known inadequacies. Performance is based on bias and variance considerations, and theoretical mean squared error formulas are developed for each procedure. An example is given that uses generated data from an underlying model with defined misspecification to provide graphical comparisons of the fits and to show the theoretical benefits of the model-robust procedures. Simulation results are presented which establish the accuracy of the theoretical formulas and illustrate the potential benefits of the model-robust procedures. Simulations are also used to illustrate the advantageous properties of a data-driven selector developed in this work for choosing the smoothing and mixing parameters. It is seen that the model-robust procedures (the final proposed method, in particular) give much improved fits over the individual parametric and nonparametric fits.
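A minimal sketch of the residual-based mixing idea described above, assuming a Nadaraya-Watson smoother for the nonparametric component: the final fit is the parametric fit plus a proportion lam of a kernel fit to its residuals. The bandwidth, lam = 0.5 and the simulated model are illustrative; the paper's data-driven selector for the smoothing and mixing parameters is not reproduced.

```python
import numpy as np

def nw_smooth(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def model_robust_fit(x, y, x_eval, bandwidth=0.1, lam=0.5):
    """Parametric fit plus a proportion lam of a kernel fit to its residuals."""
    beta = np.polyfit(x, y, 1)            # deliberately misspecified: a line
    resid = y - np.polyval(beta, x)
    return np.polyval(beta, x_eval) + lam * nw_smooth(x, resid, x_eval, bandwidth)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 40))        # small sample, the paper's focus
y = 2 * x + 0.5 * np.sin(4 * np.pi * x) + rng.normal(0, 0.1, 40)
grid = np.linspace(0, 1, 200)
fit = model_robust_fit(x, y, grid)        # lam = 0: parametric; lam = 1: fully mixed
```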
7.
An overview of the design of statistical experiments is presented, with special emphasis on response surface designs, block designs and neighbor designs. Applications are mentioned in industrial quality improvement, agricultural experimentation and biometry. An outlook towards design optimality concludes the survey.
8.
This article consists of a review and some remarks on the scope, common models, methods, their limitations and implications for the analysis of lifetime data. A new approach based upon data transformations analogous to those of Box and Cox (1964) is also introduced. The basic methods and theory of the subject are most commonly encountered by the statistical community in the context of problems in reliability studies and survival analysis. However, they are also useful in areas of statistical application such as goodness-of-fit and approximations for sampling distributions, and are applicable in such diverse fields of applied research as economics, finance, sociology, meteorology and hydrology. The discussion includes examples from the mainstream statistical, social sciences and business literature.
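For reference, the transformation family that the proposed approach is analogous to is the Box-Cox (1964) power family, defined for positive responses by

```latex
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\[4pt]
\log y, & \lambda = 0,
\end{cases}
\qquad y > 0,
```

with the index lambda typically estimated by maximum likelihood; the article adapts this idea to lifetime data.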
9.
John D. Spurrier, Communications in Statistics - Theory and Methods, 2013, 42(13): 1635-1654
Developments since 1960 in goodness-of-fit tests for the one- and two-parameter exponential models, using both complete and censored samples, are reviewed. Special attention is given both to the omnibus or general alternative and to specialized alternatives such as the class of distributions with increasing failure rates. The use of transformations in developing tests is also discussed.
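Not one of the reviewed tests specifically, but a minimal omnibus check of the one-parameter exponential model in the spirit of the review, assuming a complete sample: a Kolmogorov-Smirnov distance to the fitted exponential, calibrated by parametric bootstrap because the rate is estimated. The statistic is scale-invariant, so the null distribution can be simulated from Exp(1).

```python
import numpy as np

def ks_stat_exponential(x):
    """KS distance between the sample and the fitted Exp(mean = xbar)."""
    x = np.sort(x)
    n = len(x)
    cdf = 1.0 - np.exp(-x / x.mean())          # fitted exponential CDF
    up = np.arange(1, n + 1) / n - cdf
    down = cdf - np.arange(0, n) / n
    return max(up.max(), down.max())

def exponentiality_pvalue(x, n_boot=2000, seed=0):
    """Parametric-bootstrap p-value; standard KS tables do not apply
    because the rate is estimated from the data."""
    rng = np.random.default_rng(seed)
    d_obs = ks_stat_exponential(np.asarray(x, dtype=float))
    n = len(x)
    d_null = np.array([ks_stat_exponential(rng.exponential(1.0, n))
                       for _ in range(n_boot)])
    return d_obs, (d_null >= d_obs).mean()
```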
10.
11.
12.
13.
14.
Kenneth H. Pollock, Journal of Applied Statistics, 2002, 29(1-4): 85-102
I review the use of auxiliary variables in capture-recapture models for estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers). I focus on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates that may vary by occasion but are constant over animals, and individual animal covariates that are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, and that may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full likelihood approach to inference to estimate the capture function, the covariate probability distribution, and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates, and birth numbers. Much of the focus on modelling covariates in program MARK has been for survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models). These models condition on the number of animals marked and released. A related, but distinct, topic is radio telemetry survival modelling that typically uses a modified Kaplan-Meier method and Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integration of recruitment in the likelihood, and research on how to implement covariate modelling for recruitment and perhaps population size is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling and some important options have already been implemented into MARK. Many models are usually fitted to one data set. This has necessitated development of model selection criteria based on the AIC (Akaike information criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model and then adjusting for over-dispersion in model selection could benefit from further research.
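A minimal sketch of the Huggins-Alho approach described above, assuming a per-occasion capture probability that is logistic in a single time-constant individual covariate: fit the likelihood conditioned on capture at least once, then estimate N with the generalized Horvitz-Thompson sum. Function names and the constant-over-time assumption are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def huggins_alho_N(capture_history, z):
    """Conditional-likelihood (Huggins-Alho) estimate of population size.

    capture_history : (n_caught, T) 0/1 matrix for animals seen at least once
    z               : (n_caught,) individual covariate (e.g. body mass)
    Capture probability is logistic in z and constant over the T occasions.
    """
    T = capture_history.shape[1]
    caps = capture_history.sum(axis=1)

    def neg_cond_loglik(beta):
        p = 1 / (1 + np.exp(-(beta[0] + beta[1] * z)))  # per-occasion capture prob
        p_seen = 1 - (1 - p) ** T                       # P(captured at least once)
        # binomial likelihood conditioned on being captured at least once
        return -np.sum(caps * np.log(p) + (T - caps) * np.log(1 - p)
                       - np.log(p_seen))

    beta = minimize(neg_cond_loglik, x0=np.zeros(2), method="BFGS").x
    p = 1 / (1 + np.exp(-(beta[0] + beta[1] * z)))
    p_seen = 1 - (1 - p) ** T
    return np.sum(1 / p_seen)      # generalized Horvitz-Thompson estimator
```

Conditioning on capture is what makes a covariate distribution unnecessary, at the cost of not using the full likelihood, exactly the trade-off the abstract notes.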
15.
16.
17.
Kenneth H. Pollock, Michael J. Conroy, William S. Hearn, Journal of Applied Statistics, 1995, 22(5-6): 557-566
Ring-recovery methodology has been widely used to estimate survival rates in multi-year ringing studies of wildlife and fish populations (Youngs & Robson, 1975; Brownie et al., 1985). The Brownie et al. (1985) methodology is often used, but its formulation does not account for the fact that rings may be returned in two ways. Sometimes hunters are solicited by a wildlife management officer or scientist and asked if they shot any ringed birds. Alternatively, a hunter may voluntarily report the ring to the Bird Banding Laboratory (US Fish and Wildlife Service, Laurel, MD, USA), as is requested on the ring. Because the Brownie et al. (1985) models only consider reported rings, Conroy (1985) and Conroy et al. (1989) generalized their models to permit solicited rings. Pollock et al. (1991) considered a very similar model for fish tagging studies, which might be combined with angler surveys. Pollock et al. (1994) showed how to apply their generalized formulation, with some modification to allow for crippling losses, to wildlife ringing studies. Provided an estimate of ring reporting rate is available, separation of hunting and natural mortality estimates is possible, which provides important management information. Here we review this material and then discuss possible methods of estimating reporting rate, which include: (1) reward ring studies; (2) use of planted rings; (3) hunter surveys; and (4) pre- and post-hunting season ringings. We compare and contrast the four methods in terms of their model assumptions and practicality. We also discuss the estimation of crippling loss using pre- and post-season ringing in combination with a reward ringing study to estimate reporting rate.
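As a worked illustration of method (1), assume reward rings are reported with certainty. With n_s standard and n_r reward rings released, and r_s and r_r of them recovered, the reporting rate of standard rings is estimated by the ratio of the two recovery fractions:

```latex
\hat{\lambda} \;=\; \frac{r_s / n_s}{r_r / n_r}.
```

For hypothetical releases of n_s = 2000 standard rings with r_s = 80 recoveries and n_r = 500 reward rings with r_r = 50 recoveries, this gives lambda-hat = 0.04 / 0.10 = 0.40; an estimate of this kind is what permits the separation of hunting and natural mortality noted above.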
18.
19.
Summary. The need to evaluate the performance of active labour market policies is no longer questioned. Even though OECD countries spend significant shares of national resources on these measures, unemployment rates remain high or even increase. We focus on microeconometric evaluation, which has to solve the fundamental evaluation problem and overcome the possible occurrence of selection bias. When non-experimental data are used, several evaluation approaches are available. The aim of this paper is to review the most relevant estimators and to discuss their identifying assumptions, advantages and disadvantages. We present estimators based on some form of exogeneity (selection on observables) as well as estimators that allow selection on unobservable characteristics. Since the possible occurrence of effect heterogeneity has become a major topic in evaluation research in recent years, we also assess the ability of each estimator to deal with it. Additionally, we discuss some recent extensions of the static evaluation framework that allow for dynamic treatment evaluation.
The authors thank Stephan L. Thomsen, Christopher Zeiss and one anonymous referee for valuable comments. The usual disclaimer applies.
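A minimal sketch of one estimator from the selection-on-observables class discussed above: nearest-neighbour matching on an estimated propensity score for the average treatment effect on the treated (ATT). The logistic propensity model, the gradient-ascent fit and the simulated data are illustrative assumptions, not the paper's application.

```python
import numpy as np

def logistic_propensity(X, d, n_iter=500, lr=0.1):
    """Propensity scores from a logistic model fitted by gradient ascent."""
    Xb = np.c_[np.ones(len(X)), X]
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (d - p) / len(d)   # score of the logistic likelihood
    return 1 / (1 + np.exp(-Xb @ beta))

def att_matching(y, d, X):
    """ATT by single nearest-neighbour matching on the propensity score."""
    ps = logistic_propensity(X, d)
    treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
    effects = [y[i] - y[controls[np.argmin(np.abs(ps[controls] - ps[i]))]]
               for i in treated]
    return np.mean(effects)

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=(n, 2))
# treatment assignment depends only on observables: selection on observables
d = (rng.uniform(size=n) < 1 / (1 + np.exp(-x[:, 0]))).astype(float)
y = 1.0 * d + x @ np.array([0.5, -0.3]) + rng.normal(size=n)  # true ATT = 1
print(att_matching(y, d, x))
```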
20.
For fractional factorial (FF) designs, Zhang et al. (2008) introduced a new pattern for assessing regular designs, called the aliased effect-number pattern (AENP), and, based on the AENP, proposed a general minimum lower-order confounding (GMC for short) criterion for selecting designs. In this paper, we first give an overview of the existing optimality criteria for FF designs, and then propose a construction theory for 2^(n−m) GMC designs with 33N/128 ≤ n ≤ 5N/16, where N = 2^(n−m) is the run size and n is the number of factors, for all such N and n, via the doubling theory and SOS resolution IV designs. The doubling theory is extended with a new approach. By introducing a notion of rechanged (RC) Yates order for the regular saturated design, the construction result turns out to be quite transparent: every GMC 2^(n−m) design simply consists of the last n columns of the saturated design with a specific RC Yates order. This can be very conveniently applied in practice.
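A minimal sketch of the doubling step used in the construction, assuming the standard definition: the double of an N x n two-level design X (entries +1/−1) is the 2N x 2n design ((X, X), (X, −X)). The specific rechanged (RC) Yates order of the saturated design and the SOS resolution IV initial designs required by the paper's theory are not reproduced here.

```python
import numpy as np

def double(X):
    """Double of a two-level design X (entries +1/-1):
    the 2N x 2n design [[X, X], [X, -X]]."""
    return np.vstack([np.hstack([X, X]), np.hstack([X, -X])])

# Start from the 2^3 full factorial written as a +1/-1 matrix
full_factorial = np.array([[1, 1, 1],
                           [1, 1, -1],
                           [1, -1, 1],
                           [1, -1, -1],
                           [-1, 1, 1],
                           [-1, 1, -1],
                           [-1, -1, 1],
                           [-1, -1, -1]])
D = double(full_factorial)   # doubles both the run size and the column count
print(D.shape)               # (16, 6)
```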