Similar Articles
20 similar articles were found.
1.
The present work proposes a new integer-valued autoregressive model with Poisson marginal distribution, based on mixing the Pegram and dependent Bernoulli thinning operators. Properties of the model are discussed. We consider several methods for estimating the unknown parameters of the model, and both classical and Bayesian approaches are used for forecasting. Simulations are performed to assess the performance of these estimators and forecasting methods. Finally, analyses of two real data sets are presented for illustrative purposes.
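[Editor's sketch] The paper's mixed Pegram/dependent-thinning construction is not reproduced above, so the sketch below shows only the classical baseline it generalizes: the binomial-thinning INAR(1) process with a Poisson marginal. The function name and parameter values are illustrative, not from the paper.

```python
import numpy as np

def simulate_poisson_inar1(n, alpha, lam, seed=0):
    """Simulate the classical INAR(1) process X_t = alpha o X_{t-1} + e_t,
    where 'o' is binomial thinning and e_t ~ Poisson(lam * (1 - alpha)),
    which yields a Poisson(lam) marginal distribution."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam)
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # thinning: each count survives w.p. alpha
        x[t] = survivors + rng.poisson(lam * (1 - alpha))
    return x

x = simulate_poisson_inar1(20000, alpha=0.5, lam=3.0, seed=42)
```

For a Poisson(3) marginal, the simulated mean and variance should both be close to 3, which is a quick check that the innovation rate lam * (1 - alpha) is the right one.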

2.
Most long memory estimators for stationary fractionally integrated time series models are known to suffer non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a lag-one sample autocorrelation-based moment estimator. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag-one sample autocorrelation up to order 1/n. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the northern hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
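[Editor's sketch] The exact order-1/n bias expression derived in the paper is not reproduced above. The sketch below only illustrates the phenomenon that motivates it: the lag-one sample autocorrelation is noticeably downward biased in small samples. An AR(1) process stands in for the paper's fractionally integrated series, so the numbers are illustrative only.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-one sample autocorrelation, the ingredient of the moment estimator."""
    xc = x - x.mean()
    return (xc[:-1] * xc[1:]).sum() / (xc ** 2).sum()

# Monte Carlo: true lag-one autocorrelation is phi = 0.6, sample size n = 30.
rng = np.random.default_rng(1)
n, phi, reps = 30, 0.6, 4000
estimates = []
for _ in range(reps):
    e = rng.normal(size=n + 100)          # 100-observation burn-in
    x = np.empty(n + 100)
    x[0] = e[0]
    for t in range(1, n + 100):
        x[t] = phi * x[t - 1] + e[t]
    estimates.append(lag1_autocorr(x[100:]))
bias = float(np.mean(estimates)) - phi     # clearly negative in small samples
```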

3.
We consider the first-order Poisson autoregressive model proposed by McKenzie [Some simple models for discrete variate time series. Water Resour Bull. 1985;21:645–650] and Al-Osh and Alzaid [First-order integer valued autoregressive (INAR(1)) process. J Time Ser Anal. 1987;8:261–275], which may be suitable in situations where the time series data are non-negative and integer valued. We derive the second-order bias of the squared difference estimator [Weiß. Process capability analysis for serially dependent processes of Poisson counts. J Stat Comput Simul. 2012;82:383–404] for one of the parameters and show that this bias can be used to define a bias-reduced estimator. The behaviour of a modified conditional least-squares estimator is also studied. Furthermore, we assess the asymptotic properties of the estimators discussed here. We present numerical evidence, based upon Monte Carlo simulation studies, showing that the proposed bias-adjusted estimator outperforms the other estimators in small samples. We also present an application to a real data set.

4.
In this paper we work with multivariate time series that follow a factor model. In particular, we consider the setting where the factors are dominated by highly persistent autoregressive (AR) processes and the samples are rather small. The factors' AR models are therefore estimated using small-sample bias correction techniques. A Monte Carlo study reveals that bias-correcting the AR coefficients of the factors yields better prediction interval coverage. As expected, the simulation shows that bias correction is more successful for smaller samples. We present results for both known and unknown AR order and number of factors. We also study the advantages of this technique for a set of Industrial Production Indexes of several European countries.
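[Editor's sketch] The paper does not spell out which small-sample correction it applies; a standard choice in this literature is the Kendall / Marriott–Pope first-order correction for the AR(1) coefficient, sketched below. Treat the formula as one common option, not necessarily the paper's.

```python
def kendall_ar1_correction(phi_hat, n):
    """First-order small-sample bias correction for the AR(1) coefficient
    estimated with an intercept: E[phi_hat] ~ phi - (1 + 3*phi) / n, so the
    corrected estimate adds the estimated bias back."""
    return phi_hat + (1 + 3 * phi_hat) / n

# A highly persistent factor estimated from a small sample (illustrative numbers):
corrected = kendall_ar1_correction(0.85, 50)   # 0.85 + 3.55/50 = 0.921
```

Note that the correction is largest exactly in the persistent, small-n setting the abstract describes: here it moves the estimate by about 0.07.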

5.
Autoregressive models are widely employed for prediction and other inferences in many scientific fields. While determining their order is in general a difficult and critical step, this task becomes more complicated and crucial when the time series under investigation is a realization of a stochastic process characterized by sparsity. In this paper we present a method for order determination of a stationary AR model with a sparse structure, given a set of observations, based upon a bootstrapped version of the MAICE procedure [Akaike H. Prediction and entropy. Springer; 1998], in conjunction with a LASSO-type constraining procedure for the suppression of insignificant lags. Empirical results are obtained via Monte Carlo simulations. The quality of our method is assessed by comparison with the commonly adopted cross-validation approach and with the non-bootstrap counterpart of the presented procedure.
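[Editor's sketch] The bootstrap and LASSO layers of the proposal are omitted here; the sketch below shows only the plain MAICE step it builds on: minimize the AIC of OLS-fitted AR(p) models over candidate orders. All names and the toy AR(2) data are illustrative.

```python
import numpy as np

def ar_aic(x, p, pmax=5):
    """Gaussian AIC of an OLS-fitted AR(p), conditioning on the first pmax
    observations so that all candidate orders use the same effective sample."""
    y = x[pmax:]
    n = len(y)
    cols = [np.ones(n)] + [x[pmax - j: len(x) - j] for j in range(1, p + 1)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * (p + 1)   # fit term + penalty

rng = np.random.default_rng(7)
e = rng.normal(size=600)
x = np.zeros(600)
for t in range(2, 600):                         # true model: AR(2)
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
x = x[100:]                                     # drop burn-in
order = min(range(6), key=lambda p: ar_aic(x, p))   # MAICE order choice
```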

6.
7.
We examine bias corrections which have been proposed for the fixed effects panel probit model with exogenous regressors, using several different data generating processes to evaluate the performance of the estimators in different situations. We find a best estimator across all cases for coefficient estimates, but when the marginal effects are the quantity of interest no analytical correction is able to outperform the uncorrected maximum-likelihood estimator.

8.
We propose testing procedures for the hypothesis that a given set of discrete observations may be formulated as a particular time series of counts with a specific conditional law. The new test statistics incorporate the empirical probability-generating function computed from the observations. Special emphasis is given to the popular models of integer autoregression and Poisson autoregression. The asymptotic properties of the proposed test statistics are studied under the null hypothesis as well as under alternatives. A Monte Carlo power study on bootstrap versions of the new methods is included as well as real-data examples.
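[Editor's sketch] The empirical probability-generating function at the heart of these test statistics is simple to compute; the sketch below shows it and checks it against the known Poisson PGF g(u) = exp(lam*(u-1)). The test statistics themselves, which contrast this with a model-implied PGF, are not reproduced.

```python
import numpy as np

def empirical_pgf(x, u):
    """Empirical probability-generating function g_n(u) = (1/n) * sum_t u^{X_t},
    evaluated at a scalar or an array of u values."""
    x = np.asarray(x)
    return np.mean(np.asarray(u, dtype=float)[..., None] ** x, axis=-1)

rng = np.random.default_rng(2)
x = rng.poisson(2.0, size=50000)
g_hat = empirical_pgf(x, 0.5)   # Poisson(2) PGF at u=0.5 is exp(2*(0.5-1)) = exp(-1)
```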

9.
10.
This paper discusses a method for constructing prediction intervals for time series models with trend using the sieve bootstrap procedure. A Gasser–Müller-type kernel estimator is used for trend estimation and prediction. A boundary modification of the kernel is applied to control the edge effect and to construct the trend predictor.
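[Editor's sketch] The kernel trend estimation is omitted here; the sketch below shows only a drastically simplified sieve bootstrap for a one-step-ahead prediction interval, with the sieve fixed at AR(1) instead of an order that grows with the sample. All names and tuning values are illustrative.

```python
import numpy as np

def ar1_sieve_bootstrap_pi(x, B=500, level=0.90, seed=0):
    """One-step-ahead percentile prediction interval: fit AR(1), resample the
    centred residuals, regenerate the series, refit, and collect bootstrap
    one-step predictions (future parameter and innovation uncertainty)."""
    rng = np.random.default_rng(seed)
    mu = x.mean()
    xc = x - mu
    phi = (xc[:-1] * xc[1:]).sum() / (xc[:-1] ** 2).sum()
    res = xc[1:] - phi * xc[:-1]
    res = res - res.mean()
    preds = np.empty(B)
    for b in range(B):
        e = rng.choice(res, size=len(x), replace=True)
        xb = np.empty(len(x))
        xb[0] = xc[0]
        for t in range(1, len(x)):
            xb[t] = phi * xb[t - 1] + e[t]
        phib = (xb[:-1] * xb[1:]).sum() / (xb[:-1] ** 2).sum()
        resb = xb[1:] - phib * xb[:-1]
        preds[b] = mu + phib * xc[-1] + rng.choice(resb - resb.mean())
    return np.quantile(preds, [(1 - level) / 2, (1 + level) / 2])

rng = np.random.default_rng(3)
e = rng.normal(size=400)
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.7 * x[t - 1] + e[t]
lo, hi = ar1_sieve_bootstrap_pi(x[100:])
```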

11.
Maximum likelihood estimates (MLEs) of logistic regression coefficients are known to be biased in finite samples and consequently may produce misleading inferences. Bias-adjusted estimates can be calculated using the first-order asymptotic bias derived from a Taylor series expansion of the log likelihood. Jackknifing can also be used to obtain bias-corrected estimates, but the approach is computationally intensive, requiring an additional series of iterations (steps) for each observation in the dataset. Although the one-step jackknife has been shown to be useful in logistic regression diagnostics and in the estimation of classification error rates, it does not effectively reduce bias. The two-step jackknife, however, can reduce computation in moderate-sized samples, provide estimates of dispersion and classification error, and appears to be effective in bias reduction. Another alternative, a two-step closed-form approximation, is found to be similar to the Taylor series method in certain circumstances. Monte Carlo simulations indicate that all the procedures, but particularly the multi-step jackknife, may tend to over-correct in very small samples. Comparison of the various bias correction procedures in an example from the medical literature illustrates that bias correction can have a considerable impact on inference.
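[Editor's sketch] A minimal version of the fully iterated (multi-step) jackknife correction the abstract contrasts with its cheaper one- and two-step variants: refit the MLE once per left-out observation and combine. The simulated data and all names are illustrative.

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Newton-Raphson maximum likelihood for logistic regression
    (X should already contain an intercept column)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ b, -30, 30)          # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        H = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b

def jackknife_corrected(X, y):
    """Leave-one-out jackknife bias correction:
    b_jack = n * b_full - (n - 1) * mean of the leave-one-out fits."""
    n = len(y)
    b_full = logit_mle(X, y)
    loo = np.array([logit_mle(np.delete(X, i, axis=0), np.delete(y, i))
                    for i in range(n)])
    return n * b_full - (n - 1) * loo.mean(axis=0)

rng = np.random.default_rng(4)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
p_true = 1.0 / (1.0 + np.exp(-X[:, 1]))        # true intercept 0, slope 1
y = (rng.uniform(size=n) < p_true).astype(float)
b_jack = jackknife_corrected(X, y)
```

The n + 1 full Newton fits are exactly the computational burden the abstract's two-step shortcut is designed to avoid.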

12.
In this paper, we propose several tests for monotonic trend based on Brillinger's test statistic (1989, Biometrika, 76, 23–30). When there are highly correlated residuals or short record lengths, Brillinger's test procedure tends to have a significance level much higher than the nominal one. We find that this can be related to the discrepancy between the empirical distribution of the test statistic and the asymptotic normal distribution. Hence, we propose three bootstrap-based procedures built on Brillinger's test statistic to test for monotonic trend. The performance of the proposed procedures is evaluated through an extensive Monte Carlo simulation study and compared to other trend tests in the literature. It is shown that the proposed bootstrap-based Brillinger test procedures control the significance levels well and provide satisfactory power in testing for a monotonic trend under different scenarios.

13.
This article examines a semiparametric test for checking the constancy of serial dependence via copula models for Markov time series. A semiparametric score test is proposed for testing the constancy of the copula parameter against a stochastically varying copula parameter. The asymptotic null distribution of the test is established. A semiparametric bootstrap procedure is employed to estimate the variance of the proposed score test. Illustrations are given based on simulated series and historical interest rate data.

14.
The importance of the dispersion parameter for counts occurring in toxicology, biology, clinical medicine, epidemiology, and other similar studies is well known. A few procedures for constructing confidence intervals (CIs) for the dispersion parameter have been investigated, but little attention has been paid to the accuracy of these CIs. In this paper, we introduce the profile likelihood (PL) approach and the hybrid profile variance (HPV) approach for constructing CIs for the dispersion parameter of counts based on the negative binomial model. The non-parametric bootstrap (NPB) approach based on the maximum likelihood (ML) estimates of the dispersion parameter is also considered. We then compare our proposed approaches with an asymptotic approach based on the ML and restricted ML (REML) estimates of the dispersion parameter, as well as with the parametric bootstrap (PB) approach based on the ML estimates. As assessed by Monte Carlo simulations, the PL approach has the best small-sample performance, followed by the REML, HPV, NPB, and PB approaches. Three examples with biological count data are presented.
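[Editor's sketch] A minimal non-parametric bootstrap CI for the negative binomial dispersion, in the spirit of the NPB approach above but using the simple method-of-moments estimator rather than the ML estimator the paper specifies. Parameterization: Var(X) = mu + mu^2/k, so larger k means less overdispersion; all numbers are illustrative.

```python
import numpy as np

def moment_dispersion(x):
    """Method-of-moments estimate of the NB dispersion k:
    Var(X) = mu + mu^2 / k  =>  k_hat = mean^2 / (var - mean)."""
    m, v = x.mean(), x.var(ddof=1)
    return m * m / (v - m)

def npb_ci(x, B=2000, level=0.95, seed=0):
    """Non-parametric bootstrap percentile CI for the dispersion parameter."""
    rng = np.random.default_rng(seed)
    boot = np.array([moment_dispersion(rng.choice(x, size=len(x), replace=True))
                     for _ in range(B)])
    return np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])

rng = np.random.default_rng(9)
k, mu = 2.0, 4.0
# NB(mu=4, k=2) counts via the gamma-Poisson mixture representation
x = rng.poisson(rng.gamma(shape=k, scale=mu / k, size=500))
lo, hi = npb_ci(x)
```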

15.
In drug development, treatments are most often selected at Phase 2 for further development when an initial trial of a new treatment produces a result that is considered positive. This selection due to a positive result means, however, that an estimator of the treatment effect that does not take account of the selection is likely to over-estimate the true treatment effect (i.e., will be biased). This bias can be large, and researchers may face a disappointingly lower estimated treatment effect in further trials. In this paper, we review a number of methods that have been proposed to correct for this bias and introduce three new methods. We present results from applying the various methods to two examples and consider extensions of the examples. We assess the methods with respect to bias in estimation of the treatment effect and compare the probabilities that a bias-corrected treatment effect estimate will exceed a decision threshold. Following previous work, we also compare average power for the situation where a Phase 3 trial is launched given that the bias-corrected observed Phase 2 treatment effect exceeds a launch threshold. Finally, we discuss our findings and the potential application of the bias correction methods.
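[Editor's sketch] The selection bias the abstract describes is easy to make concrete by simulation: condition a normally distributed effect estimate on exceeding a "positive" threshold and compare the conditional mean to the true effect. All numbers (true effect, standard error, threshold) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, se, threshold = 0.20, 0.10, 0.25   # hypothetical true effect, SE, Phase 2 threshold

# Phase 2 estimates across many hypothetical trials; only those above the
# threshold would be taken forward, and their average exceeds theta.
estimates = rng.normal(theta, se, size=200000)
selected = estimates[estimates > threshold]
selection_bias = selected.mean() - theta
```

Here only about 31% of trials clear the threshold, yet among those that do the expected estimate overshoots the true effect by roughly half a standard error, which is exactly the over-estimation the correction methods target.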

16.
In this paper, we consider James–Stein shrinkage and pretest estimation methods for time series following generalized linear models, when it is conjectured that some of the regression parameters may be restricted to a subspace. Efficient estimation strategies are developed for the case where there are many covariates in the model and some of them are not statistically significant. Statistical properties of the pretest and shrinkage estimation methods, including asymptotic distributional bias and risk, are derived. We investigate the relative performance of the shrinkage and pretest estimators with respect to the unrestricted maximum partial likelihood estimator (MPLE). We show that the shrinkage estimators have lower relative mean squared error than the unrestricted MPLE when the number of significant covariates exceeds two. Monte Carlo simulation experiments are conducted for different combinations of inactive covariates, and the performance of each estimator is evaluated in terms of its mean squared error. The practical benefits of the proposed methods are illustrated using two real data sets.
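[Editor's sketch] The paper's shrinkage is toward a restricted subspace in a partial-likelihood setting; the sketch below shows only the canonical normal-means James–Stein estimator it descends from, including the "exceeds two" dimension condition (here p = 10, shrinking toward the origin). Illustrative only.

```python
import numpy as np

def james_stein(z, sigma2=1.0):
    """Positive-part James-Stein shrinkage of a p-vector of independent
    normal means (requires p >= 3 for risk dominance)."""
    p = len(z)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(z @ z))
    return factor * z

rng = np.random.default_rng(5)
theta = np.zeros(10)              # true means coincide with the shrinkage target
risk_mle = risk_js = 0.0
for _ in range(2000):
    z = rng.normal(theta, 1.0)
    risk_mle += ((z - theta) ** 2).sum()           # unshrunk estimator
    risk_js += ((james_stein(z) - theta) ** 2).sum()
risk_mle /= 2000
risk_js /= 2000                    # far below the MLE's risk of p = 10
```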

17.
Statistical inference for high-dimensional time series data is of increasing interest in various fields such as the social sciences and biology. In this article, we consider the problem of testing the equality of high-dimensional mean vectors in the approximate factor model, which allows for time series dependence among distinct observations and more flexible dependence within observations. We propose a data-adaptive test based on the factor-adjusted data rather than on the directly observed data. By combining tests with different norms, the proposed test adapts to various alternative scenarios and thus overcomes the shortcomings of tests based on either the L2-norm or the L∞-norm alone. A multiplier bootstrap method is utilized to approximate the true underlying distribution of the proposed test statistics. Theoretical analysis shows that the proposed test enjoys desirable properties. In addition, we conduct a thorough numerical study to compare the empirical performance of the proposed test with some state-of-the-art tests. A real stock market data set is analyzed to show the empirical usefulness of the proposed test.

18.
We investigate the behavior of the well-known Hylleberg, Engle, Granger and Yoo (HEGY) regression-based seasonal unit root tests in cases where the driving shocks can display periodic nonstationary volatility and conditional heteroskedasticity. Our setup allows for periodic heteroskedasticity, nonstationary volatility and (seasonal) generalized autoregressive conditional heteroskedasticity as special cases. We show that the limiting null distributions of the HEGY tests depend, in general, on nuisance parameters which derive from the underlying volatility process. Monte Carlo simulations show that the standard HEGY tests can be substantially oversized in the presence of such effects. As a consequence, we propose wild bootstrap implementations of the HEGY tests. Two possible wild bootstrap resampling schemes are discussed, both of which are shown to deliver asymptotically pivotal inference under our general conditions on the shocks. Simulation evidence is presented which suggests that our proposed bootstrap tests perform well in practice, largely correcting the size problems seen with the standard HEGY tests even under extreme patterns of heteroskedasticity, yet losing little finite-sample power relative to the standard HEGY tests.
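[Editor's sketch] The key ingredient that lets the wild bootstrap handle nonstationary volatility is the resampling step itself: each residual is multiplied by an independent random sign, so its magnitude, and hence the volatility pattern, is preserved exactly. A Rademacher scheme is one of the standard choices; the abstract does not say which two schemes the paper uses.

```python
import numpy as np

def wild_bootstrap(residuals, rng):
    """Rademacher wild bootstrap draw: e*_t = w_t * e_t with
    P(w_t = +1) = P(w_t = -1) = 1/2, keeping |e*_t| = |e_t| at every t."""
    w = rng.choice(np.array([-1.0, 1.0]), size=len(residuals))
    return w * residuals

rng = np.random.default_rng(11)
res = rng.normal(size=200) * np.linspace(0.5, 2.0, 200)  # nonstationary volatility
res_star = wild_bootstrap(res, rng)
```

Because only signs change, the bootstrap shocks replicate the sample's heteroskedasticity pattern point by point, which is what makes the resulting inference robust to it.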

19.
Every large census operation should undergo an evaluation program to find the sources and extent of inherent coverage errors. In this article, we briefly discuss the statistical methodology for estimating the omission rate in the Indian census using the dual-system estimation (DSE) technique. We explicitly study the correlation bias factor involved in the estimate, its extent, and its consequences. A new potential source of bias in the estimate is identified and discussed: the enumerators appointed for the survey are more efficient than those in the census operations, and this may inflate the dependency between the two lists and lead to a significant bias. Some examples are given to demonstrate this argument in various plausible situations. We suggest a simple and flexible approach that can control this bias. Our proposed estimator can efficiently overcome the potential bias, achieving the desired degree of accuracy (it is almost unbiased) with relatively high efficiency. Overall improvements in the results are explored through a simulation study on different populations.
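[Editor's sketch] The basic dual-system estimator the abstract builds on, with hypothetical counts. The comment notes the bias mechanism the paper studies: positive dependence between the census and survey lists inflates the matched count and so pulls the estimate down.

```python
def dual_system_estimate(n1, n2, m):
    """Chandrasekar-Deming / Lincoln-Petersen dual-system estimate of the
    total count: N_hat = n1 * n2 / m, where n1 is the census count, n2 the
    post-enumeration survey count, and m the matched count.  The formula
    assumes independent lists; positive dependence inflates m and biases
    N_hat downward, understating the omission rate."""
    return n1 * n2 / m

N_hat = dual_system_estimate(950, 900, 880)   # hypothetical counts: ~972 people
```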

20.
This study considers a goodness-of-fit test for location-scale time series models with heteroscedasticity, including a broad class of generalized autoregressive conditional heteroscedastic-type models. In financial time series analysis, the correct identification of model innovations is crucial for further inferences in diverse applications such as risk management. To implement the goodness-of-fit test, we employ the residual-based entropy test generated from the residual empirical process. Since this test often shows size distortions and is affected by parameter estimation, its bootstrap version is considered. It is shown that the bootstrap entropy test is weakly consistent, and its usage is thereby justified. A simulation study and a data analysis are conducted by way of illustration.
