Similar Articles
1.
Abstract

In this article, we propose a new model for binary time series involving an autoregressive moving average structure. The proposed model, an extension of the GARMA model, can be used to calculate the forecast probability of an event of interest when this probability depends on recent past observations. The model is applied to a real dataset consisting of a series of 0s and 1s indicating the absence or presence of rain in a city in the central region of São Paulo state, Brazil.
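A minimal sketch of the autoregressive flavor of such a binary model is given below: a logistic regression of the 0/1 series on its own lagged values, fitted by conditional maximum likelihood with statsmodels. This is a simplified stand-in, not the authors' GARMA extension; the lag order, coefficients, and simulated data are illustrative.

```python
# Sketch: logistic autoregression for a 0/1 series (simplified relative of binary GARMA-type models).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate a binary series whose success probability depends on its previous two values.
T, p_lags = 500, 2
y = np.zeros(T, dtype=int)
for t in range(p_lags, T):
    eta = -0.5 + 1.2 * y[t - 1] + 0.6 * y[t - 2]          # linear predictor (illustrative values)
    y[t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# Lagged design matrix and conditional MLE via ordinary logistic regression.
X = np.column_stack([y[p_lags - k:T - k] for k in range(1, p_lags + 1)])
X = sm.add_constant(X)
fit = sm.Logit(y[p_lags:], X).fit(disp=0)
print(fit.params)                                          # intercept and lag coefficients

# One-step-ahead forecast probability of the event (y = 1) given the last two observations.
x_new = np.array([1.0, y[-1], y[-2]])
eta_new = float(x_new @ np.asarray(fit.params))
print("P(y_{T+1} = 1 | past) =", 1.0 / (1.0 + np.exp(-eta_new)))
```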

2.
Stylized facts show that average growth rates of U.S. per capita consumption and income differ in recession and expansion periods. Because a linear combination of such series does not have to be a constant mean process, standard cointegration analysis between the variables to examine the permanent income hypothesis may not be valid. To model the changing growth rates in both series, we introduce a multivariate Markov trend model that accounts for different growth rates in consumption and income during expansions and recessions and across variables within both regimes. The deviations from the multivariate Markov trend are modeled by a vector autoregression (VAR) model. Bayes estimates of this model are obtained using Markov chain Monte Carlo methods. The empirical results suggest the existence of a cointegration relation between U.S. per capita disposable income and consumption, after correction for a multivariate Markov trend. This result is also obtained when per capita investment is added to the VAR.

3.
Abstract

Occupancy models are used in statistical ecology to estimate species dispersion. The two components of an occupancy model are the detection and occupancy probabilities, with the main interest being in the occupancy probabilities. We show that for the homogeneous occupancy model there is an orthogonal transformation of the parameters that gives a natural two-stage inference procedure based on a conditional likelihood. We then extend this to a partial likelihood that gives explicit estimators of the model parameters. By allowing the separate modeling of the detection and occupancy probabilities, the extension of the two-stage approach to more general models has the potential to simplify the computational routines used there.
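For orientation, the sketch below fits the homogeneous occupancy model that the two-stage argument starts from: each of N sites is occupied with probability psi, and an occupied site is detected on each of K visits with probability p. This is a plain joint maximum-likelihood baseline, not the orthogonal-transformation or partial-likelihood procedure of the paper; all values are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, comb

rng = np.random.default_rng(1)
N, K = 200, 5                       # sites and visits per site (illustrative)
psi_true, p_true = 0.6, 0.3

occupied = rng.binomial(1, psi_true, N)
detections = rng.binomial(K, p_true * occupied)     # y_i = 0 whenever the site is unoccupied

def neg_loglik(theta):
    psi, p = expit(theta)           # optimize on the logit scale to keep parameters in (0, 1)
    lik_pos = psi * comb(K, detections) * p**detections * (1 - p)**(K - detections)
    lik_zero = psi * (1 - p)**K + (1 - psi)          # occupied-but-never-detected or unoccupied
    lik = np.where(detections > 0, lik_pos, lik_zero)
    return -np.sum(np.log(lik))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("psi_hat, p_hat =", expit(res.x))
```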

4.
ABSTRACT

In this paper, we investigate the performance of cumulative sum (CUSUM) stopping rules for the online detection of an unknown change point in a time-homogeneous Markov chain. Under the condition that the post-change transition probabilities are unknown, we propose two CUSUM-type detection schemes. The first scheme is based on the maximum likelihood estimates of the post-change transition probabilities; its computational burden is mitigated by a second scheme based on reference transition probabilities selected from a region known a priori. We give bounds on the mean delay time and the mean time between false alarms to illustrate the effectiveness of the proposed schemes, and simulation results demonstrate their feasibility.
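A minimal sketch of the reference-matrix idea (the second scheme described above): the CUSUM statistic accumulates log-likelihood ratios of a reference post-change transition matrix against the known pre-change matrix, resets at zero, and raises an alarm when it crosses a threshold. The matrices, threshold, and simulated chain are illustrative, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

P0 = np.array([[0.9, 0.1],           # pre-change transition matrix (assumed known)
               [0.2, 0.8]])
P1_ref = np.array([[0.6, 0.4],       # reference post-change matrix from a known region
                   [0.5, 0.5]])
h = 5.0                               # alarm threshold (illustrative)

def simulate_chain(P_pre, P_post, change_point, T):
    x = np.zeros(T, dtype=int)
    for t in range(1, T):
        P = P_pre if t < change_point else P_post
        x[t] = rng.choice(2, p=P[x[t - 1]])
    return x

x = simulate_chain(P0, np.array([[0.55, 0.45], [0.45, 0.55]]), change_point=300, T=600)

# CUSUM recursion over successive observed transitions.
W, alarm = 0.0, None
for t in range(1, len(x)):
    llr = np.log(P1_ref[x[t - 1], x[t]] / P0[x[t - 1], x[t]])
    W = max(0.0, W + llr)
    if W > h:
        alarm = t
        break
print("alarm raised at t =", alarm, "(true change at t = 300)")
```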

5.
Recent evidence indicates that using multiple forward rates sharply predicts future excess returns on U.S. Treasury bonds, with R2 values around 30%. The projection coefficients in these regressions exhibit a distinct pattern that relates to the maturity of the forward rate. These dimensions of the data, in conjunction with the transition dynamics of bond yields, pose a serious challenge to term structure models. In this article we show that a regime-shifting term structure model can empirically account for these challenging data features, whereas alternative models, such as affine specifications, fail to capture them. We find that regimes in the model are intimately related to bond risk premia and real business cycles.

6.
The analysis of short panel time series data is important in many practical problems. This paper calculates the exact moments up to order 4, under the null hypothesis of no serial correlation, when there are many independent replications of size 3. We further calculate the tail probabilities under the null hypothesis using the Edgeworth approximation for $n=3$, $4$, and $5$, cases in which the structure of the probability density function of the test statistic differs essentially. Finally, we compare three types of tail probabilities, namely the Edgeworth approximation, the normal approximation, and the exact probabilities, through a large-scale simulation study.
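A hedged Monte Carlo analogue of that comparison is sketched below: for many independent replications of length n = 3, a studentized average of per-replication lag-1 sample autocorrelations is simulated under the null, and its right-tail probability is compared with the normal approximation. This illustrates the idea of contrasting approximate and "exact" (simulated) tail probabilities; it is not the paper's test statistic or its Edgeworth expansion.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def panel_statistic(m, n, rng):
    """Average lag-1 sample autocorrelation over m independent replications of length n."""
    x = rng.standard_normal((m, n))                 # data generated under the null (iid)
    xc = x - x.mean(axis=1, keepdims=True)
    r1 = (xc[:, :-1] * xc[:, 1:]).sum(axis=1) / (xc**2).sum(axis=1)
    return r1.mean()

n, m, reps = 3, 50, 20000
stats = np.array([panel_statistic(m, n, rng) for _ in range(reps)])
stats = (stats - stats.mean()) / stats.std()        # studentized test statistic

z95 = norm.ppf(0.95)
print("normal-approximation tail probability:", 1 - norm.cdf(z95))    # 0.05 by construction
print("simulated ('exact') tail probability :", (stats > z95).mean()) # deviates for small n
```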

7.
Summary

In the log-linear model for bivariate probability functions, the conditional and joint probabilities have a simple form. This property makes the log-linear parametrization useful when modeling these probabilities is the focus of the investigation. In contrast, in the log-linear representation of bivariate probability functions the marginal probabilities have a complex form, so log-linear models are not useful when the marginal probabilities are of particular interest. In this paper these statements are discussed, and a model obtained from the log-linear one by imposing suitable constraints on the marginal probabilities is introduced. This work was supported by a M.U.R.S.T. grant.
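The contrast drawn above can be checked numerically. The sketch below builds a 2x2 joint distribution from a saturated log-linear parametrization: the conditional log-odds are linear in the parameters, while the marginal probabilities are sums over cells with no comparably simple form. The parameter values are arbitrary.

```python
import numpy as np

# Log-linear parametrization of a 2x2 joint distribution:
# log p(i, j) = alpha_i + beta_j + gamma * i * j   (up to the normalizing constant).
alpha = np.array([0.0, 0.4])
beta = np.array([0.0, -0.3])
gamma = 0.8

i, j = np.meshgrid([0, 1], [0, 1], indexing="ij")
logp = alpha[i] + beta[j] + gamma * i * j
p = np.exp(logp) / np.exp(logp).sum()               # joint probabilities: simple closed form

# Conditional log-odds of Y=1 given X=i is linear in the parameters: beta_1 - beta_0 + gamma*i.
cond_logit = np.log(p[:, 1] / p[:, 0])
print("conditional log-odds:", cond_logit, "vs", beta[1] - beta[0] + gamma * np.array([0, 1]))

# Marginal probabilities, by contrast, mix the parameters nonlinearly through the cell sums.
print("marginal P(X=1):", p[1, :].sum())
```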

8.
ABSTRACT

Background: Instrumental variables (IVs) have become much easier to find in the "big data" era, which has increased the number of applications of the two-stage least squares (TSLS) model. With the increased availability of IVs, the possibility that these IVs are weak has also increased. Prior work has suggested a 'rule of thumb' that IVs with a first-stage F statistic of at least ten will avoid a relative bias in point estimates greater than 10%. We investigated whether this threshold is also an effective guarantee of low false rejection rates of the null hypothesis test in TSLS applications with many IVs.

Objective: To test how the ‘rule of thumb’ for weak instruments performs in predicting low false rejection rates in the TSLS model when the number of IVs is large.

Method: We used a Monte Carlo approach to create 28 original datasets for different models, with the number of IVs varying from 3 to 30. For each model, we generated 2,000 observations per iteration and ran 50,000 iterations to reach convergence in rejection rates. The true coefficient was set to 0, and the probability of rejecting this null hypothesis was recorded for each model as a measure of the false rejection rate. The relationship between the endogenous variable and the IVs was carefully adjusted so that the first-stage F statistic equaled ten, thus simulating the 'rule of thumb.'

Results: We found that the false rejection rate (type I error) increased with the number of IVs in the TSLS model while the first-stage F statistic was held at 10. The false rejection rate exceeded 10% when the TSLS model had 24 IVs and exceeded 15% when it had 30 IVs.

Conclusion: When more instrumental variables were used in the model, the 'rule of thumb' was no longer an effective guarantee of good performance in hypothesis testing. A more stringent F-statistic threshold is recommended to replace the 'rule of thumb,' especially when the number of instrumental variables is large.
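A compressed sketch of this kind of experiment follows: 2SLS with k instruments, first-stage strength calibrated so the population first-stage F is roughly ten, and the rejection rate of the true null H0: beta = 0 recorded. The calibration, error correlation, and iteration count are illustrative and far smaller in scale than the study described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def one_rep(n, k, rho, target_F=10.0):
    # First-stage coefficients chosen so the population first-stage F is about target_F:
    # E[F] ~ 1 + concentration/k with concentration = n * k * c^2, so c = sqrt((target_F - 1)/n).
    c = np.sqrt((target_F - 1.0) / n)
    Z = rng.standard_normal((n, k))
    uv = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u, v = uv[:, 0], uv[:, 1]
    x = Z @ (c * np.ones(k)) + v
    y = 0.0 * x + u                                   # true beta = 0, so any rejection is false

    # 2SLS estimate and its t statistic.
    xh = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # first-stage fitted values P_Z x
    beta_hat = (xh @ y) / (xh @ x)
    resid = y - x * beta_hat
    se = np.sqrt((resid @ resid) / n / (xh @ x))
    return abs(beta_hat / se) > stats.norm.ppf(0.975)

n, rho, iters = 2000, 0.5, 2000                       # far fewer iterations than the 50,000 above
for k in (3, 15, 30):
    rej = np.mean([one_rep(n, k, rho) for _ in range(iters)])
    print(f"k = {k:2d} instruments: false rejection rate ~ {rej:.3f}")
```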

9.
Abstract

This paper is devoted to the application of singular-spectrum analysis to the sequential detection of changes in time series. An algorithm for change-point detection in time series, based on sequential application of singular-spectrum analysis, is developed and studied. The algorithm is applied to different data sets and studied extensively by numerical means. For specific models, several numerical approximations to the error probabilities and the power function of the algorithm are obtained. Numerical comparisons with other methods are given.
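A minimal numpy sketch in the spirit of sequential SSA change detection (not the paper's exact algorithm): estimate a low-dimensional signal subspace from the trajectory matrix of a base window, then monitor how far lag vectors from a moving test window fall outside that subspace; a sustained jump in the distance signals a change. Window length, rank, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Signal: a sine wave whose frequency changes at t = 400, plus noise.
t = np.arange(800)
x = np.where(t < 400, np.sin(2 * np.pi * t / 50), np.sin(2 * np.pi * t / 20))
x = x + 0.3 * rng.standard_normal(t.size)

L, r = 40, 2                                   # lag (window) length and subspace dimension
base = x[:200]                                 # base segment, assumed change-free

def hankel(series, L):
    K = len(series) - L + 1
    return np.column_stack([series[i:i + L] for i in range(K)])

# Signal subspace from the base trajectory matrix.
U, _, _ = np.linalg.svd(hankel(base, L), full_matrices=False)
P = U[:, :r] @ U[:, :r].T                      # projector onto the estimated signal subspace

# Sequential monitoring: squared distance of the latest lag vector to the subspace.
scores = []
for end in range(200 + L, len(x)):
    window = x[end - L:end]
    resid = window - P @ window
    scores.append(resid @ resid / L)

scores = np.array(scores)
threshold = scores[:150].mean() + 5 * scores[:150].std()   # calibrated on early, pre-change scores
alarm = np.argmax(scores > threshold) + 200 + L
print("first alarm near t =", alarm, "(true change at t = 400)")
```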

10.
There has been growing interest in the estimation of transition probabilities among stages (Hestbeck et al., 1991; Brownie et al., 1993; Schwarz et al., 1993) in tag-return and capture-recapture models. This has been driven by the increasing interest in meta-population models in ecology and the need for parameter estimates to use in these models. These transition probabilities are composed of survival and movement rates, which can only be estimated separately when an additional assumption is made (Brownie et al., 1993). Brownie et al. (1993) assumed that movement occurs at the end of the interval between time i and i + 1. We generalize this work to allow different movement patterns in the interval for multiple tag-recovery and capture-recapture experiments. The time of movement is a random variable with a known distribution. The model formulations can be viewed as matrix extensions of the model formulations for single open-population capture-recapture and tag-recovery experiments (Jolly, 1965; Seber, 1965; Brownie et al., 1985). We also present the results of a small simulation study for the tag-return model when movement time follows a beta distribution, and a further simulation study for the capture-recapture model when movement time follows a uniform distribution. The simulation studies use a modified program SURVIV (White, 1983). The relative standard errors (RSEs) of estimates under high and low movement rates are presented. We show that there are strong correlations between movement and survival estimates when the movement rate is high. We also show that estimators of movement rates to different areas and estimators of survival rates in different areas have substantial correlations.

11.
ABSTRACT

A quantile autoregressive model is a useful extension of classical autoregressive models, as it can capture the influences of conditioning variables on the location, scale, and shape of the response distribution. However, at the extreme tails, the standard quantile autoregression estimator is often unstable due to data sparsity. In this article, assuming quantile autoregressive models, we develop a new estimator for extreme conditional quantiles of time series data based on extreme value theory. We build the connection between the second-order conditions for the autoregression coefficients and those for the conditional quantile functions, and establish the asymptotic properties of the proposed estimator. The finite sample performance of the proposed method is illustrated through a simulation study and an analysis of U.S. retail gasoline prices.
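A rough sketch of the general recipe follows: fit a quantile autoregression at an intermediate level where data are still plentiful, then extrapolate to the far tail with an extreme-value argument. The sketch uses statsmodels' QuantReg, a crude Hill-type tail-index estimate, and a Weissman-style extrapolation; it is an illustration under simulated heavy-tailed data, not the estimator developed in the article.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(6)

# Simulate an AR(1)-type series with heavy-tailed (Student-t) innovations.
T = 3000
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_t(df=3)

Y, X = y[1:], sm.add_constant(y[:-1])

# Step 1: quantile autoregression at an intermediate level tau0.
tau0, tau_extreme = 0.95, 0.999
q_fit = QuantReg(Y, X).fit(q=tau0)
q_tau0 = X @ np.asarray(q_fit.params)             # fitted intermediate conditional quantiles

# Step 2: crude Hill-type tail-index estimate from exceedances over the fitted quantile
# (only where the fitted quantile is positive, to keep the log-ratio well defined).
mask = (q_tau0 > 0) & (Y > q_tau0)
gamma_hat = np.mean(np.log(Y[mask] / q_tau0[mask]))

# Step 3: Weissman-type extrapolation from tau0 to the extreme level.
q_extreme = q_tau0 * ((1 - tau0) / (1 - tau_extreme)) ** gamma_hat
print("tail index estimate:", gamma_hat)
print("extrapolated conditional", tau_extreme, "quantile at the last observation:", q_extreme[-1])
```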

12.
Abstract

We propose signed compound Poisson integer-valued GARCH processes for modeling the difference of count time series. We investigate the theoretical properties of these processes and establish their ergodicity and stationarity under mild conditions. We discuss the conditional maximum likelihood estimator when the series appearing in the difference are INGARCH processes with geometric distributions, and explore its finite sample properties in a simulation study. Two real data examples illustrate the methodology.

13.
Econometric Reviews, 2013, 32(3): 229-257
Abstract

We obtain semiparametric efficiency bounds for estimation of a location parameter in a time series model in which the innovations are stationary and ergodic conditionally symmetric martingale differences but otherwise possess general dependence and distributions of unknown form. We then describe an iterative estimator that achieves this bound when the conditional density functions of the sample are known. Finally, we develop a "semi-adaptive" estimator that achieves the bound when these densities are unknown to the investigator. This estimator employs nonparametric kernel estimates of the densities. Monte Carlo results are reported.

14.
Abstract

To improve the empirical performance of the Black-Scholes model, many alternative models have been proposed to address the leptokurtic feature, volatility smile, and volatility clustering of asset return distributions. However, analytical tractability remains a problem for most alternative models. In this article, we study a class of hidden Markov models, including Markov switching models and stochastic volatility models, that can incorporate the leptokurtic feature and volatility clustering effects while providing analytical solutions to option pricing. We show that these models can generate long memory phenomena when the transition probabilities depend on the time scale. We also provide an explicit analytic formula for the arbitrage-free price of European options under these models. The issues of statistical estimation and errors in option pricing are also discussed for the Markov switching models.
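As a numerical stand-in for the closed-form formula mentioned above, the sketch below prices a European call by Monte Carlo when volatility follows a two-state hidden Markov (regime-switching) chain under the risk-neutral measure. The transition matrix, regime volatilities, and contract parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two volatility regimes driven by a hidden Markov chain (illustrative parameters).
P = np.array([[0.98, 0.02],         # regime transition matrix per trading day
              [0.04, 0.96]])
sigmas = np.array([0.15, 0.35])     # annualized volatility in the low/high regime
r, S0, strike, T_years = 0.03, 100.0, 105.0, 0.5
n_days, dt = int(252 * T_years), 1.0 / 252
n_paths = 20_000

# Simulate regime paths for all Monte Carlo paths at once (2-state chain).
states = np.zeros((n_paths, n_days), dtype=int)
u = rng.random((n_paths, n_days - 1))
for d in range(1, n_days):
    prev = states[:, d - 1]
    stay = u[:, d - 1] < P[prev, prev]               # probability of staying in the current regime
    states[:, d] = np.where(stay, prev, 1 - prev)

# Risk-neutral log-price paths with regime-dependent volatility, then the discounted payoff.
sig = sigmas[states]
z = rng.standard_normal((n_paths, n_days))
logS = np.log(S0) + np.cumsum((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z, axis=1)
call_price = np.exp(-r * T_years) * np.maximum(np.exp(logS[:, -1]) - strike, 0.0).mean()
print("Monte Carlo regime-switching call price ~", round(call_price, 3))
```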

15.
Stochastic Models, 2013, 29(4): 527-548
Abstract

We consider a multi-server queueing model with two priority classes that consist of multiple customer types. Customers belonging to one of the priority classes are lost if they cannot be served immediately upon arrival. Each customer type has its own Poisson arrival rate and exponential service rate. We derive an exact method to calculate the steady-state probabilities for both preemptive and nonpreemptive priority disciplines. Based on these probabilities, we derive exact expressions for a wide range of relevant performance characteristics for each customer type, such as the moments of the number of customers in the queue and in the system, the expected postponement time, and the blocking probability. We illustrate our method with some numerical examples.

16.
ABSTRACT

In this paper we propose a multivariate approach for forecasting pairwise mortality rates of related populations. The need for joint modelling of mortality rates is analysed using a causality test. We show that, for the datasets considered, the inclusion of national mortality information enhances predictions for its subpopulations. The investigated approach links national population mortality to that of a subset population using an econometric model that captures a long-term relationship between the two mortality dynamics. This model does not focus on the correlation between the mortality rates of the two populations but rather on their long-term behaviour, under which the two time series cannot wander off in opposite directions for long before mean reverting, which is consistent with biological reasoning. The model can additionally capture short-term adjustments in the mortality dynamics of the two populations. An empirical comparison of forecasts of one-year death probabilities for policyholders is performed using both a classical factor-based model and the proposed approach. The robustness of the model is tested on mortality rate data for England and Wales, alongside the Continuous Mortality Investigation assured lives dataset, which represents the subpopulation.
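A toy version of the long-run linkage being exploited is sketched below: two simulated log-mortality series share a common stochastic trend with a mean-reverting spread, and an Engle-Granger cointegration test from statsmodels checks the relationship. The data are simulated for illustration only, not the England and Wales or CMI series analysed above.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(8)

# Common stochastic trend (national mortality improvement) plus stationary spreads.
T = 60                                               # years of data (illustrative)
trend = -0.02 * np.arange(T) + np.cumsum(0.01 * rng.standard_normal(T))
log_m_national = -4.0 + trend + 0.02 * rng.standard_normal(T)
log_m_subpop = -4.3 + 0.9 * trend + 0.02 * rng.standard_normal(T)   # mean-reverting spread

tstat, pvalue, _ = coint(log_m_subpop, log_m_national)
print(f"Engle-Granger cointegration test: t = {tstat:.2f}, p-value = {pvalue:.3f}")
```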

17.
Abstract

Optimized group sequential designs proposed in the literature minimize the average sample size with respect to a prior distribution of the treatment effect, while controlling the overall type I and type II error rates (i.e., at the final stage). The optimized asymmetric group sequential designs that we present here additionally impose constraints on the stopping probabilities at stage one: the probability of stopping for futility at stage one when no drug effect exists, and the probability of rejection at stage one when the maximum effect size is true, so that the accountability of the group sequential design is ensured from the first stage onward.
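A quick numerical illustration of these stage-one quantities: for a two-stage design with a normally distributed interim z statistic, compute the probability of stopping for futility at stage one when there is no effect, and the probability of stage-one rejection when the maximum effect holds. The boundaries, information fraction, and effect size below are illustrative, not an optimized design.

```python
import numpy as np
from scipy.stats import norm

# Stage-1 boundaries on the z scale (illustrative, asymmetric: easier to stop for futility).
futility_bound, efficacy_bound = 0.0, 2.8
info_fraction = 0.5                      # fraction of the maximum information at the interim
theta_max = 3.2                          # drift of Z at full information under the maximum effect

drift_stage1 = theta_max * np.sqrt(info_fraction)    # mean of Z_1 under the maximum effect

p_futility_null = norm.cdf(futility_bound)                     # stop for futility, no effect
p_reject_max = 1 - norm.cdf(efficacy_bound - drift_stage1)     # stage-1 rejection, max effect
print(f"P(stop for futility at stage 1 | theta = 0)    = {p_futility_null:.3f}")
print(f"P(reject at stage 1 | theta = theta_max)       = {p_reject_max:.3f}")
```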

18.
The threshold diffusion model assumes a piecewise linear drift term and a piecewise smooth diffusion term, which constitutes a rich model for analyzing nonlinear continuous-time processes. We consider the problem of testing for threshold nonlinearity in the drift term. We do this by developing a quasi-likelihood test derived under the working assumption of a constant diffusion term, which circumvents the problem of generally unknown functional form for the diffusion term. The test is first developed for testing for one threshold at which the drift term breaks into two linear functions. We show that under some mild regularity conditions, the asymptotic null distribution of the proposed test statistic is given by the distribution of certain functional of some centered Gaussian process. We develop a computationally efficient method for calibrating the p-value of the test statistic by bootstrapping its asymptotic null distribution. The local power function is also derived, which establishes the consistency of the proposed test. The test is then extended to testing for multiple thresholds. We demonstrate the efficacy of the proposed test by simulations. Using the proposed test, we examine the evidence of nonlinearity in the term structure of a long time series of U.S. interest rates.

19.
Bayesian inference and prediction tasks for Er/M/1 and Er/M/c queues are undertaken. Equilibrium probabilities of the queue size and waiting time distributions are estimated using conditional Monte Carlo simulation methods. We illustrate that some standard queueing measures do not exist when independent priors are used for the arrival and service rates of a G/M/1 queue.
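A sketch of the conditional Monte Carlo flavour for an Er/M/1 queue: draw the arrival and service rates from gamma posteriors, and for each draw solve the GI/M/1 fixed-point equation for the geometric parameter sigma, from which equilibrium queue-size and waiting-time quantities follow; averaging over draws gives posterior summaries. Echoing the caveat above, posterior draws with traffic intensity at or above one contribute no finite performance values, so the sketch simply flags them. The priors (stand-ins for conjugate updates from data), the Erlang order, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(9)

r = 2                                    # Erlang order of the interarrival distribution
# Independent gamma "posteriors" for the arrival rate lambda and service rate mu (illustrative).
post_lambda = lambda size: rng.gamma(shape=40.0, scale=1.0 / 50.0, size=size)   # mean 0.8
post_mu = lambda size: rng.gamma(shape=45.0, scale=1.0 / 45.0, size=size)       # mean 1.0

def sigma_root(lam, mu, r):
    """Solve sigma = A*(mu(1 - sigma)) for the GI/M/1 geometric parameter (stable case)."""
    f = lambda s: (r * lam / (r * lam + mu * (1.0 - s))) ** r - s
    return brentq(f, 1e-12, 1.0 - 1e-12)

n_draws = 5000
lam, mu = post_lambda(n_draws), post_mu(n_draws)
stable = lam / mu < 1.0                              # draws with traffic intensity below one
sig = np.array([sigma_root(l, m, r) for l, m in zip(lam[stable], mu[stable])])

wq = sig / (mu[stable] * (1.0 - sig))                # expected waiting time per stable draw
print("P(rho >= 1 under the posterior)               :", 1.0 - stable.mean())
print("posterior mean of P(wait > 0) (stable draws)  :", sig.mean())
print("posterior mean waiting time (stable draws)    :", wq.mean())
```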

20.
Abstract

A multivariate version of the sharp Markov inequality is derived, in which the associated probabilities are extended to segments of the supports of non-negative random variables, where the probabilities take echelon forms. It is shown that when positive lower bounds on these probabilities are available, the multivariate Markov inequality without the echelon forms is improved. Corresponding results for Chebyshev's inequality are also obtained.
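For context, the sketch below numerically checks one common multivariate form of Markov's inequality for non-negative random variables, P(X1 >= a1, X2 >= a2) <= E[X1 X2]/(a1 a2), offered here as a simple stand-in for the non-echelon bound that the paper refines; the distributions and thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

# Joint event {X1 >= a1, X2 >= a2} implies X1 * X2 >= a1 * a2 for non-negative variables,
# so Markov's inequality applied to the product gives the bound checked below.
n = 1_000_000
x1 = rng.gamma(shape=2.0, scale=1.0, size=n)
x2 = 0.5 * x1 + rng.exponential(scale=1.0, size=n)     # positively dependent, non-negative
a1, a2 = 4.0, 5.0

lhs = np.mean((x1 >= a1) & (x2 >= a2))
rhs = np.mean(x1 * x2) / (a1 * a2)
print(f"P(X1>={a1}, X2>={a2}) = {lhs:.4f}  <=  E[X1 X2]/(a1 a2) = {rhs:.4f}")
```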

