Similar Articles
20 similar articles retrieved.
1.
A bivariate integer-valued moving average (BINMA) model is proposed. The BINMA model allows for both positive and negative correlation between the counts. This model can be seen as an inverse of the conditional duration model in the sense that short durations in a time interval correspond to a large count and vice versa. The conditional mean, variance, and covariance of the BINMA model are given. Model extensions to include explanatory variables are suggested. Applying the BINMA model to AstraZeneca and Ericsson B, we find positive correlation between the stock transaction series. Empirically, we find support for the use of long-lag bivariate moving average models for the two series.
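
To make the construction concrete, the sketch below simulates a bivariate INMA(1)-type pair of count series using binomial thinning and a shared Poisson shock in the innovations to induce positive cross-correlation. This is an illustrative construction under those assumptions, not the article's exact BINMA specification; all parameter values are made up.

```python
# Illustrative bivariate INMA(1)-type simulation with binomial thinning;
# a shared Poisson shock in the innovations gives positive cross-correlation.
import numpy as np

rng = np.random.default_rng(0)

def thin(alpha, x):
    """Binomial thinning alpha ∘ x: each of the x units survives with prob alpha."""
    return rng.binomial(x, alpha)

def simulate_binma1(n, alpha=0.4, beta=0.3, lam_common=1.0, lam_x=1.0, lam_y=1.5):
    # Correlated count innovations via a shared Poisson component.
    w = rng.poisson(lam_common, n)
    u = w + rng.poisson(lam_x, n)
    v = w + rng.poisson(lam_y, n)
    x = np.empty(n, dtype=int)
    y = np.empty(n, dtype=int)
    x[0], y[0] = u[0], v[0]          # truncated start: no lagged innovation at t = 0
    for t in range(1, n):
        x[t] = thin(alpha, u[t - 1]) + u[t]
        y[t] = thin(beta, v[t - 1]) + v[t]
    return x, y

x, y = simulate_binma1(5000)
print(np.corrcoef(x, y)[0, 1])       # positive cross-correlation between the counts
```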

2.
It is demonstrated that factors needed to conduct tests and form confidence intervals for the ratio of two normal variances can be found using one of the new desk calculators which compute F probabilities.
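
As a reminder of the underlying computation, here is a minimal sketch of the standard F-based confidence interval for the ratio of two normal variances; the function name, data, and confidence level are illustrative assumptions, and modern software takes the place of the desk calculator mentioned above.

```python
# Two-sided CI for sigma_x^2 / sigma_y^2 under normality, via F quantiles.
import numpy as np
from scipy.stats import f

def variance_ratio_ci(x, y, alpha=0.05):
    """(1 - alpha) confidence interval for the ratio of two normal variances."""
    n, m = len(x), len(y)
    ratio = np.var(x, ddof=1) / np.var(y, ddof=1)
    lower = ratio / f.ppf(1 - alpha / 2, n - 1, m - 1)
    upper = ratio / f.ppf(alpha / 2, n - 1, m - 1)
    return lower, upper

rng = np.random.default_rng(0)
print(variance_ratio_ci(rng.normal(0, 2, 30), rng.normal(0, 1, 25)))
```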

3.
Statistical inference methods for the Weibull parameters and their functions usually depend on extensive tables, and hence are rather inconvenient for practical applications. In this paper, we propose a general method for constructing confidence intervals for the Weibull parameters and their functions, which eliminates the need for the extensive tables. The method is applied to obtain confidence intervals for the scale parameter, the mean time to failure, the percentile function, and the reliability function. Monte Carlo simulation shows that these intervals possess excellent finite-sample properties, having coverage probabilities very close to their nominal levels, irrespective of the sample size and the degree of censoring.

4.
This article focuses on the estimation of the percentile residual life function with left-truncated and right-censored data. We establish the asymptotic normality of the proposed empirical estimator and obtain a pointwise confidence interval that does not require estimating the unknown underlying distribution function. Simulation studies and a real data example illustrate the results.

5.
This article considers degradation and failure time models with multiple failure modes, which are used to study problems of longevity and aging in survival analysis and reliability. The degradation process is modeled using general nonparametric, nonlinear path models. Semi-parametric models for the intensities of the traumatic failures are used, assuming that these intensities depend on the degradation level. Semi-parametric estimators of various reliability characteristics are proposed and asymptotic properties of the estimators are obtained. The theoretical results are illustrated using simulated data.

6.
Timely identification of turning points in economic time series is important for planning control actions and achieving profitability. This paper compares sequential methods for detecting peaks and troughs in stock values and deciding the time to trade. Three semi-parametric methods are considered: double exponential smoothing, time-varying parameters and prediction error statistics. These methods are widely used in monitoring, forecasting and control, and their common features are recursive computation and exponential weighting of observations. The novelty of this paper is the selection of smoothing and alarm coefficients for maximisation of the gain (the difference in level between subsequent peaks and troughs) of sample data. The methods are compared on applications to leading financial series and with simulation experiments.

7.
A confidence interval (CI) for a standard deviation in a normal distribution, based on a pivotal quantity with a chi-square distribution, is considered. The ratio of the CI's endpoints is taken as a measure of its quality. Formulas are given for the sample sizes needed so that this ratio does not exceed a fixed value. Both equally tailed CIs and CIs with the minimum ratio of endpoints are considered.
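
The endpoint ratio of the equally tailed chi-square interval for σ depends only on the sample size, so the smallest n meeting a target ratio can be found by direct search. The sketch below assumes equal tail probabilities; the minimum-ratio interval of the article is not reproduced.

```python
# Endpoint ratio of the equally tailed chi-square CI for a normal sigma,
# and the smallest sample size keeping that ratio below a target value.
from scipy.stats import chi2

def endpoint_ratio(n, alpha=0.05):
    """Upper/lower endpoint ratio of the equally tailed CI for sigma (free of the data)."""
    lo_q = chi2.ppf(alpha / 2, n - 1)
    hi_q = chi2.ppf(1 - alpha / 2, n - 1)
    return (hi_q / lo_q) ** 0.5

def min_sample_size(max_ratio, alpha=0.05):
    """Smallest n such that the endpoint ratio does not exceed max_ratio."""
    n = 2
    while endpoint_ratio(n, alpha) > max_ratio:
        n += 1
    return n

print(min_sample_size(1.5))   # smallest n with endpoint ratio <= 1.5 at 95% confidence
```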

8.
In this article, we propose a nonparametric estimator for percentiles of the time-to-failure distribution obtained from a linear degradation model using the kernel density method. The properties of the proposed kernel estimator are investigated and compared with the well-known maximum likelihood and ordinary least squares estimators via simulation. The mean squared error and the length of the bootstrap confidence interval are used as the comparison criteria. The simulation study shows that the performance of the kernel estimator is acceptable as a general estimator. When the distribution of the data is assumed known, the maximum likelihood and ordinary least squares estimators perform better than the kernel estimator, while the kernel estimator is superior when the distributional assumption is violated. The estimators are also compared on a real data set.

9.
Several methods have been devised to deal with the problem of temporal disaggregation of economic time series, either (a) when related series are available or (b) when only aggregate figures exist. In this article, we propose a statistical model-based approach to temporal disaggregation of economic time series by related series. The proposed approach proceeds in two stages. In the first stage, we obtain a preliminary estimate of the disaggregated series using a regression model for the disaggregated series and related series observed at the same frequency. This preliminary estimate is not consistent with the aggregate figures. To ensure consistency, in the second stage we propose a modified benchmarking approach based on signal extraction (Hillmer and Trabelsi, 1987; Trabelsi and Hillmer, 1990) to adjust the preliminary estimate of the disaggregated series. The approach developed here is applied to both Seasonally Adjusted (SA) and Not Seasonally Adjusted (NSA) data, and a comparison with previous temporal disaggregation methods is carried out.

10.
Stochastic Models, 2013, 29(2): 235-254
We propose a family of extended thinning operators, indexed by a parameter γ in [0, 1), with the boundary case γ = 0 corresponding to the well-known binomial thinning operator. The extended thinning operators can be used to construct a class of continuous-time Markov processes for modeling count time series data. The class of stationary distributions of these processes is called generalized discrete self-decomposable, denoted DSD(γ). We obtain characterization results for the DSD(γ) class and investigate relationships among the classes for different γ.
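
For the boundary case γ = 0 the operator reduces to ordinary binomial thinning, which the following sketch uses to simulate an INAR(1)-type count series. The extended operator and the continuous-time construction of the article are not reproduced here, and all parameter values are illustrative.

```python
# Binomial thinning (the gamma = 0 boundary case) used to simulate a Poisson INAR(1).
import numpy as np

rng = np.random.default_rng(1)

def binomial_thinning(alpha, x):
    """alpha ∘ x: each of the x units survives independently with probability alpha."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, lam, n):
    """X_t = alpha ∘ X_{t-1} + eps_t with Poisson(lam) innovations."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))   # stationary Poisson(lam/(1-alpha)) start
    for t in range(1, n):
        x[t] = binomial_thinning(alpha, x[t - 1]) + rng.poisson(lam)
    return x

print(simulate_inar1(alpha=0.5, lam=2.0, n=10))
```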

11.
A novel approach based on the concepts of a generalized pivotal quantity (GPQ) is developed to construct confidence intervals for the mediated effect. Thereafter, its performance is compared with six interval estimation approaches in terms of empirical coverage probability and expected length via simulation and two real examples. The results show that the GPQ-based and bootstrap percentile methods outperform other methods when mediated effects exist in small and medium samples. Moreover, the GPQ-based method exhibits a more stable performance in small and non-normal samples. A discussion on how to choose the best interval estimation method for mediated effects is presented.

12.
A wide variety of time series techniques are now used for generating forecasts of economic variables, with each technique attempting to summarize and exploit whatever regularities exist in a given data set. It appears that many researchers arbitrarily choose one of these techniques. The purpose of this article is to provide an example for which the choice of time series technique appears important; merely choosing arbitrarily among available techniques may lead to suboptimal results.

13.
The study focuses on selecting the order of a general time series process via its conditional density, which has the property of remaining constant for every order beyond the true one. Using time series simulated from various nonlinear models, we illustrate how this feature can be traced through conditional density estimation. We study whether two statistics derived from the likelihood function can serve as univariate statistics for determining the order of the process. A weighted version of the log-likelihood function is found to have desirable robustness properties in detecting the order of the process.

14.
The purpose of this article is to develop a Monte Carlo simulation algorithm for computing the mean time to failure (MTTF) of weighted-k-out-of-n:G and linear consecutive-weighted-k-out-of-n:G systems. Our algorithm is based on an appropriately defined stochastic process that represents the total weight of the system at time t. These stochastic processes are defined explicitly and used, along with the ordered component lifetimes, to simulate the MTTF of systems with weighted components.
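
A minimal Monte Carlo sketch of the idea is shown below: component lifetimes are simulated, components are removed in order of failure, and the system's failure time is the instant at which the total remaining weight drops below k. Exponential lifetimes and the particular weights, rates, and k are illustrative assumptions, not the article's algorithm in full.

```python
# Monte Carlo MTTF for a weighted-k-out-of-n:G system with exponential lifetimes.
import numpy as np

rng = np.random.default_rng(2)

def mttf_weighted_k_out_of_n(weights, rates, k, n_sim=20_000):
    """System works while the total weight of surviving components is >= k."""
    weights = np.asarray(weights, dtype=float)
    scales = 1.0 / np.asarray(rates, dtype=float)
    failure_times = np.empty(n_sim)
    for s in range(n_sim):
        lifetimes = rng.exponential(scales)       # one lifetime per component
        order = np.argsort(lifetimes)             # components in order of failure
        remaining = weights.sum()
        t_fail = lifetimes[order[-1]]             # fallback: last component failure
        for idx in order:
            remaining -= weights[idx]
            if remaining < k:                     # total weight drops below k here
                t_fail = lifetimes[idx]
                break
        failure_times[s] = t_fail
    return failure_times.mean()

print(mttf_weighted_k_out_of_n(weights=[2, 3, 1, 4], rates=[1.0, 0.5, 0.8, 0.6], k=5))
```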

15.
In this paper we argue that, even if a dynamic relationship can be well described by a deterministic system, retrieving this relationship from an empirical time series has to take into account some, although possibly very small, measurement error in the observations. Measuring the initial conditions for prediction therefore becomes much more difficult, since one now has a combination of deterministic and stochastic elements. We introduce a partial smoothing estimator for estimating the unobserved initial conditions. We show that this estimator reduces the effects of measurement error on predictions, although the reduction may be small in the presence of strong chaotic dynamics. This is illustrated using the logistic map.
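
The sketch below illustrates the underlying difficulty: even a tiny measurement error in the initial condition of the (deterministic) logistic map makes multi-step predictions drift away from the true path. The paper's partial smoothing estimator itself is not reproduced; the map parameter r and the size of the error are illustrative.

```python
# Sensitivity of logistic-map predictions to a small error in the initial condition.
import numpy as np

def logistic_map(x0, r=3.9, n=30):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) from x0 for n steps."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

true_path = logistic_map(0.3)
noisy_path = logistic_map(0.3 + 1e-4)         # tiny measurement error in x0
print(np.abs(true_path - noisy_path)[::5])    # prediction error grows with the horizon
```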

16.
The complexity of system change leads to uncertainty and heterogeneity in system behavior data, and the aggregation of multi-source information produces grey heterogeneous time-series data that characterize the laws of system change. This paper studies prediction modeling methods for doubly heterogeneous data sequences composed of interval grey numbers and discrete grey numbers. By uniformly partitioning each interval grey number, secondary interval grey numbers equal in number to the elements of the discrete grey numbers are obtained, achieving a "homogenizing" transformation of the grey heterogeneous data. On this basis, a grey prediction model for heterogeneous data sequences is constructed and applied to the effective simulation and accurate prediction of bridge settlement. The results are of positive significance for extending the scope of application of grey prediction models.
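
A minimal sketch of the "homogenizing" step as read from the abstract: an interval grey number [a, b] is split uniformly into as many sub-intervals as the matched discrete grey number has elements. The partitioning rule and the values used are an illustrative interpretation, not the article's full procedure.

```python
# Uniform partition of an interval grey number into equal sub-intervals.
import numpy as np

def uniform_partition(lower, upper, n_elements):
    """Split the interval grey number [lower, upper] into n_elements equal parts."""
    edges = np.linspace(lower, upper, n_elements + 1)
    return list(zip(edges[:-1], edges[1:]))

# Interval grey number [10, 16] matched to a discrete grey number with 3 elements.
print(uniform_partition(10.0, 16.0, 3))   # [(10, 12), (12, 14), (14, 16)]
```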

17.
Based on total population data for the Inner Mongolia Autonomous Region from 1950 to 2007, ARIMA(1,1,1) and GM(1,1) models are used to fit, analyze, and forecast the population time series. The results show that both models fit the data well, with the grey model fitting better. The GM(1,1) model is therefore used to forecast the total population of the Inner Mongolia Autonomous Region for 2010-2012.
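
For reference, a minimal GM(1,1) sketch is given below on a short made-up series; it illustrates the standard grey model mentioned above and is not the article's exact fit to the Inner Mongolia population data.

```python
# Standard GM(1,1): accumulate, fit the grey differential equation by least
# squares, forecast the accumulated series, then difference back.
import numpy as np

def gm11_forecast(x0, steps=3):
    """Fit GM(1,1) to the positive series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                               # accumulated generating series
    z1 = 0.5 * (x1[:-1] + x1[1:])                    # background (mean) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]       # restore original scale
    return x0_hat[n:]                                # out-of-sample forecasts

print(gm11_forecast([100, 104, 109, 113, 118, 124], steps=3))   # made-up series
```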

18.
Time series regression models have been widely studied in the literature by several authors. However, statistical analysis of replicated time series regression models has received little attention. In this paper, we study the application of the quasi-least squares method to estimate the parameters in a replicated time series model with errors that follow an autoregressive process of order p. We also discuss two other established methods for estimating the parameters: maximum likelihood assuming normality and the Yule-Walker method. When the number of repeated measurements is bounded and the number of replications n goes to infinity, the regression and autocorrelation parameters are consistent and asymptotically normal for all three methods of estimation. Essentially, the three methods estimate the regression parameter equally efficiently and differ only in how they estimate the autocorrelation. For p=2 and normal data, simulations show that the quasi-least squares estimate of the autocorrelation is clearly better than the Yule-Walker estimate and is as good as the maximum likelihood estimate over almost the entire parameter space.
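
Of the three estimation routes discussed, the Yule-Walker step is easy to sketch: solve the sample autocovariance equations for the AR(p) coefficients. The snippet below does this for a single simulated AR(2) series; it is a minimal illustration, not the replicated-series estimator of the paper.

```python
# Yule-Walker estimation of AR(p) coefficients from sample autocovariances.
import numpy as np

rng = np.random.default_rng(3)

def yule_walker(x, p=2):
    """Solve the Yule-Walker equations R * phi = r for the AR(p) coefficients."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Simulate an AR(2) series with phi = (0.5, -0.3) and recover the coefficients.
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
print(yule_walker(x, p=2))
```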

19.
This article discusses hypothesis tests and confidence regions with correct levels for the mean sojourn time of an M/M/1 queueing system. The uniformly most powerful unbiased tests for three common hypothesis testing problems are obtained, and the corresponding p-values are provided. Based on the duality between hypothesis tests and confidence sets, the uniformly most accurate confidence bounds are derived. A confidence interval with the correct level is proposed.

20.
How to compute confidence intervals for a proportion under different population and sample sizes
In sample surveys where the population, or the population subset of interest, is not large, it is often difficult to obtain a reasonable interval estimate for a proportion. This class of problems has become serious enough in survey practice that it can no longer be ignored. This article discusses issues that are often overlooked when estimating confidence intervals for proportions when the sample size is small and/or the population is small, and gives methods for computing confidence intervals in these different situations.
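
As one textbook option in this setting (not necessarily the method of the article), the sketch below computes a normal-approximation interval for a proportion with a finite population correction under simple random sampling without replacement; all inputs are illustrative.

```python
# Normal-approximation CI for a proportion with a finite population correction.
import math
from scipy.stats import norm

def proportion_ci_fpc(successes, n, N, alpha=0.05):
    """(1 - alpha) CI for a proportion when the population size N is small."""
    p = successes / n
    fpc = (N - n) / (N - 1) if N > 1 else 0.0     # finite population correction
    se = math.sqrt(p * (1 - p) / n * fpc)
    z = norm.ppf(1 - alpha / 2)
    return max(0.0, p - z * se), min(1.0, p + z * se)

print(proportion_ci_fpc(successes=12, n=40, N=150))
```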
