Similar Documents
20 similar documents found (search time: 93 ms).
1.
The paper proposes a formal estimation procedure for the parameters of the fractional Poisson process (fPp). Such procedures are needed to make the fPp model usable in applied situations. The basic idea of the fPp, motivated by experimental data with long memory, is to make the standard Poisson model more flexible by permitting non-exponential, heavy-tailed distributions of interarrival times and different scaling properties. We establish the asymptotic normality of our estimators for the two parameters appearing in our fPp model. This fact permits construction of the corresponding confidence intervals. The properties of the estimators are then tested using simulated data.
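For intuition, the interarrival times of an fPp follow a Mittag-Leffler law, which can be simulated with the standard Kozubowski–Rachev mixture representation. The sketch below (function names are my own; NumPy assumed) generates such heavy-tailed waiting times, the kind of synthetic data on which estimators like the paper's can be tested:

```python
import numpy as np

def mittag_leffler_rvs(mu, gamma, size, seed=None):
    """Mittag-Leffler waiting times with index mu in (0, 1] and scale gamma,
    via the Kozubowski-Rachev mixture representation:
    T = -gamma * log(U) * (sin(mu*pi)/tan(mu*pi*V) - cos(mu*pi))**(1/mu)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    if mu == 1.0:                       # reduces to the exponential law,
        return -gamma * np.log(u)       # i.e. the ordinary Poisson process
    v = rng.uniform(size=size)
    factor = (np.sin(mu * np.pi) / np.tan(mu * np.pi * v)
              - np.cos(mu * np.pi)) ** (1.0 / mu)
    return -gamma * np.log(u) * factor

# Heavy-tailed interarrival times and the resulting counting process.
dt = mittag_leffler_rvs(mu=0.9, gamma=1.0, size=5000, seed=42)
arrivals = np.cumsum(dt)
print("events in [0, 100]:", int(np.searchsorted(arrivals, 100.0)))
```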

2.
Life tables used in life insurance determine the age-at-death distribution only at integer ages. Actuaries therefore make fractional age assumptions to interpolate between integer-age values when they have to value payments that are not restricted to integer ages. Traditional fractional age assumptions, as well as the fractional independence assumption, are easy to apply but result in a non-intuitive overall shape of the force of mortality. Other proposed approaches either require expensive optimization procedures or produce many discontinuities. We suggest a new, computationally inexpensive algorithm to select the parameters within the LFM family introduced by Jones and Mereu (Insur Math Econ 27:261–276, 2000). In contrast to previously suggested methods, our algorithm enforces a monotone force of mortality between integer ages if the mortality rates are monotone, and it keeps the number of discontinuities small.
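To see why the traditional assumptions produce a non-intuitive force of mortality, consider a minimal sketch (hypothetical mortality rates) of two classical choices: under the uniform-distribution-of-deaths assumption the force rises within each year and drops back at the next integer age, while the constant-force assumption is flat within years with jumps at integers:

```python
import numpy as np

def mu_udd(q_x, s):
    """Force of mortality at age x+s under uniform distribution of deaths:
    mu = q_x / (1 - s*q_x), increasing within the year of age."""
    return q_x / (1.0 - s * q_x)

def mu_const_force(q_x, s):
    """Constant-force assumption: mu = -log(1 - q_x), flat within the year."""
    return -np.log(1.0 - q_x) * np.ones_like(s)

s = np.linspace(0.0, 0.9, 4)
for x, q_x in enumerate([0.010, 0.011]):          # hypothetical rates
    print(f"age {x}+s  UDD: {np.round(mu_udd(q_x, s), 5)}  "
          f"const: {np.round(mu_const_force(q_x, s), 5)}")
# UDD rises from q_x towards q_x/(1-q_x) and then falls back at the next
# integer age unless q_{x+1} is sufficiently larger -- the sawtooth shape
# that motivates smoother schemes such as the LFM family.
```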

3.
Parameter estimation with missing data is a frequently encountered problem in statistics. Imputation is often used to facilitate parameter estimation by simply applying the complete-sample estimators to the imputed dataset. In this article, we consider the problem of parameter estimation with nonignorable missing data using the approach of parametric fractional imputation proposed by Kim (2011). Using the fractional weights, the E-step of the EM algorithm can be approximated by the weighted mean of the imputed-data likelihood, where the fractional weights are computed from the current value of the parameter estimates. Calibration fractional imputation is also considered as a way of improving the Monte Carlo approximation in the fractional imputation. Variance estimation is discussed as well. Results from two simulation studies are presented to compare the proposed method with existing methods. A real-data example from the Korea Labor and Income Panel Survey (KLIPS) is also presented.
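A schematic of the fractional-weighting idea, simplified to an ignorable normal-mean problem (the article's nonignorable case adds a response model to the weights): imputations are drawn once from a proposal, and each EM iteration merely recomputes normalized likelihood-ratio weights before a weighted M-step:

```python
import numpy as np
from scipy.stats import norm

def pfi_normal(y, n_imp=200, n_iter=50, seed=0):
    """Schematic parametric fractional imputation (in the spirit of Kim, 2011)
    for the mean and variance of a normal sample with NaN-coded missing
    values.  Imputations are drawn once from a proposal h; each EM iteration
    recomputes fractional weights w ~ f(y*; theta)/h(y*), normalised within
    each unit, and the M-step is a weighted mean/variance."""
    rng = np.random.default_rng(seed)
    obs = y[~np.isnan(y)]
    n, n_mis = len(y), int(np.isnan(y).sum())
    mu, sig = obs.mean(), obs.std(ddof=1)
    y_star = rng.normal(mu, sig, size=(n_mis, n_imp))   # proposal draws
    h = norm.pdf(y_star, mu, sig)                       # proposal density, fixed
    for _ in range(n_iter):
        w = norm.pdf(y_star, mu, sig) / h               # importance ratios
        w /= w.sum(axis=1, keepdims=True)               # fractional weights
        mu = (obs.sum() + (w * y_star).sum()) / n
        ss = ((obs - mu) ** 2).sum() + (w * (y_star - mu) ** 2).sum()
        sig = np.sqrt(ss / n)
    return mu, sig

y = np.array([1.2, 0.7, np.nan, 2.1, np.nan, 1.5, 0.9, np.nan])
print(pfi_normal(y))
```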

4.
In statistical inference on the drift parameter a of a fractional Brownian motion W_t^H with Hurst parameter H ∈ (0, 1) and constant drift, Y_t^H = at + W_t^H, there are many options. We may, for example, base the inference on the properties of the standard normal distribution applied to the differences between the observed values of the process at discrete times. Although such methods are very simple, it turns out to be more appropriate to use inverse methods. Such methods can be generalized to non-constant drift. For hypothesis testing about the drift parameter a, it is more proper to standardize the observed process and to use inverse methods based on the first exit time of the observed process from a pre-specified interval up to some given time. These procedures are illustrated, and their times of decision are compared against the direct approach. Further generalizations are possible when the random part is a symmetric stochastic integral of a known, deterministic function with respect to fractional Brownian motion.
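For intuition, the drifted process Y_t^H = at + W_t^H can be simulated exactly on a grid by a Cholesky factorization of the fBm covariance. The sketch below (illustrative only, not the paper's inverse method) also shows the naive terminal-value estimator â = Y_T/T that the first-exit-time procedures are meant to improve upon:

```python
import numpy as np

def fbm_path(H, n, T=1.0, seed=0):
    """Exact fBm sample on a regular grid via Cholesky factorization of
    Cov(W_s, W_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return t, chol @ z

# Drifted observation Y_t = a*t + W_t^H and the naive estimator a_hat = Y_T/T
# (unbiased, since E[W_T^H] = 0); inverse first-exit-time methods refine it.
a, H = 0.8, 0.7
t, w = fbm_path(H, n=500, T=10.0, seed=1)
y = a * t + w
print("a_hat =", round(y[-1] / t[-1], 3))
```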

5.
Taguchi methods are currently attracting much attention, and certain cavalier interpretations of mean squares in saturated fractional designs have received criticism. After two examples illustrating the problem, some procedures are tentatively proposed for improving such analyses, but there is scope for refining these methods and for research into the general problem of using designs with no independent estimate of experimental error.

6.
This paper sets out to implement the Bayesian paradigm for fractional polynomial models under the assumption of normally distributed error terms. Fractional polynomials widen the class of ordinary polynomials and offer an additive and transportable modelling approach. The methodology is based on a Bayesian linear model with a quasi-default hyper-g prior and combines variable selection with parametric modelling of additive effects. A Markov chain Monte Carlo algorithm for the exploration of the model space is presented. This theoretically well-founded stochastic search constitutes a substantial improvement to ad hoc stepwise procedures for the fitting of fractional polynomial models. The method is applied to a data set on the relationship between ozone levels and meteorological parameters, previously analysed in the literature.
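For readers unfamiliar with fractional polynomials, here is a small sketch of the basis they generate, using the conventional power set (0 denotes the logarithm; a repeated power adds a log factor). The paper's Bayesian variable-selection layer then searches over such power combinations:

```python
import numpy as np

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP power set

def fp_term(x, p):
    """Single fractional-polynomial term; power 0 denotes log(x) by
    convention (x must be positive)."""
    return np.log(x) if p == 0 else x ** p

def fp2_design(x, p1, p2):
    """Design matrix of a degree-2 fractional polynomial.  A repeated power
    (p1 == p2) uses the conventional x^p and x^p * log(x) pair."""
    t1 = fp_term(x, p1)
    t2 = t1 * np.log(x) if p1 == p2 else fp_term(x, p2)
    return np.column_stack([np.ones_like(x), t1, t2])

x = np.linspace(0.5, 3.0, 6)
print(fp2_design(x, -1, -1).round(3))   # FP2(-1, -1): 1/x and log(x)/x
```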

7.
We propose a general class of Markov-switching ARFIMA (MS-ARFIMA) processes in order to combine the long-memory and Markov-switching strands of the literature. Although the coverage of this class of models is broad, we show that they can be easily estimated with the proposed Durbin–Levinson–Viterbi algorithm, which combines the Durbin–Levinson and Viterbi procedures. A Monte Carlo experiment reveals that the finite-sample performance of the proposed algorithm for a simple mixture model of a Markov-switching mean and an ARFIMA(1, d, 1) process is satisfactory. We apply the MS-ARFIMA models to US real interest rates and to the Nile river level data. The results are highly consistent with the conjectures made and the empirical results found in the literature. In particular, we confirm the conjecture in Beran and Terrin [J. Beran and N. Terrin, Testing for a change of the long-memory parameter. Biometrika 83 (1996), pp. 627–638] that observations 1 to about 100 of the Nile river data seem to be more independent than the subsequent observations, and that the value of the differencing parameter is lower for the first 100 observations than for the subsequent data.
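The Durbin–Levinson half of such an algorithm can be sketched for a single regime: compute the ARFIMA(0, d, 0) autocovariances and run the recursion to obtain one-step prediction errors and hence the Gaussian likelihood (a minimal sketch under these assumptions; the Viterbi layer over Markov states is omitted and function names are mine):

```python
import numpy as np
from scipy.special import gammaln

def acvf_fd(d, n, sigma2=1.0):
    """ARFIMA(0, d, 0) autocovariances: gamma(0) = s2*G(1-2d)/G(1-d)^2 and
    gamma(k) = gamma(k-1) * (k-1+d) / (k-d), valid for |d| < 1/2."""
    g = np.empty(n)
    g[0] = sigma2 * np.exp(gammaln(1 - 2 * d) - 2 * gammaln(1 - d))
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 + d) / (k - d)
    return g

def dl_loglik(x, d):
    """Gaussian log-likelihood of fractional noise via the Durbin-Levinson
    recursion (one-step prediction errors and their variances)."""
    n = len(x)
    g = acvf_fd(d, n)
    phi = np.zeros(n)
    v = g[0]
    ll = -0.5 * (np.log(2 * np.pi * v) + x[0] ** 2 / v)
    for t in range(1, n):
        k = (g[t] - phi[:t - 1] @ g[t - 1:0:-1]) / v   # reflection coefficient
        if t > 1:
            phi[:t - 1] = phi[:t - 1] - k * phi[t - 2::-1]
        phi[t - 1] = k
        v *= 1.0 - k ** 2
        pred = phi[:t] @ x[t - 1::-1]
        ll += -0.5 * (np.log(2 * np.pi * v) + (x[t] - pred) ** 2 / v)
    return ll

# Profile the likelihood over d for a white-noise sample (true d = 0).
x = np.random.default_rng(0).standard_normal(300)
grid = np.linspace(-0.4, 0.4, 17)
print("d_hat =", round(grid[np.argmax([dl_loglik(x, d) for d in grid])], 3))
```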

8.
The D-optimal minimax criterion is proposed to construct fractional factorial designs. The resulting designs are very efficient and robust against misspecification of the effects in the linear model. The criterion was first proposed by Wilmut & Zhou (2011); their work is limited to two-level factorial designs, however. In this paper we extend the criterion to designs whose factors may have any number of levels (including mixed levels) and explore several important properties of the criterion. Theoretical results are obtained for the construction of fractional factorial designs in general. This minimax criterion is not only scale invariant but also invariant under level permutations. Moreover, it can be applied to any run size, an advantage over some other existing criteria.

9.
This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range (-1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by mean squared error, depends on the stationarity/non-stationarity and persistence/anti-persistence of the series. The ML estimator was superior for stationary persistent processes; the wavelet-spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; and the weighted-periodogram-based estimator was superior for non-invertible anti-persistent processes.
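As an example of the semiparametric group (not necessarily one of the exact variants evaluated in the article), a GPH-type log-periodogram regression estimates d from the slope of the log-periodogram at low frequencies:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Semiparametric log-periodogram (GPH-type) estimate of the fractional
    integration order d: regress log I(lambda_j) on
    -2*log(2*sin(lambda_j/2)) over the first m Fourier frequencies."""
    n = len(x)
    m = m or int(n ** 0.5)                    # a common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    perio = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -2 * np.log(2 * np.sin(lam / 2))
    return np.polyfit(reg, np.log(perio), 1)[0]   # slope = d

# On white noise (d = 0) the estimate should hover near zero.
rng = np.random.default_rng(1)
print("d_hat =", round(gph_estimate(rng.standard_normal(2048)), 3))
```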

10.
Group sequential tests have been effective tools for monitoring long-term clinical trials. Several popular discrete sequential boundaries have been proposed for modeling interim analyses of clinical trials under the assumption of Brownian motion for the stochastic processes generated from test statistics. In this paper, we study the five sequential boundaries of Lan and DeMets (Biometrika 70:659–663, 1983) under fractional Brownian motion, which includes classical Brownian motion as a special case. An example from a real data set is used to illustrate the application of the boundaries.
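For reference, the classical O'Brien–Fleming-type spending function of Lan and DeMets under ordinary Brownian motion can be evaluated in a few lines; the fractional-Brownian setting of the paper modifies the underlying boundary-crossing probabilities:

```python
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type alpha-spending function of Lan & DeMets (1983):
    alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))),
    where t is the information fraction in (0, 1]."""
    z = norm.ppf(1 - alpha / 2)
    return 2 * (1 - norm.cdf(z / t ** 0.5))

# Cumulative and incremental alpha spent at four equally spaced looks.
fracs = [0.25, 0.5, 0.75, 1.0]
spent = [obf_spending(t) for t in fracs]
inc = [spent[0]] + [b - a for a, b in zip(spent, spent[1:])]
for t, c, i in zip(fracs, spent, inc):
    print(f"t={t:4.2f}  cumulative={c:.5f}  increment={i:.5f}")
```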

11.
This paper deals with the problem of estimating all the unknown parameters of a geometric fractional Brownian process from discrete observations. The estimation procedure is built upon a marriage of the quadratic variation and maximum likelihood approaches. The asymptotic properties of the estimators are provided. Moreover, we compare our method with the approach proposed by Misiran et al. [Fractional Black–Scholes models: complete MLE with application to fractional option pricing. In: International conference on optimization and control; Guiyang, China; 2010. p. 573–586], namely complete maximum likelihood estimation. Simulation studies confirm the theoretical findings and illustrate that our methodology is efficient and reliable. To show how to apply our approach in realistic contexts, an empirical study of the Chinese financial market is also presented.
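A minimal sketch of the quadratic-variation half of such a procedure (the likelihood step for drift and volatility is omitted): since E|X_{t+k} − X_t|² ∝ k^{2H} for fBm increments of the log-price, H can be read off a ratio of realized quadratic variations at two lags:

```python
import numpy as np

def hurst_qv(logprice):
    """Change-of-lag quadratic-variation estimator: for fBm increments,
    E|X_{t+k} - X_t|^2 = sigma^2 * k^{2H}, hence
    H = 0.5 * log2(QV at lag 2 / QV at lag 1)."""
    q1 = np.mean(np.diff(logprice) ** 2)
    q2 = np.mean((logprice[2:] - logprice[:-2]) ** 2)
    return 0.5 * np.log2(q2 / q1)

# Sanity check on ordinary Brownian motion, where H = 1/2.
rng = np.random.default_rng(2)
bm = 0.01 * np.cumsum(rng.standard_normal(10_000))
print("H_hat =", round(hurst_qv(bm), 3))
```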

12.
We propose a general Markov-regime-switching estimation procedure for both the long-memory parameter d and the mean of a time series. We employ a Viterbi algorithm that combines the Viterbi procedures in two-state Markov-switching parameter estimation. It is well known that the existence of a mean break and of long memory in a time series can easily be confused with each other. Thus, we aim at observing the deviation and interaction of the estimates of the mean and of d in different cases. A Monte Carlo experiment reveals that the finite-sample performance of the proposed algorithm for a simple mixture model of a Markov-switching mean and d varies with the fractional integration parameters and the mean values of the two regimes.

13.
We introduce a bootstrap procedure for high-frequency statistics of Brownian semistationary processes. More specifically, we focus on a hypothesis test on the roughness of sample paths of Brownian semistationary processes, which uses an estimator based on a ratio of realized power variations. Our new resampling method, the local fractional bootstrap, relies on simulating an auxiliary fractional Brownian motion that mimics the fine properties of high-frequency differences of the Brownian semistationary process under the null hypothesis. We prove the first-order validity of the bootstrap method, and in simulations, we observe that the bootstrap-based hypothesis test provides considerable finite-sample improvements over an existing test that is based on a central limit theorem. This is important when studying the roughness properties of time series data. We illustrate this by applying the bootstrap method to two empirical data sets: we assess the roughness of a time series of high-frequency asset prices and we test the validity of Kolmogorov's scaling law in atmospheric turbulence data.

14.
The area under the receiver operating characteristic (ROC) curve (AUC) is one of the most commonly used measures for evaluating or comparing the ability of markers to predict disease status. Motivated by an angiographic coronary artery disease (CAD) study, our objective is mainly to evaluate and compare the performance of several baseline plasma levels in the prediction of CAD-related vital status over time. Based on censored survival data, non-parametric estimators are proposed for the time-dependent AUC. The limiting Gaussian processes of the estimators and the estimated asymptotic variance–covariance functions enable us to construct confidence bands and develop testing procedures. Applications and finite-sample properties of the proposed estimation methods and inference procedures are demonstrated using CAD-related death data from the British Columbia Vital Statistics Agency and Monte Carlo simulations.
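A naive version of the cumulative/dynamic time-dependent AUC (deliberately ignoring the censoring adjustments that the article's estimators provide) is the probability that a random case failing by time t outscores a random control still at risk after t:

```python
import numpy as np

def auc_t(marker, time, event, t):
    """Naive cumulative/dynamic AUC(t): probability that a case (failure by
    time t) has a higher marker value than a control (still at risk after
    t).  Censoring-adjusted weights, as in the article, are omitted."""
    case = (time <= t) & (event == 1)
    ctrl = time > t
    if not case.any() or not ctrl.any():
        return np.nan
    mi, mj = np.meshgrid(marker[case], marker[ctrl], indexing="ij")
    return (mi > mj).mean() + 0.5 * (mi == mj).mean()

# Toy data: larger marker values shorten survival; no censoring here.
rng = np.random.default_rng(3)
m = rng.normal(size=400)
time = rng.exponential(np.exp(-m))
event = np.ones(400, dtype=int)
print("AUC(1.0) =", round(auc_t(m, time, event, 1.0), 3))
```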

15.
Combinatorial extension and composition methods have been used extensively in the construction of block designs. One of the composition methods, the direct product or Kronecker product method, was utilized by Chakravarti [1956] to produce certain types of fractional factorial designs. The present paper shows how the direct sum operation can be used to obtain, from initial fractional factorial designs for two separate symmetrical factorials, a fractional factorial design for the corresponding asymmetrical factorial. Specifically, we provide some results that are useful in the construction of non-singular fractional factorial designs via the direct sum composition method. In addition, a modified direct sum method is discussed and the consequences of imposing orthogonality are explored.

16.
To be useful to clinicians, prognostic and diagnostic indices must be derived from accurate models developed using appropriate data sets. We show that fractional polynomials, which extend ordinary polynomials by including non-positive and fractional powers, may be used as the basis of such models. We describe how to fit fractional polynomials in several continuous covariates simultaneously, and we propose ways of ensuring that the resulting models are parsimonious and consistent with basic medical knowledge. The methods are applied to two breast cancer data sets, one from a prognostic factors study in patients with positive lymph nodes and the other from a study to diagnose malignant or benign tumours using colour Doppler blood flow mapping. We investigate the problems of biased parameter estimates in the final model and of overfitting, using cross-validation calibration to estimate shrinkage factors. We adopt bootstrap resampling to assess model stability. We compare our new approach with conventional modelling methods which apply stepwise variable selection to categorized covariates. We conclude that fractional polynomial methodology can be very successful in generating simple and appropriate models.

17.
Interval-censored data arise when a failure time, say T, cannot be observed directly but can only be determined to lie in an interval obtained from a series of inspection times. The frequentist approach to analysing interval-censored data has been developed for some time now. Owing to the unavailability of software, it is very common in biological, medical and reliability studies to simplify the interval-censoring structure of the data into the more standard right-censoring situation by imputing the midpoints of the censoring intervals. In this paper, we apply the Bayesian approach by employing the numerical approximation procedures of Lindley (1980) and of Tierney and Kadane (1986) when the survival data under consideration are interval-censored. The Bayesian approach to interval-censored data has barely been discussed in the literature. The essence of this study is to explore and promote Bayesian methods when the survival data being analysed are interval-censored. We consider only a parametric approach, assuming that the survival data follow a log-logistic distribution. We illustrate the proposed methods with two real data sets, and a simulation study is carried out to compare the performance of the methods.
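The parametric building block is the interval-censored log-logistic likelihood, in which an observation known only to lie in (L, R] contributes S(L) − S(R). The sketch below (hypothetical data; SciPy assumed) maximizes this likelihood, the quantity that the Lindley and Tierney–Kadane approximations integrate against a prior:

```python
import numpy as np
from scipy.optimize import minimize

def loglogistic_sf(t, alpha, beta):
    """Log-logistic survival function S(t) = 1 / (1 + (t/alpha)^beta)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def neg_loglik(params, left, right):
    """Interval-censored log-likelihood: a failure known only to lie in
    (L, R] contributes S(L) - S(R); right censoring is coded as R = inf."""
    alpha, beta = np.exp(params)              # optimise on the log scale
    s_l = loglogistic_sf(left, alpha, beta)
    s_r = np.where(np.isinf(right), 0.0, loglogistic_sf(np.where(np.isinf(right), 1.0, right), alpha, beta))
    return -np.sum(np.log(np.clip(s_l - s_r, 1e-300, None)))

# Hypothetical inspection intervals (L, R]; inf marks right censoring.
L = np.array([0.0, 1.0, 2.0, 0.5, 3.0])
R = np.array([1.0, 2.0, 4.0, 1.5, np.inf])
fit = minimize(neg_loglik, x0=np.log([2.0, 1.5]), args=(L, R))
print("alpha, beta =", np.exp(fit.x).round(3))
```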

18.
This paper presents some innovative methods for modeling discrete scale invariant (DSI) processes and for estimating the corresponding parameters. For the case where the absolute values of the increments of a DSI process are in general increasing, we consider a moving sample variance of the increments and present a heuristic algorithm to characterize successive scale intervals. This enables us to estimate the scale parameter of such DSI processes. To provide a richer structure for modeling DSI processes, we allow for the possibility that the variations inside the prescribed scale intervals show further self-similar behavior. This enables us to provide more efficient estimators of the Hurst parameters. We also present two competitive estimation methods for the Hurst parameters of self-similar processes with stationary increments and prove their efficiency. Using simulated sample paths of a simple fractional Brownian motion, we show that our estimators of the Hurst parameter are more efficient than the celebrated convex rearrangement and quadratic variation methods. Finally, we apply the proposed methods to evaluate the DSI behavior of the S&P 500 index over a certain period.

19.
This paper examines the finite-sample behavior of the Lagrange Multiplier (LM) test for fractional integration proposed by Breitung and Hassler (J. Econom. 110:167–185, 2002). We find by extensive Monte Carlo simulations that size distortions can be quite large in small samples. These are caused by a finite-sample bias towards the alternative. Analytic expressions for this bias are derived, based on which the test can easily be corrected.
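A minimal version of the score regression behind the test (demeaning and the finite-sample bias correction derived in the paper are omitted): fractionally difference the series under the null, then regress the residual on a harmonically weighted sum of its own lags:

```python
import numpy as np

def frac_diff(y, d):
    """Fractional difference (1 - L)^d y via the binomial expansion
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    n = len(y)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return np.array([pi[:t + 1] @ y[t::-1] for t in range(n)])

def lm_frac_test(y, d0=0.0):
    """Score (LM) regression for H0: d = d0 in the spirit of Breitung &
    Hassler (2002): with e = (1-L)^{d0} y, regress e_t on
    x_t = sum_{k<t} e_{t-k}/k; the squared t-ratio is asymptotically
    chi-square(1) under H0."""
    e = frac_diff(np.asarray(y, float), d0)
    n = len(e)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = np.sum(e[i - 1::-1] / np.arange(1, i + 1))
    b = np.sum(x * e) / np.sum(x ** 2)
    resid = e - b * x
    se = np.sqrt(np.sum(resid ** 2) / (n - 1) / np.sum(x ** 2))
    return (b / se) ** 2

# Under H0 (white noise, d0 = 0) the statistic should be small on average.
rng = np.random.default_rng(4)
print("LM =", round(lm_frac_test(rng.standard_normal(500), d0=0.0), 3))
```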

20.
Life tables used in life insurance are often calibrated to show the survival function of the age-at-death distribution at exact integer ages. Actuaries usually make fractional age assumptions (FAAs) when they have to value payments that are not restricted to integer ages. Traditional FAAs have the advantage of simplicity but cannot be guaranteed to capture precisely the real trends of the survival function, and they sometimes even result in a non-intuitive overall shape of the force of mortality. In fact, an FAA is an interpolation between integer-age values that are accepted as given. In this article, we introduce the Kriging model, which is widely used as a metamodel for expensive simulations, to fit the survival function at integer ages, and we then use the precisely constructed survival function to build the force of mortality and the life expectancy. Experimental results obtained from a simulated (Makehamized) life table and two real life tables (the Chinese and US life tables) show that these actuarial quantities (survival function, force of mortality, and life expectancy) are represented much more accurately by the Kriging model than by the commonly used FAAs: the uniform distribution of deaths (UDD) assumption, the constant force assumption, and the Balducci assumption.
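A compact sketch of the idea using an off-the-shelf Gaussian process (Kriging) regressor on a hypothetical Makeham-type table (scikit-learn assumed; kernel and table values are illustrative, not the paper's calibration): the fitted curve interpolates the integer-age survival values, and the force of mortality follows as μ(x) = −S′(x)/S(x):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Integer-age survival values from a hypothetical Makeham-type life table.
ages = np.arange(0, 11).reshape(-1, 1)
surv = np.exp(-0.001 * ages.ravel() - 0.0005 * (1.1 ** ages.ravel() - 1))

# Kriging metamodel: a GP regression through the integer-age values.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-10,
                              normalize_y=True)
gp.fit(ages, surv)

# Survival at fractional ages; the force of mortality is obtained as the
# numerical derivative mu(x) = -S'(x) / S(x).
xs = np.linspace(0, 10, 101).reshape(-1, 1)
s_hat = gp.predict(xs)
mu_hat = -np.gradient(s_hat, xs.ravel()) / s_hat
print("S(4.5) =", round(float(s_hat[45]), 6),
      " mu(4.5) =", round(float(mu_hat[45]), 6))
```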
