20 similar documents found; search took 46 ms.
1.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
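Estimates of d like those cited above are typically obtained semiparametrically from the periodogram. As a hedged illustration only (the classical Geweke-Porter-Hudak log-periodogram regression, not the higher-order refinement this abstract develops), a minimal Python sketch:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d:
    regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)) over the first m
    Fourier frequencies; d is minus the regression slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # a common default bandwidth choice
    x = x - x.mean()
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    regressor = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

def simulate_arfima(n, d, burn=200, seed=0):
    """Truncated MA(inf) simulation of an ARFIMA(0, d, 0) series,
    with weights psi_k = prod_{j<=k} (j - 1 + d) / j."""
    rng = np.random.default_rng(seed)
    total = n + burn
    eps = rng.standard_normal(total)
    k = np.arange(1, total)
    psi = np.concatenate(([1.0], np.cumprod((k - 1.0 + d) / k)))
    return np.convolve(eps, psi)[:total][burn:]
```

With the seed fixed, `gph_estimate(simulate_arfima(4096, 0.4), m=128)` should land near 0.4; the bandwidth `m` trades bias against variance.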
2.
Eduardo Rossi, Econometric Reviews, 2014, 33(7): 785-814
A stylized fact is that realized variance has long memory. We show that, when the instantaneous volatility is a long memory process of order d, the integrated variance is characterized by the same long-range dependence. We prove that the spectral density of realized variance is given by the sum of the spectral density of the integrated variance plus that of a measurement error, due to the sparse sampling and market microstructure noise. Hence, the realized volatility has the same degree of long memory as the integrated variance. The additional term in the spectral density induces a finite-sample bias in the semiparametric estimates of the long memory. A Monte Carlo simulation provides evidence that the corrected local Whittle estimator of Hurvich et al. (2005) is much less biased than the standard local Whittle estimator and the empirical application shows that it is robust to the choice of the sampling frequency used to compute the realized variance. Finally, the empirical results suggest that the volatility series are more likely to be generated by a nonstationary fractional process.
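The standard local Whittle estimator referenced here (the uncorrected one, not the bias-corrected variant of Hurvich et al., 2005) minimizes a concentrated objective over the first m Fourier frequencies; a minimal sketch using a coarse grid search to keep it dependency-free:

```python
import numpy as np

def local_whittle(x, m):
    """Plain (uncorrected) local Whittle estimate of the memory parameter d:
    minimize R(d) = log( mean_j lambda_j^{2d} I_j ) - 2d * mean_j log lambda_j
    over the first m Fourier frequencies, via a coarse grid search."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    mean_log_lam = np.mean(np.log(lam))

    def objective(d):
        return np.log(np.mean(lam ** (2.0 * d) * periodogram)) - 2.0 * d * mean_log_lam

    grid = np.linspace(-0.49, 0.99, 1481)
    return min(grid, key=objective)
```

For short-memory input (e.g. white noise) the estimate should sit near zero; the finite-sample bias the abstract analyzes appears when realized variance is contaminated by the measurement-error term.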
3.
In this article, we consider two different shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common choice for the frailty distribution; to compare results with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit both models to a real-life bivariate survival dataset of acute leukemia remission times (Freireich et al., 1963). The analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using a Bayesian model selection criterion, and a well-fitting model is suggested for the acute leukemia data.
4.
This article presents a new class of realized stochastic volatility models based jointly on realized volatilities and returns. We generalize the traditionally used logarithm transformation of realized volatility to the Box–Cox transformation, a more flexible parametric family of transformations. A two-step maximum likelihood estimation procedure, based on Koopman and Scharth (2013), is introduced to estimate this model. Simulation results show that the two-step estimator performs well, and that a misspecified log transformation may lead to inaccurate parameter estimates and excess skewness and kurtosis. Finally, an empirical investigation of realized volatility measures and daily returns is carried out for several stock indices.
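The Box–Cox family nests the log transform (λ = 0) and, up to an affine shift, the identity (λ = 1); a minimal sketch of the transformation itself (the two-step estimator of the abstract is not reproduced here):

```python
import numpy as np

def box_cox(v, lam):
    """Box-Cox transform of a positive (volatility) series:
    (v**lam - 1) / lam for lam != 0, and log(v) at lam == 0."""
    v = np.asarray(v, dtype=float)
    if lam == 0.0:
        return np.log(v)
    return (v ** lam - 1.0) / lam
```

As λ → 0 the transform converges to the log, which is why the log specification is the special case being tested against the wider family.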
5.
Sanaullah et al. (2014) have suggested generalized exponential chain ratio estimators under stratified two-phase sampling scheme for estimating the finite population mean. However, the bias and mean square error (MSE) expressions presented in that work need some corrections, and consequently the study based on efficiency comparison also requires corrections. In this article, we revisit Sanaullah et al. (2014) estimator and provide the correct bias and MSE expressions of their estimator. We also propose an estimator which is more efficient than several competing estimators including the classes of estimators in Sanaullah et al. (2014). Three real datasets are used for efficiency comparisons.
6.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects of a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
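In its Levin-type form, with the odds ratio standing in for the relative risk (our reading; Leviton's exact formulation may differ), the attributable fraction is AF = p(OR − 1)/(1 + p(OR − 1)) for risk-factor prevalence p. A minimal sketch:

```python
def attributable_fraction(odds_ratio, prevalence):
    """Levin-type attributable fraction, using the odds ratio as an
    approximation to the relative risk (a sketch, not Leviton's exact form)."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)
```

A null association (OR = 1) gives AF = 0, and AF grows with both the strength of the association and the prevalence of the risk factor, which is what makes it usable as a population-level measure of a shared genetic or environmental effect.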
7.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics - Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with a small misclassification error used to validate the proposed selection methods. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
8.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is introduced to deflate test statistics that are large only because of the small standard error of a gene's expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without it. Motivated by the simulation results in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and we use simulation studies to investigate the power and the FDR control of the methods considered.
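The fudge factor enters the SAM statistic additively in the denominator, d_i = r_i/(s_i + s0), damping statistics that are large only because of a tiny standard error. Tusher et al. choose s0 to stabilize the coefficient of variation of the statistic; the percentile rule below is a simple illustrative surrogate, not their procedure:

```python
import numpy as np

def sam_statistic(r, s, s0):
    """Modified t-type SAM statistic d_i = r_i / (s_i + s0)."""
    return np.asarray(r, dtype=float) / (np.asarray(s, dtype=float) + s0)

def fudge_factor(s, q=50.0):
    # Illustrative surrogate choice: the q-th percentile of the
    # gene-wise standard errors (not Tusher et al.'s CV criterion).
    return np.percentile(s, q)
```

With s0 = 0, a gene with a minuscule standard error can dominate the ranking even when its effect size is small; adding s0 restores a sensible ordering.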
9.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically sound and practical way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries make it possible to achieve a satisfactory calibration even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a), Bielecki et al. (2014b), and Bielecki et al. (2014c).
10.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but at the cost of long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
11.
Tony Vangeneugden, Geert Verbeke, Clarice G.B. Demétrio, Journal of Applied Statistics, 2011, 38(2): 215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which simultaneously generalizes the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. Next to the general situation, some important special cases, such as exchangeable clustered outcomes, are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
12.
This article suggests an improved class of estimators under the general framework of a two-phase sampling scheme in the presence of two auxiliary variables. This class includes a large number of estimators (Chand, 1975; Kiregyera, 1980; Mukharjee et al., 1987) as well as the class of estimators suggested by Sahoo et al. (1993).
13.
Housila P. Singh, Communications in Statistics - Theory and Methods, 2013, 42(23): 4222-4238
This article considers some classes of estimators of the population median of the study variable that use information on an auxiliary variable, together with their properties under large-sample approximation. The asymptotically optimum estimator (AOE) in each class of estimators is investigated, along with approximate mean square error formulae. It is shown that the proposed classes of estimators are better than those considered by Gross (1980), Kuk and Mak (1989), Singh et al. (2003a), and Al and Cingi (2009). An empirical study is carried out to judge the merits of the suggested classes of estimators over other existing estimators.
14.
We propose a new ratio type estimator for estimating the finite population mean using two auxiliary variables in stratified two-phase sampling. Expressions for bias and mean squared error of the proposed estimator are derived up to the first order of approximation. The proposed estimator is more efficient than the usual stratified sample mean estimator, traditional stratified ratio estimator and some other stratified estimators including Bahl and Tuteja (1991), Chami et al. (2012), Chand (1975), Choudhury and Singh (2012), Hamad et al. (2013), Vishwakarma and Gangele (2014), Sanaullah et al. (2014), and Chanu and Singh (2014).
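For orientation, the classical single-auxiliary ratio estimator that these stratified two-phase variants build on scales the sample mean of y by the known population mean of x; a minimal sketch (not the proposed estimator):

```python
import numpy as np

def ratio_estimator(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    ybar * (Xbar / xbar), where Xbar is the known population mean
    of the auxiliary variable x."""
    return np.mean(y_sample) * x_pop_mean / np.mean(x_sample)
```

When y is exactly proportional to x the estimator is exact, which is the intuition behind its efficiency gain whenever y and x are strongly positively correlated.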
15.
The extended exponential distribution of Nadarajah and Haghighi (2011) is an alternative to the gamma, Weibull, and exponentiated exponential distributions, and it provides better fits whenever the data contain zero values. We establish recurrence relations for the single and product moments of order statistics from the extended exponential distribution. These recurrence relations enable the computation of the means, variances, and covariances of all order statistics for all sample sizes in a simple and efficient manner. Using these relations, we tabulate the means, variances, and covariances of order statistics and derive best linear unbiased estimators for the extended exponential distribution. Finally, a data application is provided.
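The recurrence relations themselves are paper-specific, but the moments they target can be cross-checked by simulation: the Nadarajah-Haghighi CDF F(x) = 1 − exp(1 − (1 + λx)^α) inverts in closed form, so order-statistic means follow from sorting inverse-transform draws. A sketch:

```python
import numpy as np

def nh_ppf(u, alpha, lam):
    """Inverse CDF of the Nadarajah-Haghighi distribution,
    F(x) = 1 - exp(1 - (1 + lam * x)**alpha)."""
    return ((1.0 - np.log(1.0 - u)) ** (1.0 / alpha) - 1.0) / lam

def order_stat_means(alpha, lam, n, reps=20000, seed=0):
    """Monte Carlo means of all n order statistics from samples of size n."""
    rng = np.random.default_rng(seed)
    samples = np.sort(nh_ppf(rng.random((reps, n)), alpha, lam), axis=1)
    return samples.mean(axis=0)
```

At α = λ = 1 the distribution reduces to the unit exponential, whose order-statistic means are known in closed form (e.g. 1/2 and 3/2 for n = 2), giving a convenient sanity check.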
16.
This paper applies a Bayesian model to a clinical trial study to determine a more effective treatment for lowering mortality rates, and consequently increasing survival times, among patients with lung cancer. In this study, Qian et al. [13] sought to determine whether a Weibull survival model can be used to decide whether to stop a clinical trial. The traditional Gibbs sampler was used to estimate the model parameters. This paper proposes using the independent steady-state Gibbs sampling (ISSGS) approach, introduced by Dunbar et al. [3], to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in this application.
17.
For the first time, we provide a matrix formula for the second-order covariances of maximum likelihood estimates in heteroskedastic generalized linear models, thus generalizing the results of Cordeiro (2004) and Cordeiro et al. (2006) for generalized linear models with known and unknown dispersion parameter, respectively. The covariance matrix formula does not involve cumulants of log-likelihood derivatives and can be obtained easily using simple matrix operations. We apply our main result to a simple model. Simulations show that the second-order covariances can be quite pronounced in small to moderate samples. The usual covariances of the maximum likelihood estimates can be corrected by these second-order covariances.
18.
The continuous quadratic variation of asset return plays a critical role for high-frequency trading. However, the microstructure noise could bias the estimation of the continuous quadratic variation. Zhang et al. (2005) proposed a batch estimator for the continuous quadratic variation of high-frequency data in the presence of microstructure noise. It gives the estimates after all the data arrive. This article proposes a recursive version of their estimator that outputs variation estimates as the data arrive. Our estimator gives excellent estimates well before all the data arrive. Both real high-frequency futures data and simulation data confirm the performance of the recursive estimator.
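The recursive idea can be illustrated at its simplest with an online realized-variance accumulator that updates on every tick; this sketch deliberately omits the two-scale microstructure-noise correction of Zhang et al. (2005):

```python
import numpy as np

class RecursiveRealizedVariance:
    """Online realized-variance accumulator: adds the squared log-return
    to a running sum as each new price arrives, so an estimate is
    available at every tick rather than only after all data arrive."""

    def __init__(self):
        self.last_log_price = None
        self.rv = 0.0

    def update(self, price):
        lp = np.log(price)
        if self.last_log_price is not None:
            self.rv += (lp - self.last_log_price) ** 2
        self.last_log_price = lp
        return self.rv
```

In the presence of microstructure noise this raw sum is biased upward as the sampling frequency grows, which is exactly the problem the batch and recursive estimators discussed above are designed to correct.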
19.
Tony Vangeneugden, Geert Molenberghs, Geert Verbeke, Clarice G.B. Demétrio, Communications in Statistics - Theory and Methods, 2014, 43(19): 4164-4178
In hierarchical data settings, be they longitudinal, spatial, multi-level, clustered, or otherwise repeated in nature, the association between repeated measurements often attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to the intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion, and it allows for closed-form means, variance functions, and correlation functions for a variety of outcome types and link functions. Unfortunately, for binary data with the logit link, closed forms cannot be obtained; this is in contrast with the probit link, for which they can be derived. We therefore concentrate on the probit case, which is of interest not only in its own right but also as an instrument to approximate the logit case, thanks to the well-known probit-logit “conversion.” Beyond the general situation, some important special cases, such as exchangeable clustered outcomes, receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
20.
In this research, multiple dependent state and repetitive group sampling are used to design a variable sampling plan based on one-sided process capability indices, which consider the quality of the current lot as well as the quality of the preceding lots. The sample size and critical values of the proposed plan are determined by minimizing the average sample number while satisfying the producer's risk and consumer's risk at corresponding quality levels. In addition, comparisons are made with the existing sampling plans [Pearn and Wu (2006a), Yen et al. (2015)] in terms of average sample number and operating characteristic curve. Finally, an example is provided to illustrate the proposed plan.