Similar Literature (20 matches found)
1.
In this article, we propose a robust statistical approach to select an appropriate error distribution in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure, the local adaptive volatility estimation of Mercurio and Spokoiny (Ann. Stat. 32, 2004, pp. 577–602). The motivation for using this method is to avoid a possible misspecification of the model for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures (Berk–Jones tests, kernel density-based selection, censored likelihood score, and coverage probability) based on the resulting residuals. These methods make it possible to assess the global fit of a set of distributions as well as to focus on their behaviour in the tails, allowing us to map the strengths and weaknesses of the candidate distributions. A bootstrap procedure is provided to compute the rejection regions in this semiparametric context. Finally, we illustrate our methodology through a small simulation study and an application to three time series of daily returns (UBS stock returns, BOVESPA returns, and EUR/USD exchange rates).
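The abstract does not spell out the estimation details; as a rough illustration of the two-step idea, the sketch below substitutes a simple rolling-window volatility estimate for the local adaptive volatility estimator and a Kolmogorov–Smirnov test for the Berk–Jones and censored-likelihood criteria. Function names, the window length, and the candidate distributions are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def rolling_volatility(returns, window=50):
    """Crude stand-in for the local adaptive volatility estimator:
    a trailing-window standard deviation using only past observations."""
    sigma = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        sigma[t] = returns[t - window:t].std(ddof=1)
    return sigma

def compare_error_distributions(returns, window=50):
    """Standardize returns by the estimated volatility and test candidate
    error distributions on the residuals (global fit only; no tail focus)."""
    sigma = rolling_volatility(returns, window)
    resid = returns[window:] / sigma[window:]
    candidates = {"normal": stats.norm, "student_t(5)": stats.t(df=5), "laplace": stats.laplace}
    return {name: stats.kstest(resid, dist.cdf) for name, dist in candidates.items()}

# toy usage on simulated heteroscedastic returns with Student-t innovations
rng = np.random.default_rng(0)
vol = np.exp(0.1 * np.cumsum(0.05 * rng.standard_normal(2000)))
r = vol * rng.standard_t(df=5, size=2000)
for name, res in compare_error_distributions(r).items():
    print(f"{name:12s} KS={res.statistic:.3f}  p={res.pvalue:.3f}")
```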

2.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001, Journal of the American Statistical Association 96(453): 42–55; Martens et al., 2004, Tinbergen Institute Discussion Paper 2004-067/4). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, Econometric Theory 20(3): 464–484; 2005, The Econometrics Journal 8: 367–379) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
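The higher-order expansions of Lieberman and Phillips are not reproduced here; as a minimal illustration of how a memory parameter near 0.4 is estimated in practice, the sketch below applies the standard Geweke–Porter-Hudak (GPH) log-periodogram regression to a simulated ARFIMA(0, d, 0) series. The simulation routine and the bandwidth m = n^0.5 are illustrative choices, not the paper's method.

```python
import numpy as np

def simulate_arfima_0d0(n, d, rng):
    """Fractional noise via the truncated MA(inf) expansion of (1 - L)^(-d)."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]

def gph_estimate(x, bandwidth_power=0.5):
    """Geweke-Porter-Hudak log-periodogram regression estimate of d."""
    n = len(x)
    m = int(n ** bandwidth_power)                       # number of low frequencies used
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                             # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(lam / 2) ** 2)       # slope of this regressor is d
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
x = simulate_arfima_0d0(5000, d=0.4, rng=rng)
print(f"GPH estimate of d: {gph_estimate(x):.3f}")      # should be near 0.4
```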

3.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. (Bioinformatics 22, 2006, pp. 507–513). In this paper, integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain some noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weight method provides better clustering performance based on the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of the methods of Pal et al. (IEEE Trans. Neural Netw. 11, 2000, pp. 366–376), Wang et al. (Pattern Recognit. Lett. 25, 2004, pp. 1123–1132), and Hung et al. (Pattern Recognit. Lett. 29, 2008, pp. 1317–1325).
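The abstract does not give the exact weighting rule; the sketch below only illustrates the SVD-entropy feature-contribution idea of Varshavsky et al. (2006) on which the proposed weights build. The conversion of contributions into clustering weights at the end is purely an illustrative assumption.

```python
import numpy as np

def svd_entropy(X):
    """Normalized SVD entropy of a data matrix: entropy of the squared
    singular values rescaled to sum to one (Varshavsky et al., 2006)."""
    s = np.linalg.svd(X, compute_uv=False)
    v = s ** 2 / np.sum(s ** 2)
    v = v[v > 1e-12]
    return -np.sum(v * np.log(v)) / np.log(len(v))

def feature_contributions(X):
    """Leave-one-feature-out contribution of each variable to the SVD entropy,
    CE_j = E(X) - E(X without column j)."""
    base = svd_entropy(X)
    return np.array([base - svd_entropy(np.delete(X, j, axis=1))
                     for j in range(X.shape[1])])

# toy usage: two variables carrying cluster structure plus one noise variable
rng = np.random.default_rng(2)
informative = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
X = np.hstack([informative, rng.normal(0, 1, (100, 1))])
ce = feature_contributions(X)
weights = np.abs(ce) / np.abs(ce).sum()     # illustrative rescaling to clustering weights
print("contributions:", np.round(ce, 3), "weights:", np.round(weights, 3))
```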

4.
This article is concerned with the sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, Econometrics Journal 14: 25–47) and Baltagi et al. (2012, Journal of Econometrics 170: 164–177), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics that are similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
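For orientation, John's sphericity statistic U, the quantity the panel versions build on, can be computed from a matrix of residuals as below. The bias corrections and transformed residuals of the article are not included; a Monte Carlo null is used instead of any particular asymptotic approximation.

```python
import numpy as np

def john_U(E):
    """John's sphericity statistic U = (1/p) tr[(S / (tr(S)/p) - I_p)^2],
    computed from an n x p residual matrix E (rows = time, columns = units).
    U = 0 iff the sample covariance is exactly proportional to the identity."""
    p = E.shape[1]
    S = np.cov(E, rowvar=False, bias=True)
    R = S / (np.trace(S) / p) - np.eye(p)
    return np.trace(R @ R) / p

def john_pvalue_mc(E, n_sim=2000, seed=0):
    """Monte Carlo p-value under H0 (spherical Gaussian errors); this sidesteps
    the choice of asymptotic approximation, which differs across panel settings."""
    rng = np.random.default_rng(seed)
    n, p = E.shape
    obs = john_U(E)
    null = np.array([john_U(rng.standard_normal((n, p))) for _ in range(n_sim)])
    return obs, (null >= obs).mean()

# toy usage: residuals with cross-sectional correlation should give a small p-value
rng = np.random.default_rng(3)
common = rng.standard_normal((200, 1))
E = 0.8 * common + rng.standard_normal((200, 5))
print(john_pvalue_mc(E))
```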

5.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, which makes them preferable to those in Chen et al. (2010, Journal of Econometrics 157: 257–271). The estimators proposed here also have fast convergence rates and are simple to implement.

6.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013, Econometric Reviews 32: 650–685) by the inclusion of a spatial lag term. The estimation method utilizes the Generalized Moments method suggested by Kapoor et al. (2007, Journal of Econometrics 127(1): 97–130) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011, Econometrics Journal 14: 48–76) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.

7.
In this article, we investigate the relationships among intraday serial correlation, jump-robust volatility, and positive and negative jumps, based on Shanghai Composite Index high-frequency data. We implement the variance ratio test to quantify intraday serial correlation. We also measure the continuous part of realized volatility using the jump-robust MedRV estimator and disentangle positive and negative jumps using the Realized Downside Risk Measure and the Realized Upside Potential Measure proposed by Bi et al. (2013, Communications in Statistics – Simulation and Computation 42(4): 741–754). We find that intraday serial correlation is positively correlated with jump-robust volatility and negatively correlated with negative jumps, which confirms the LeBaron effect.
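A minimal sketch of two of the ingredients computed from intraday returns: the jump-robust MedRV estimator of Andersen, Dobrev, and Schaumburg, and the realized semivariances that split realized variance into downside and upside parts. The latter are used here as a stand-in for the downside/upside measures of Bi et al. (2013), whose exact definitions may differ.

```python
import numpy as np

def medrv(returns):
    """Jump-robust MedRV estimator of integrated variance
    (Andersen, Dobrev, and Schaumburg, 2012)."""
    r = np.asarray(returns)
    n = len(r)
    med = np.median(np.abs(np.column_stack([r[:-2], r[1:-1], r[2:]])), axis=1)
    scale = np.pi / (6 - 4 * np.sqrt(3) + np.pi)
    return scale * n / (n - 2) * np.sum(med ** 2)

def realized_semivariances(returns):
    """Downside/upside realized semivariances: sums of squared negative and
    positive intraday returns, which add up to the ordinary realized variance."""
    r = np.asarray(returns)
    return np.sum(r[r < 0] ** 2), np.sum(r[r > 0] ** 2)

# toy usage: Brownian-type returns contaminated with one upward jump
rng = np.random.default_rng(4)
r = 0.001 * rng.standard_normal(390)
r[200] += 0.02                                   # a single positive jump
rv = np.sum(r ** 2)
down, up = realized_semivariances(r)
print(f"RV={rv:.6f}  MedRV={medrv(r):.6f}  RS-={down:.6f}  RS+={up:.6f}")
```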

8.
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, Journal of the American Statistical Association 89: 846–866; 1995, Journal of the American Statistical Association 90: 106–121) introduced the augmented inverse probability weighting (AIPW) method for the estimation of average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952, Journal of the American Statistical Association 47: 663–685); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments, employing three estimating functions, which can generate estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimation of the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification.
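The empirical-likelihood hybrid proposed in the paper is not reproduced here; for context, the sketch below shows the AIPW benchmark it is compared against, with a logistic working propensity score and linear working outcome regressions (the model choices and the simulated data are illustrative).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, T, Y):
    """Augmented inverse probability weighting estimate of the average
    treatment effect E[Y(1) - Y(0)] (doubly robust: consistent if either
    the propensity model or the two outcome regressions are correct)."""
    e = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
    m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)
    mu1 = np.mean(T * Y / e - (T - e) / e * m1)
    mu0 = np.mean((1 - T) * Y / (1 - e) + (T - e) / (1 - e) * m0)
    return mu1 - mu0

# toy usage: the true treatment effect is 2
rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 3))
p = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, p)
Y = 2 * T + X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(2000)
print(aipw_ate(X, T, Y))    # should be close to 2
```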

9.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested by Lesaffre, Rizopoulos, and Tsonaka (Biostatistics 8, 2007, pp. 72–85), Molas and Lesaffre (Stat. Med. 27, 2008, pp. 6612–6633), and Tsonaka, Rizopoulos, and Lesaffre (Stat. Med. 25, 2006, pp. 4241–4252) to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes used to predict the current clinical status of rheumatoid arthritis patients, as measured by the Disease Activity Score in 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.

10.
This article describes how diagnostic procedures were derived for symmetrical nonlinear regression models, continuing the work carried out by Cysneiros and Vanegas (2008, Statist. Probab. Lett. 78: 3269–3273) and Vanegas and Cysneiros (2010, Comput. Statist. Data Anal. 54: 1002–1016), who showed that the parameter estimates in nonlinear models are more robust with heavy-tailed than with normal errors. In this article, we focus on assessing whether the robustness of this kind of model is also observed in the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models admit all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms all have equal variance. The results of simulation studies describing the behavior of the proposed test for heteroscedasticity in the presence of outliers are then given. To assess the robustness of the inference process, we present the results of a simulation study describing the behavior of the partial F-test in the presence of outliers. Some diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As an illustration, a dataset described in Venables and Ripley (2002, Modern Applied Statistics with S, 4th ed., Springer) is also analyzed.

11.
Classification and regression trees have been useful in medical research for constructing algorithms for disease diagnosis or prognostic prediction. Jin et al. (2004, Med. Decis. Mak. 24: 386–398) developed a robust and cost-saving tree (RACT) algorithm with an application to the classification of hip fracture risk after 5-year follow-up, based on data from the Study of Osteoporotic Fractures (SOF). Although conventional recursive partitioning algorithms are well developed, they still have some limitations: binary splits may generate a big tree with many layers, while trinary splits may produce too many nodes. In this paper, we propose a classification approach combining trinary splits and binary splits to generate a trinary–binary tree. A new non-inferiority test of entropy is used to select the binary or trinary splits. We apply the modified method to the SOF data to construct a trinary–binary classification rule for predicting the risk of osteoporotic hip fracture. Our new classification tree has good statistical utility: it is statistically non-inferior to the optimum binary tree and the RACT based on the testing sample and is also cost-saving. It may be useful in clinical applications: femoral neck bone mineral density, age, height loss, and weight gain since age 25 can identify subjects with elevated 5-year hip fracture risk without loss of statistical efficiency.
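The non-inferiority test of entropy is not reproduced here; the sketch below only computes the underlying quantity, the weighted child entropy of a candidate binary or trinary split on a numeric predictor, and uses a fixed margin delta as an illustrative stand-in for the formal test.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a class-label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def split_entropy(x, y, cutpoints):
    """Weighted average entropy of the children produced by splitting the
    numeric predictor x at one cutpoint (binary) or two cutpoints (trinary)."""
    edges = [-np.inf] + sorted(cutpoints) + [np.inf]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x > lo) & (x <= hi)
        if mask.any():
            total += mask.mean() * entropy(y[mask])
    return total

# toy usage: prefer the simpler binary split unless the trinary split improves
# the entropy by more than an (illustrative) non-inferiority margin delta
rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 500)
y = (x > 0.6).astype(int)
binary = split_entropy(x, y, [0.6])
trinary = split_entropy(x, y, [0.3, 0.6])
delta = 0.01
print("binary", round(binary, 4), "trinary", round(trinary, 4),
      "choose trinary" if trinary < binary - delta else "choose binary")
```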

12.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context by Molenberghs, Verbeke, and Demétrio (Lifetime Data Anal. 13, 2007, pp. 457–511) and Molenberghs, Verbeke, Demétrio, and Vieira (Statist. Sci. 25, 2010, pp. 325–347), is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some aspects of Bayesian model selection, using a pivotal quantity proposed by Johnson (Bayesian Anal. 2, 2007, pp. 719–734).

13.
The potential observational equivalence between various types of nonlinearity and long memory has been recognized by the econometrics community since at least the contribution of Diebold and Inoue (2001, Journal of Econometrics 105: 131–159). A large literature has developed in an attempt to ascertain whether or not the long memory finding in many economic series is spurious. Yet to date, no study has analyzed the consequences of using long memory methods to test for unit roots when the "truth" derives from regime switching, structural breaks, or other types of mean-reverting nonlinearity. In this article, I conduct a comprehensive Monte Carlo analysis to investigate the consequences of using tests designed to have power against fractional integration when the actual data generating process is unknown. I additionally consider the use of tests designed to have power against breaks and threshold nonlinearity. The findings are compelling and demonstrate that the use of long memory as an approximation to nonlinearity yields tests with relatively high power. In contrast, misspecification has severe consequences for tests designed to have power against threshold nonlinearity, and especially for tests designed to have power against breaks.
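The paper's Monte Carlo design is not reproduced here; the sketch below merely illustrates the observational-equivalence point: a short-memory Markov switching-mean process with rare regime changes produces the slowly decaying sample autocorrelations usually associated with long memory. The transition probability and noise scale are illustrative.

```python
import numpy as np

def markov_switching_mean(n, p_stay=0.999, means=(-1.0, 1.0), sigma=1.0, seed=7):
    """Two-state Markov switching mean plus white noise; the process is
    short-memory but switches regime only rarely."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    for t in range(1, n):
        state[t] = state[t - 1] if rng.random() < p_stay else 1 - state[t - 1]
    return np.asarray(means)[state] + sigma * rng.standard_normal(n)

def sample_acf(x, max_lag):
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

x = markov_switching_mean(20000)
acf = sample_acf(x, 200)
print("ACF at lags 1, 50, 100, 200:", np.round(acf[[0, 49, 99, 199]], 3))
# the autocorrelations decay very slowly, mimicking fractional integration
```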

14.
The Significance Analysis of Microarrays (SAM) method (Tusher et al., 2001, Proc. Natl. Acad. Sci. 98: 5116–5121) is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to the denominator to deflate large test statistic values caused by genes with small standard errors. Lin et al. (2008, Biometrical Journal 50(5): 801–823) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power and the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend the study to compare several methods for choosing the fudge factor in the modified t-type test statistic and use simulation studies to investigate the power and the FDR control of the considered methods.
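A minimal sketch of the SAM-type modified t statistic with a fudge factor s0 added to the denominator. Taking s0 as a quantile of the gene-wise standard errors is a common simplification; the full SAM procedure selects s0 by minimizing a coefficient-of-variation criterion, which is not implemented here.

```python
import numpy as np

def sam_statistics(X1, X2, s0_quantile=0.05):
    """Modified t-type SAM statistics d_i = (mean1_i - mean2_i) / (s_i + s0),
    where s_i is the usual two-sample pooled standard error of gene i and the
    fudge factor s0 is taken as a quantile of the s_i (a simplified choice)."""
    n1, n2 = X1.shape[1], X2.shape[1]
    diff = X1.mean(axis=1) - X2.mean(axis=1)
    pooled_var = ((n1 - 1) * X1.var(axis=1, ddof=1) +
                  (n2 - 1) * X2.var(axis=1, ddof=1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    s0 = np.quantile(s, s0_quantile)
    return diff / (s + s0), diff / s          # with and without the fudge factor

# toy usage: 1000 genes, the first 50 differentially expressed, the last 100 with tiny variance
rng = np.random.default_rng(8)
X1 = rng.normal(0, 1, (1000, 10))
X2 = rng.normal(0, 1, (1000, 10))
X2[:50] += 1.0
X1[900:] *= 0.05
X2[900:] *= 0.05                               # small-variance genes
d_fudge, d_plain = sam_statistics(X1, X2)
print("largest |statistic| among small-variance genes without fudge factor:",
      round(np.max(np.abs(d_plain[900:])), 2),
      "with fudge factor:", round(np.max(np.abs(d_fudge[900:])), 2))
```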

15.
This article considers estimation of Panel Vector Autoregressive models of order 1 (PVAR(1)), with a focus on fixed-T consistent estimation methods in First Differences (FD) with additional strictly exogenous regressors. Additional results are provided for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010, Econometric Theory 26: 119–151). Furthermore, we simplify the analysis of Binder et al. (2005, Econometric Theory 21: 795–837) by providing additional analytical results, and we extend the original model by taking into account possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in the three-wave panel the log-likelihood function of the unrestricted Transformed Maximum Likelihood (TML) estimator may violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.

16.
Many articles that have estimated models with forward-looking expectations have reported that the magnitude of the coefficient on the expectations term is very large when compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the feeling that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation. An alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficient on the expectations variable based on a model where we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias and note that structural change can affect the quality of instruments. We also look at some empirical work in Castle et al. (2014, Econometric Reviews 33(5–6): 553–574, doi:10.1080/07474938.2013.825137) on the new Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. (2014) highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the fact that there are weak instruments in the estimated re-specified model, it would seem that the forward coefficient estimate is actually quite high rather than low.
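The paper's structural analysis is not reproduced here; a standard first-pass diagnostic for the weak-instrument explanation is the first-stage F statistic on the excluded instruments, sketched below on simulated data. The rule-of-thumb threshold of 10 is the usual informal benchmark, not a result from the article.

```python
import numpy as np
import statsmodels.api as sm

def first_stage_F(endog_regressor, instruments, exog=None):
    """F statistic for the joint significance of the excluded instruments in the
    first-stage regression of the endogenous regressor (e.g., expected inflation)
    on the instruments and any included exogenous variables."""
    n = len(endog_regressor)
    X_restricted = np.ones((n, 1)) if exog is None else sm.add_constant(exog)
    X_full = np.column_stack([X_restricted, instruments])
    full = sm.OLS(endog_regressor, X_full).fit()
    restricted = sm.OLS(endog_regressor, X_restricted).fit()
    q = instruments.shape[1]
    return ((restricted.ssr - full.ssr) / q) / (full.ssr / full.df_resid)

# toy usage: instruments only weakly related to the endogenous regressor
rng = np.random.default_rng(9)
Z = rng.standard_normal((500, 3))
x_endog = 0.05 * Z[:, 0] + rng.standard_normal(500)   # weak first stage
print("first-stage F:", round(first_stage_F(x_endog, Z), 2),
      "(rule of thumb: F < 10 signals weak instruments)")
```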

17.
In application areas like bioinformatics, multivariate distributions on angles that show significant clustering are encountered. One approach to the statistical modeling of such situations is to use mixtures of unimodal distributions. In the literature (Mardia et al., 2012, J. Appl. Stat. 39: 2475–2492), the multivariate von Mises distribution, also known as the multivariate sine distribution, has been suggested for the components of such models, but work in the area has been hampered by the fact that no good criteria for the von Mises distribution to be unimodal were available. In this article we study the question of when a multivariate von Mises distribution is unimodal. We give sufficient criteria for this to be the case and show examples of distributions with multiple modes when these criteria are violated. In addition, we propose a method to generate samples from the von Mises distribution in the case of high concentration.
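The high-concentration sampler proposed in the article is not reproduced here; as a baseline, a Gibbs sampler for the bivariate sine (multivariate von Mises) model can exploit the fact that each full conditional is a univariate von Mises distribution. The parameterization and parameter values below are illustrative.

```python
import numpy as np

def gibbs_bivariate_sine(n, mu1, mu2, k1, k2, lam, burn=500, seed=10):
    """Gibbs sampler for the bivariate sine model with density proportional to
    exp{k1 cos(t1-mu1) + k2 cos(t2-mu2) + lam sin(t1-mu1) sin(t2-mu2)}.
    Each full conditional is a univariate von Mises distribution, so numpy's
    vonmises generator can be used directly."""
    rng = np.random.default_rng(seed)
    t1, t2 = mu1, mu2
    out = np.empty((n, 2))
    for i in range(n + burn):
        b = lam * np.sin(t2 - mu2)                       # conditional of t1 given t2
        t1 = rng.vonmises(mu1 + np.arctan2(b, k1), np.hypot(k1, b))
        b = lam * np.sin(t1 - mu1)                       # conditional of t2 given t1
        t2 = rng.vonmises(mu2 + np.arctan2(b, k2), np.hypot(k2, b))
        if i >= burn:
            out[i - burn] = (t1, t2)
    return out

# toy usage: fairly concentrated, positively dependent angles
samples = gibbs_bivariate_sine(5000, mu1=0.0, mu2=1.0, k1=8.0, k2=8.0, lam=4.0)
print("circular means:", np.round(np.angle(np.exp(1j * samples).mean(axis=0)), 3))
```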

18.
The construction of wider families of continuous distributions has recently attracted applied statisticians, owing to the analytical facilities available in programming software for the easy computation of special functions. We study some general mathematical properties of the log-gamma-generated (LGG) family defined by Amini, MirMostafaee, and Ahmadi (2014, Statistics 48: 913–932). It generalizes the gamma-generated class pioneered by Ristić and Balakrishnan (2012, Journal of Statistical Computation and Simulation 82: 1191–1206). We present some of its special models and derive explicit expressions for the ordinary and incomplete moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves, Shannon entropy, Rényi entropy, reliability, and order statistics. Models in this family are compared with nested and non-nested models. Further, we propose and study a new LGG family regression model. We demonstrate that the new regression model can be applied to censored data, since it represents a parametric family of models and can therefore be used effectively in the analysis of survival data. We show that the proposed models can provide consistently better fits in some applications to real data sets.

19.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008, Computational Statistics and Data Analysis 53(1): 137–150) is asymptotically standard Gaussian. By means of a simulation study we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test of Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a, Economics Letters 117(2): 10–13). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate over the entire period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.

20.
For studying and modeling the time to failure of a system or component, many reliability practitioners have used the hazard rate and its monotone behavior. Nowadays, however, there are two problems: first, modern components have high reliability, and second, their failure-time distributions often have non-monotone hazard rates, as with the truncated normal, Burr XII, and inverse Gaussian distributions. Modeling these data with hazard rate models therefore seems too stringent. Zimmer et al. (1998, J. Qual. Technol. 30(4): 376–385) and Wang et al. (2003, Commun. Statist. Theor. Meth. 32(11): 2227–2244; 2008, in Ahsanullah (ed.), Applied Statistics Research Progress, pp. 11–20) introduced and studied a new time-to-failure model for continuous distributions based on the log-odds rate (LOR), which is comparable to the model based on the hazard rate.

Many components and devices in industry have discrete failure-time distributions with non-monotone hazard rates, so in this article we introduce the discrete log-odds rate, which differs from its analog in the continuous case. An alternative discrete reversed hazard rate, which we call the second reversed rate of failure in discrete time, is also defined. It is shown that failure-time distributions can be characterized by the discrete LOR. Moreover, we show that the discrete logistic and log-logistic distributions have a constant discrete LOR with respect to t and ln t, respectively. Furthermore, properties of some distributions with monotone discrete LOR, such as the discrete Burr XII, discrete Weibull, and discrete truncated normal, are obtained.
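The article's exact definition of the discrete log-odds rate is not given in the abstract (and is said to differ from the naive continuous analog); purely for illustration, the sketch below uses the first difference of the log-odds function log[F(t)/S(t)] and shows that the log-odds function characterizes the distribution.

```python
import numpy as np

def discrete_log_odds(pmf):
    """Log-odds function log[F(t)/S(t)] of a discrete failure time on t = 1, 2, ...,
    where F is the cdf and S = 1 - F; points where S is numerically zero are dropped."""
    F = np.cumsum(np.asarray(pmf, dtype=float))
    S = 1.0 - F
    keep = S > 1e-12
    return np.log(F[keep] / S[keep])

# toy usage: a discrete Weibull cdf F(t) = 1 - q^(t^beta) on t = 1, ..., 30
t = np.arange(1, 31)
q, beta = 0.9, 1.5
F = 1 - q ** (t ** beta)
pmf = np.diff(np.concatenate([[0.0], F]))
log_odds = discrete_log_odds(pmf)
lor = np.diff(log_odds)                  # illustrative discrete log-odds rate at t = 2, 3, ...
print(np.round(lor, 3))

# the log-odds function characterizes the distribution: F(t) = odds(t) / (1 + odds(t))
odds = np.exp(log_odds)
print(np.allclose(odds / (1 + odds), F[: len(odds)]))   # True
```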
