Similar Articles
20 similar articles found (search time: 62 ms)
1.
In this article, we consider two shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common choice of frailty distribution; to allow comparison with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit both models to a real bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods, the models are compared using Bayesian model selection criteria, and a well-fitting model is suggested for the acute leukemia data.
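As a sketch of how such bivariate frailty data arise, the following minimal illustration (not the authors' code; all parameter values are made up) simulates paired survival times with a shared gamma frailty acting multiplicatively on a Gompertz baseline hazard h0(t) = b·exp(c·t), using inverse-transform sampling of S(t | z) = exp(−z·(b/c)(e^{ct} − 1)):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_shared_gamma_frailty(n_pairs, b=0.1, c=0.5, theta=1.0, rng=rng):
    """Simulate n_pairs of survival times sharing a gamma frailty.

    Conditional on frailty z, the hazard is z * b * exp(c * t) (Gompertz),
    so S(t | z) = exp(-z * (b / c) * (exp(c * t) - 1)).  Inverting S gives
    t = log(1 + c * e / (b * z)) / c with e ~ Exponential(1).
    theta is the frailty variance: z ~ Gamma(shape=1/theta, scale=theta),
    so E[z] = 1 and Var[z] = theta.
    """
    z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_pairs)
    e = rng.exponential(size=(n_pairs, 2))          # one draw per pair member
    t = np.log1p(c * e / (b * z[:, None])) / c      # shared z within a pair
    return t

times = simulate_shared_gamma_frailty(20000)
within_pair_corr = np.corrcoef(times[:, 0], times[:, 1])[0, 1]
```

For gamma frailty the within-pair dependence corresponds to a Clayton copula with Kendall's tau = theta/(theta + 2), so the simulated pairs should show clearly positive correlation.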

2.
Mudholkar and Srivastava (1999) adapted Mudholkar and Subbaiah's (1980) modified stepwise procedure, using trimmed means in place of means together with appropriate studentization, to construct robust tests for the significance of a mean vector. They concluded that the robust alternatives provide excellent type I error control and a substantial gain in power over Hotelling's T² test for heavy-tailed populations, without significant loss of power when the population is normal. In this paper we adapt the modified stepwise approach to construct simple tests for the significance of the orthant-constrained mean vector of a p-variate normal population with unknown covariance matrix, and also to construct robust tests without assuming normality. The simple normal-theory tests have exact type I error, whereas the robust tests provide reasonable type I error control and a substantial power advantage over Perlman's (1969) likelihood ratio test.
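The building block of such robust procedures is the studentized trimmed mean. Below is a minimal sketch of a one-sample trimmed t in the Tukey-McLaughlin style (trimmed mean over a winsorized standard error); the paper's stepwise multivariate construction layers coordinate-wise statistics of this kind. The sample and trimming proportion are made up for illustration:

```python
import numpy as np

def trimmed_t(x, mu0=0.0, g=0.2):
    """One-sample studentized trimmed mean (Tukey-McLaughlin form).

    g is the trimming proportion per tail; the standard error uses the
    g-winsorized sample standard deviation.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(np.floor(g * n))                 # observations trimmed per tail
    tmean = x[k:n - k].mean()
    # Winsorize: replace the k smallest/largest values by boundary values.
    w = x.copy()
    w[:k] = x[k]
    w[n - k:] = x[n - k - 1]
    s_w = w.std(ddof=1)
    se = s_w / ((1.0 - 2.0 * g) * np.sqrt(n))
    return (tmean - mu0) / se

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(1.0, 1.0, 40), [25.0, -30.0]])  # gross outliers
stat = trimmed_t(sample, mu0=0.0)
```

The two gross outliers are trimmed away, so the statistic reflects the bulk of the data sitting above zero, which is exactly the robustness property exploited against heavy-tailed populations.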

3.
In hierarchical data settings, be they longitudinal, spatial, multi-level, clustered, or otherwise repeated, the association between repeated measurements often attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to the intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion, and allows closed-form means, variance functions, and correlation functions for a variety of outcome types and link functions. Unfortunately, for binary data with the logit link, closed forms cannot be obtained; this is in contrast with the probit link, for which they can be derived. We therefore concentrate on the probit case, which is of interest not only in its own right but also as an instrument to approximate the logit case, thanks to the well-known probit-logit 'conversion.' Next to the general situation, important special cases such as exchangeable clustered outcomes receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
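To illustrate the kind of closed form available under the probit link, consider the simplest random-intercept probit model Y_ij = 1{η + b_i + ε_ij > 0} with b_i ~ N(0, d) and standard normal ε. Marginally P(Y = 1) = Φ(η/√(1+d)), and the pairwise joint probability is a bivariate normal orthant probability with latent correlation d/(1+d), giving the marginal correlation sketched below. This is a standard textbook special case, not the paper's most general family:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def probit_marginal_correlation(eta, d):
    """Marginal correlation of two binary outcomes that share a N(0, d)
    random intercept in a probit model with linear predictor eta."""
    c = eta / np.sqrt(1.0 + d)          # marginal probit score
    rho = d / (1.0 + d)                 # latent-scale correlation
    p = norm.cdf(c)
    p11 = multivariate_normal.cdf([c, c], mean=[0.0, 0.0],
                                  cov=[[1.0, rho], [rho, 1.0]])
    return (p11 - p * p) / (p * (1.0 - p))

corr = probit_marginal_correlation(eta=0.0, d=1.0)
```

For eta = 0 and d = 1 the orthant formula Φ₂(0, 0; ρ) = 1/4 + arcsin(ρ)/(2π) gives (1/3 − 1/4)/(1/4) = 1/3 exactly, which the numerical evaluation reproduces.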

4.
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited for binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with a small misclassification error used to validate the proposed selection. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
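A minimal sketch of the flavor of method described: AdaBoost with one-variable decision stumps on binary features, where each round's chosen feature and weight give a crude variable-importance ranking. This illustrates the general boosting-for-binary-variables idea only; it is not any of the authors' three methods, and the data are synthetic:

```python
import numpy as np

def adaboost_binary(X, y, n_rounds=10):
    """AdaBoost with one-variable stumps on a 0/1 feature matrix X.

    y must be coded -1/+1.  Returns the stump list (feature, polarity,
    alpha) and a per-feature importance score (sum of alphas).
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    model, importance = [], np.zeros(p)
    for _ in range(n_rounds):
        best = None
        for j in range(p):
            for polarity in (1, -1):
                pred = polarity * (2 * X[:, j] - 1)      # {0,1} -> {-1,+1}
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, polarity, pred)
        err, j, polarity, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)            # guard log of 0
        alpha = 0.5 * np.log((1 - err) / err)
        model.append((j, polarity, alpha))
        importance[j] += alpha
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        if err < 1e-9:                                   # perfect stump found
            break
    return model, importance

def predict(model, X):
    score = sum(alpha * polarity * (2 * X[:, j] - 1)
                for j, polarity, alpha in model)
    return np.where(score >= 0, 1, -1)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 20))
y = 2 * X[:, 3] - 1                     # label driven entirely by variable 3
model, importance = adaboost_binary(X, y)
```

The importance vector concentrates on the label-driving variable, which is the sense in which boosting doubles as a variable selector here.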

5.
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimation of these two effects has not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
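For a rough sense of the quantity involved: Levin's attributable fraction combines the exposure prevalence p with the relative risk RR (the odds ratio is often substituted for RR when the disease is rare). A minimal sketch with made-up numbers:

```python
def attributable_fraction(prevalence, rr):
    """Levin's population attributable fraction.

    AF = p (RR - 1) / (1 + p (RR - 1)), where p is the prevalence of the
    risk factor and RR the relative risk (odds ratio for rare diseases).
    """
    excess = prevalence * (rr - 1.0)
    return excess / (1.0 + excess)

# e.g. a factor present in 30% of the population that doubles risk
af = attributable_fraction(0.3, 2.0)
```

Here AF = 0.3/1.3 ≈ 0.231: about 23% of cases are attributable to the factor. The article's twist is to plug twin-based odds ratios into this formula so that AF quantifies heritability and shared environment.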

6.
Vangeneugden et al. (2007) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on combinations of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. (2007), which simultaneously generalizes the Poisson-normal GLMM and conventional overdispersion models, in particular the negative-binomial model; it flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing exact intra-sequence correlation expressions. Next to the general situation, important special cases such as exchangeable clustered outcomes are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2007). Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
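A flavor of such closed forms in the count case: for a shared multiplicative gamma random effect u with mean 1 and variance σ², and Y_j | u ~ Poisson(uλ) within a cluster, Var(Y) = λ + σ²λ² and Cov(Y₁, Y₂) = σ²λ², so the intra-cluster correlation is σ²λ/(1 + σ²λ). The sketch below checks this by Monte Carlo; note it is the simpler gamma-only special case, not the paper's full combined model:

```python
import numpy as np

def count_icc(lam, sigma2):
    """Exact intra-cluster correlation for Y_j | u ~ Poisson(u * lam),
    with u ~ Gamma (mean 1, variance sigma2) shared within a cluster."""
    return sigma2 * lam / (1.0 + sigma2 * lam)

rng = np.random.default_rng(7)
lam, sigma2, n = 3.0, 0.5, 200_000
u = rng.gamma(shape=1.0 / sigma2, scale=sigma2, size=n)
y = rng.poisson(u[:, None] * lam, size=(n, 2))       # two counts per cluster
mc_corr = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
exact = count_icc(lam, sigma2)
```

With λ = 3 and σ² = 0.5 the exact value is 1.5/2.5 = 0.6, and the simulated correlation agrees closely.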

7.
This paper considers the estimation of the parameters of AR(p) models for time series with t-distributed errors via EM-based algorithms. The paper develops asymptotic properties of the estimators to show that they are efficient, and testing theory for the estimators is also considered. The robustness of the estimators and of various tests to deviations from the assumed model is investigated. The study shows that the algorithms retain their estimation efficiency even if the error distribution is misspecified or perturbed by outliers. Interestingly, the estimators from these algorithms performed better than those of the modified maximum likelihood (MML) approach considered in Tiku et al. (2000).
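The EM machinery involved can be illustrated on its simplest special case: estimating the location and scale of a t-distributed sample by iterative reweighting, where the E-step weights w_i = (ν+1)/(ν + z_i²) automatically downweight outliers. This is a hedged sketch of the mechanism only, not the paper's AR(p) algorithm, and the data are a fixed artificial sample:

```python
import numpy as np

def t_location_scale_em(x, nu=3.0, n_iter=100):
    """EM estimates of (mu, sigma) for x_i ~ mu + sigma * t_nu.

    E-step: w_i = (nu + 1) / (nu + ((x_i - mu)/sigma)^2)
    M-step: mu = weighted mean, sigma^2 = mean of w_i * (x_i - mu)^2.
    """
    x = np.asarray(x, dtype=float)
    mu, sigma = np.median(x), x.std() + 1e-12
    for _ in range(n_iter):
        z2 = ((x - mu) / sigma) ** 2
        w = (nu + 1.0) / (nu + z2)
        mu = np.sum(w * x) / np.sum(w)
        sigma = np.sqrt(np.mean(w * (x - mu) ** 2))
    return mu, sigma

# symmetric bulk around 0 plus one gross outlier at 50
contaminated = np.append(np.linspace(-2.0, 2.0, 101), 50.0)
mu_hat, sigma_hat = t_location_scale_em(contaminated)
```

The outlier shifts the sample mean to about 0.49, while the EM location estimate stays essentially at the center of the bulk; the same reweighting is what makes t-based AR(p) estimators robust to innovation outliers.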

8.
Micheas and Dey (2003) reconciled classical and Bayesian p-values in the one-sided location parameter testing problem. In this article, the classical p-value is reconciled with the prior predictive p-value for the two-sided location parameter testing problem, by proving that the classical p-value coincides with the infimum of prior predictive p-values as the prior ranges over different classes of priors.

9.
New drug discovery in pediatrics has dramatically improved survival, but at the cost of long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
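With current status data, each survivor is examined once, at time C_i, yielding only the indicator δ_i = 1{T_i ≤ C_i}; the NPMLE of the cumulative incidence F is then the isotonic regression of the δ's ordered by examination time, computable with the pool-adjacent-violators algorithm (PAVA). A minimal sketch of this generic building block (not the paper's full inference procedure), on a made-up toy sample:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of a
    nondecreasing sequence to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block holds (mean, weight, count); merge while order is violated.
    means, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            means.append((w1 * m1 + w2 * m2) / (w1 + w2))
            weights.append(w1 + w2); counts.append(c1 + c2)
    return np.repeat(means, counts)

# examination times (already sorted) and current-status indicators
exam_times = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
delta = np.array([0, 0, 1, 0, 1, 1])
F_hat = pava(delta)           # NPMLE of F at the examination times
```

The violating pair (1 at t = 3, 0 at t = 4) is pooled to 0.5, producing a nondecreasing step-function estimate of the cumulative incidence.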

10.
Zero-inflated Poisson mixed regression models are popular approaches for analyzing clustered count data with excess zeros. Prior to applying these models, it is essential to examine the necessity of adjusting for zero outcomes. The existing literature, however, has focused only on score tests for assessing the suitability of zero-inflated models for correlated count data. In view of the observed bias and non-optimal size of score tests, other alternatives deserve further investigation. This article explores the use of the null Wald and likelihood ratio tests for zero-inflation in correlated count data. Our simulation study shows that both the null Wald and likelihood ratio tests outperform the score test of Xiang et al. (2006) in terms of statistical power, notwithstanding the computational convenience of the score test. A bootstrap null Wald statistic is also proposed, which improves the size and power of the test.
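For the uncorrelated case, the likelihood ratio test for zero inflation is easy to sketch: fit a Poisson and a zero-inflated Poisson by maximum likelihood and compare log-likelihoods (under H0 the mixing weight sits on the boundary, so the null distribution is a 50:50 mixture of χ²₀ and χ²₁ rather than a plain χ²₁). This is a hedged illustration with simulated data, not the paper's mixed-model version:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def poisson_loglik(y):
    lam = y.mean()                          # Poisson MLE of the rate
    return np.sum(-lam + y * np.log(lam) - gammaln(y + 1))

def zip_negloglik(params, y):
    """Negative ZIP log-likelihood, parametrized as (logit pi, log lambda)."""
    pi, lam = expit(params[0]), np.exp(params[1])
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def lr_zero_inflation(y):
    fit = minimize(zip_negloglik, x0=[0.0, np.log(y.mean())],
                   args=(y,), method="Nelder-Mead")
    return 2.0 * (-fit.fun - poisson_loglik(y))

rng = np.random.default_rng(11)
n = 500
structural_zero = rng.random(n) < 0.4       # 40% excess zeros
y = np.where(structural_zero, 0, rng.poisson(3.0, n))
lr = lr_zero_inflation(y)
```

With 40% structural zeros the statistic is far beyond any conventional critical value, illustrating the power the article's simulations compare across test types.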

11.
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow satisfactory calibration to be achieved even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a), Bielecki et al. (2014b), and Bielecki et al. (2014c).

12.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3162-3178
In this article we use a new methodology, based on algebraic strata, to generate the class of all orthogonal arrays of a given size and strength. From this class we extract all the nonisomorphic orthogonal arrays. Then, using these nonisomorphic arrays, we suggest a method, based on the inequivalent-matrices permutation testing procedures of Basso et al. (2004), to obtain separate permutation tests for the effects in unreplicated mixed-level fractional factorial designs. To validate the proposed method we perform a Monte Carlo simulation study and find that the permutation tests appear to be a valid solution for testing effects, in particular when the usual normality assumptions cannot be justified.
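The basic ingredient is the permutation test itself. Here is a minimal two-sample version with exact enumeration of all reassignments (illustrative only; the article's procedure permutes effects within fractional factorial designs, which is more involved):

```python
from itertools import combinations

def exact_permutation_test(x, y):
    """Exact two-sided permutation test for a difference in means."""
    pooled = list(x) + list(y)
    n, n_x = len(pooled), len(x)
    observed = abs(sum(x) / n_x - sum(y) / len(y))
    total = sum(pooled)
    count = n_perm = 0
    for idx in combinations(range(n), n_x):   # every way to relabel groups
        s = sum(pooled[i] for i in idx)
        diff = abs(s / n_x - (total - s) / (n - n_x))
        count += diff >= observed - 1e-12
        n_perm += 1
    return count / n_perm

p_value = exact_permutation_test([5, 6, 7, 8], [1, 2, 3, 4])
```

Here only the observed split and its mirror image reach |difference| = 4, so p = 2/70 ≈ 0.029, with no normality assumption anywhere.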

13.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to the denominator to deflate large test statistics arising from genes with small standard errors. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve power or FDR control compared with the SAM procedure without it. Motivated by the simulation results of Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics, and use simulation studies to investigate the power and FDR control of the considered methods.
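The adjustment in question is simple to state: the SAM statistic replaces the t-statistic denominator s_i by s_i + s0, where s0 is a small positive "fudge factor" (often a percentile of the s_i's), so that genes with tiny standard errors cannot produce huge statistics. A minimal sketch, fixing s0 at the median s_i rather than using SAM's full choice procedure, on synthetic null data:

```python
import numpy as np

def sam_statistics(x1, x2, s0=None):
    """Modified two-sample t statistics d_i = (m1 - m2) / (s_i + s0),
    one per gene (row).  If s0 is None, the median of the s_i is used."""
    n1, n2 = x1.shape[1], x2.shape[1]
    m1, m2 = x1.mean(axis=1), x2.mean(axis=1)
    pooled_var = (((x1 - m1[:, None]) ** 2).sum(axis=1) +
                  ((x2 - m2[:, None]) ** 2).sum(axis=1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    if s0 is None:
        s0 = np.median(s)
    return (m1 - m2) / (s + s0), s

rng = np.random.default_rng(2)
genes, n1, n2 = 1000, 5, 5
x1 = rng.normal(0.0, 1.0, (genes, n1))
x2 = rng.normal(0.0, 1.0, (genes, n2))
d, s = sam_statistics(x1, x2)
d0, _ = sam_statistics(x1, x2, s0=0.0)   # plain t statistics for comparison
```

With s0 > 0 every |d_i| is no larger than the corresponding |t_i|, and the shrinkage is strongest exactly for the small-s_i genes that Lin et al. (2008) focus on.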

14.
This paper is based on the application of a Bayesian model to a clinical trial designed to determine a more effective treatment for lowering mortality rates, and consequently increasing survival times, among patients with lung cancer. In that study, Qian et al. (1996) sought to determine whether a Weibull survival model can be used to decide when to stop a clinical trial; the traditional Gibbs sampler was used to estimate the model parameters. This paper proposes using the independent steady-state Gibbs sampling (ISSGS) approach, introduced by Dunbar et al., to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in this application.
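To fix ideas, here is a textbook two-block Gibbs sampler for a normal model with unknown mean and variance (flat prior on μ, Jeffreys-type prior on σ²): the same alternating-conditionals mechanism that both the traditional sampler and ISSGS refine. This is a generic sketch, not the Weibull survival model of the paper:

```python
import numpy as np

def gibbs_normal(x, n_iter=2000, burn_in=500, seed=4):
    """Gibbs sampler for x_i ~ N(mu, sigma2) with p(mu, sigma2) ∝ 1/sigma2.

    Full conditionals:
      mu | sigma2, x  ~  N(xbar, sigma2 / n)
      sigma2 | mu, x  ~  Inv-Gamma(n/2, sum((x - mu)^2) / 2)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    sigma2 = x.var()
    draws = []
    for it in range(n_iter):
        mu = rng.normal(xbar, np.sqrt(sigma2 / n))
        rate = 0.5 * np.sum((x - mu) ** 2)
        sigma2 = rate / rng.gamma(shape=0.5 * n)   # Inv-Gamma via 1/Gamma
        if it >= burn_in:
            draws.append((mu, sigma2))
    return np.array(draws)

rng = np.random.default_rng(5)
data = rng.normal(10.0, 2.0, 200)
draws = gibbs_normal(data)
mu_post, sigma2_post = draws.mean(axis=0)
```

The posterior means recover the data-generating values closely; ISSGS aims to reduce the serial dependence that such alternating draws exhibit in higher-dimensional models.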

15.
Simard et al. (2000; see also Sona et al., 1997) proposed a transformation distance called the "tangent distance" (TD), which makes pattern recognition efficient. The key idea is to construct a distance measure that is invariant with respect to some chosen transformations. In this research, we provide a method using an adaptive TD, based on an idea inspired by the "discriminant adaptive nearest neighbor" approach (Hastie et al., 2009). The method is relatively simple compared with many more complicated alternatives. A real handwritten-digit recognition data set is used to illustrate the new method. Our results demonstrate that the proposed method gives lower classification error rates than standard implementations of neural networks and support vector machines, and is as good as several more complicated approaches.
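The one-sided tangent distance is just a small least-squares problem: given tangent vectors T spanning the chosen transformations at x, TD(x, y) = min_a ||x + T a − y||, i.e., the distance from y to the tangent plane at x. A minimal sketch with made-up vectors (two-sided and adaptive versions build on this):

```python
import numpy as np

def tangent_distance(x, y, T):
    """One-sided tangent distance: distance from y to the affine
    subspace {x + T a}, with tangent vectors in the columns of T."""
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)
    return np.linalg.norm(x + T @ a - y)

x = np.array([1.0, 0.0, 0.0, 2.0])
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])        # two (made-up) tangent directions at x
y = np.array([1.5, 1.0, 0.5, 2.5])
td = tangent_distance(x, y, T)
euclid = np.linalg.norm(x - y)
```

Because a = 0 is always feasible, TD never exceeds the Euclidean distance, and it vanishes whenever y is a transformed version of x within the tangent approximation.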

16.
This study is mainly concerned with estimating the shift parameter in the two-sample location problem. The proposed smoothed Mann-Whitney-Wilcoxon method smooths the empirical distribution function of each sample using a convolution technique, replacing the unknown distribution functions F(x) and G(x − Δ0) with smoothed distribution functions F_s(x) and G_s(x − Δ0), respectively. The unknown shift parameter Δ0 is estimated by solving the gradient function S_n(Δ) with respect to an arbitrary variable Δ. The asymptotic properties of the new estimator are established under conditions similar to those of the generalized Wilcoxon procedure proposed by Anderson and Hettmansperger (1996), including asymptotic normality, asymptotic-level confidence intervals, and hypothesis tests for Δ0. The asymptotic relative efficiency of the proposed method with respect to the least squares, generalized Wilcoxon, and Hodges and Lehmann (1963) procedures is also calculated under the contaminated normal model.
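A useful reference point among the competitors is the Hodges-Lehmann (1963) shift estimator, the median of all pairwise differences Y_j − X_i, which is the value of Δ that centers the unsmoothed Mann-Whitney-Wilcoxon statistic. A minimal sketch with a toy example:

```python
import numpy as np

def hodges_lehmann_shift(x, y):
    """Two-sample Hodges-Lehmann estimator of the shift Delta in
    G(t) = F(t - Delta): the median of all pairwise differences y_j - x_i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diffs = (y[:, None] - x[None, :]).ravel()
    return np.median(diffs)

delta_hat = hodges_lehmann_shift([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The smoothed method of the article replaces the step-function empirical CDFs underlying this estimator with convolution-smoothed versions, so its gradient equation has a continuous solution path rather than jumps at the pairwise differences.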

17.
Sanaullah et al. (2014) suggested generalized exponential chain ratio estimators under a stratified two-phase sampling scheme for estimating the finite population mean. However, the bias and mean square error (MSE) expressions presented in that work need some corrections, and consequently the efficiency comparisons based on them also require correction. In this article, we revisit the Sanaullah et al. (2014) estimator and provide the correct bias and MSE expressions. We also propose an estimator that is more efficient than several competing estimators, including the classes of estimators in Sanaullah et al. (2014). Three real data sets are used for efficiency comparisons.
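The object being generalized is the two-phase (double sampling) ratio estimator: a large first-phase sample supplies a cheap auxiliary mean x̄₁, the second-phase subsample supplies (ȳ₂, x̄₂), and the mean of y is estimated by ȳ₂ · x̄₁ / x̄₂. A minimal sketch of this classical building block with simulated data (not the exponential chain estimators themselves):

```python
import numpy as np

def two_phase_ratio(y2, x2, xbar1):
    """Classical two-phase ratio estimator of the mean of y:
    ybar2 * xbar1 / xbar2, with xbar1 from the larger first-phase sample."""
    y2, x2 = np.asarray(y2, float), np.asarray(x2, float)
    return y2.mean() * xbar1 / x2.mean()

rng = np.random.default_rng(9)
x_phase1 = rng.normal(50.0, 10.0, 2000)          # cheap auxiliary variable
idx = rng.choice(2000, size=100, replace=False)  # second-phase subsample
x_phase2 = x_phase1[idx]
y_phase2 = 2.0 * x_phase2 + rng.normal(0.0, 1.0, 100)  # y roughly 2x
est = two_phase_ratio(y_phase2, x_phase2, x_phase1.mean())
```

Because y is nearly proportional to x, the ratio estimator transfers the first-phase precision in x̄₁ to the estimate of the mean of y (true value 100 here).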

18.
Many articles that estimate models with forward-looking expectations report that the coefficient of the expectations term is very large compared with the effects coming from past dynamics. This has sometimes been regarded as implausible, and has led to the feeling that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation; an alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficient of the expectations variable using a model in which we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias, and note that structural change can affect the quality of instruments. We also examine the empirical work of Castle et al. (2014) on the new Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. (2014) highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the fact that there are weak instruments in the estimated re-specified model, the forward coefficient estimate actually appears quite high rather than low.
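The weak-instrument diagnosis can be reproduced in miniature: with a single endogenous regressor, the just-identified IV estimator is β̂ = (z'y)/(z'x), and instrument strength is read off the first-stage F statistic. The sketch below simulates an endogenous regressor for which OLS is biased while a strong instrument recovers the truth; all parameter values are made up, and this is not the NKPC model itself:

```python
import numpy as np

rng = np.random.default_rng(8)
n, beta_true = 5000, 1.5
z = rng.normal(size=n)                   # instrument
v = rng.normal(size=n)                   # first-stage error
u = 0.8 * v + 0.6 * rng.normal(size=n)   # structural error, correlated with v
x = 0.8 * z + v                          # endogenous regressor
y = beta_true * x + u

beta_ols = (x @ y) / (x @ x)
beta_iv = (z @ y) / (z @ x)              # just-identified IV estimator

# First-stage F: squared t-statistic of z in the regression of x on z.
gamma = (z @ x) / (z @ z)
resid = x - gamma * z
se_gamma = np.sqrt(resid @ resid / (n - 1) / (z @ z))
first_stage_F = (gamma / se_gamma) ** 2
```

Here OLS is biased upward by the u-v correlation while IV is consistent, and a first-stage F far above the rule-of-thumb threshold of 10 signals a strong instrument; the article's point is that when F is small, the expectations coefficient inherits exactly this kind of distortion.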

19.
Lindeman et al. (1980) provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging the squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement this variance decomposition procedure. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. (1980) variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of the proposed technique relative to the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
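The averaging-over-orderings decomposition (often called LMG) is easy to state computationally: for each of the p! entry orders, record each predictor's increment to R² when it enters, then average the increments. A brute-force sketch for small p on simulated correlated predictors:

```python
import numpy as np
from itertools import permutations

def r_squared(X, y, cols):
    """R^2 of the OLS regression of y on an intercept plus X[:, cols]."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def lmg_importance(X, y):
    """Average each predictor's R^2 increment over all p! entry orders."""
    p = X.shape[1]
    contrib = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        entered, r2_prev = [], 0.0
        for j in order:
            entered.append(j)
            r2 = r_squared(X, y, entered)
            contrib[j] += r2 - r2_prev
            r2_prev = r2
    return contrib / len(orders)

rng = np.random.default_rng(6)
n = 300
X = rng.normal(size=(n, 3))
X[:, 1] = 0.7 * X[:, 0] + 0.7 * rng.normal(size=n)   # correlated predictors
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
shares = lmg_importance(X, y)
```

By construction the shares sum exactly to the full-model R², which is what makes the decomposition attractive despite its p! cost; the article's sensitivity statistics diagnose how each share varies across the orderings being averaged.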

20.
Adaptive designs find an important application in the estimation of unknown percentiles of an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to simultaneously estimate multiple percentiles of an unknown dose-response curve via generalized Polya urns. In this article, we examine the properties of that design when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss of efficiency compared with the original compound urn design.
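The urn mechanism can be sketched generically: each dose has a ball count, the next subject's dose is drawn with probability proportional to the counts, and the observed response adds balls according to an updating rule. The toy rule below (add a ball one dose higher after a safe outcome, one lower after a toxic one) is invented purely for illustration and is much simpler than the compound urn of Mugno et al. (2004):

```python
import numpy as np

def urn_trial(tox_probs, n_subjects, seed=10):
    """Simulate a generalized-Polya-urn dose-allocation trial.

    tox_probs[k] is the true toxicity probability at dose k.  Returns the
    number of subjects allocated to each dose.
    """
    rng = np.random.default_rng(seed)
    K = len(tox_probs)
    balls = np.ones(K)                      # start with one ball per dose
    allocations = np.zeros(K, dtype=int)
    for _ in range(n_subjects):
        dose = rng.choice(K, p=balls / balls.sum())
        allocations[dose] += 1
        toxic = rng.random() < tox_probs[dose]
        # Invented updating rule: escalate after a safe outcome,
        # de-escalate after a toxic one (clipped at the boundary doses).
        target = max(dose - 1, 0) if toxic else min(dose + 1, K - 1)
        balls[target] += 1.0
    return allocations

alloc = urn_trial([0.05, 0.1, 0.3, 0.6, 0.9], n_subjects=200)
```

Delayed responses enter such a scheme naturally: an outcome observed late simply updates the urn late, which is exactly the perturbation whose effect the article studies under varying group sizes.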
