20 similar records found; search took 156 ms.
1.
The log-Birnbaum-Saunders regression model introduced by Rieck and Nedelman (1991) is useful for modeling lifetimes of materials and equipment subject to different conditions. Our goal in this article is twofold. First, we numerically evaluate the finite-sample performances of the likelihood ratio, score, and Wald tests in the log-Birnbaum-Saunders regression model. Second, we introduce a RESET-like misspecification test for that model. The null hypothesis that the model is correctly specified is tested against the alternative hypothesis of model misspecification. The power of the test is evaluated using Monte Carlo simulations. Bootstrap-based inference is also considered. An empirical application is presented and discussed.
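As a rough illustration of the RESET idea this abstract builds on (refit the model with powers of its fitted values added, then test the added terms), here is a minimal ordinary-least-squares sketch. The article works in the log-Birnbaum-Saunders model, so the OLS framing, function name, and simulated data below are purely illustrative assumptions, not the authors' procedure:

```python
import numpy as np
from scipy import stats

def reset_test(y, X, powers=(2, 3)):
    """RESET-style misspecification test for a linear model.

    Fit y on X, refit adding powers of the fitted values, and run an
    F-test on the added terms; a small p-value signals misspecification.
    """
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(Xc, y, rcond=None)[0]
    yhat = Xc @ beta
    rss0 = np.sum((y - yhat) ** 2)                       # restricted RSS
    Xa = np.column_stack([Xc] + [yhat ** p for p in powers])
    beta_a = np.linalg.lstsq(Xa, y, rcond=None)[0]
    rss1 = np.sum((y - Xa @ beta_a) ** 2)                # augmented RSS
    q = len(powers)
    df2 = n - Xa.shape[1]
    F = ((rss0 - rss1) / q) / (rss1 / df2)
    p_value = stats.f.sf(F, q, df2)
    return F, p_value

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)   # correctly specified linear model
F, p = reset_test(y, x)
```

Under the null, as here, the p-value should be roughly uniform; a neglected quadratic term, for example, drives it toward zero.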
2.
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null-intercept errors-in-variables regression model in which the explanatory and response variables are subject to measurement errors and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, taking the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999), and a quality control data set.
3.
In this article, we consider two shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common assumption for the frailty distribution; to compare against the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit the two models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using a Bayesian model selection criterion, and a well-fitting model is suggested for the acute leukemia data.
4.
Viswanathan Ramakrishnan 《Communications in Statistics - Simulation and Computation》2013,42(3):405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects of a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
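The classical Levin-type attributable fraction that the article adapts is AF = p(OR − 1) / (1 + p(OR − 1)), with the odds ratio standing in for the relative risk when the outcome is rare. A minimal sketch (the twin-specific adaptation of the article is not reproduced; the function name and numbers are illustrative):

```python
def attributable_fraction(odds_ratio, prevalence):
    """Levin-type population attributable fraction.

    AF = p(OR - 1) / (1 + p(OR - 1)), using the odds ratio in place
    of the relative risk, a common approximation for rare outcomes.
    """
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# Exposure with OR = 3 present in 25% of the population:
af = attributable_fraction(odds_ratio=3.0, prevalence=0.25)  # 0.5 / 1.5 = 1/3
```

An OR of 1 gives AF = 0, as expected: a risk factor with no association contributes nothing.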
5.
Junyong Park Jayson D. Wilbur Jayanta K. Ghosh Cindy H. Nakatsu Corinne Ackerman 《Communications in Statistics - Simulation and Computation》2013,42(4):855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with small misclassification error used to validate the proposed methods of variable selection. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one simulated and three real-life examples.
6.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: the fudge factor is introduced into the test statistic to deflate test statistics made large by the small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and the FDR control of the considered methods.
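The fudge-factor adjustment amounts to adding a constant s0 to the denominator of the two-sample t-type statistic, d = (mean1 − mean2) / (s + s0), so that genes with tiny standard errors no longer dominate. A minimal per-gene sketch (function name and data are illustrative; SAM's data-driven choice of s0 is exactly what the article's comparison is about and is not reproduced):

```python
import numpy as np

def sam_statistic(group1, group2, s0=0.0):
    """SAM-style modified t statistic for one gene.

    d = (mean1 - mean2) / (s + s0), where s is the usual pooled
    standard error and s0 is the fudge factor that deflates
    statistics inflated by tiny standard errors.
    """
    n1, n2 = len(group1), len(group2)
    m1, m2 = np.mean(group1), np.mean(group2)
    pooled_var = (np.sum((group1 - m1) ** 2)
                  + np.sum((group2 - m2) ** 2)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return (m1 - m2) / (s + s0)

g1 = np.array([2.1, 2.0, 2.2, 1.9])
g2 = np.array([1.0, 1.1, 0.9, 1.0])
d_plain = sam_statistic(g1, g2, s0=0.0)   # ordinary two-sample t
d_fudged = sam_statistic(g1, g2, s0=0.5)  # shrunk toward zero by the fudge factor
```

With s0 = 0 the statistic reduces to the ordinary pooled t; any positive s0 strictly shrinks it toward zero.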
7.
Tony Vangeneugden Geert Verbeke Clarice G.B. Demétrio 《Journal of Applied Statistics》2011,38(2):215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which simultaneously generalizes the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. Beyond the general situation, some important special cases such as exchangeable clustered outcomes are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
8.
James R. Schott 《Communications in Statistics - Theory and Methods》2017,46(12):6112-6118
The allometric extension model is a multivariate regression model recently proposed by Tarpey and Ivey (2006). This model holds when the matrix of covariances between the variables in the response vector y and the variables in the vector of regressors x has a particular structure. In this paper, we consider tests of hypotheses for this structure when (y′, x′)′ has a multivariate normal distribution. In particular, we investigate the likelihood ratio test and a Wald test.
9.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by including a spatial lag term. The estimation method uses the generalized moments approach suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
10.
Shesh N. Rai Jianmin Pan Xiaobin Yuan Jianguo Sun Melissa M. Hudson Deo K. Srivastava 《Communications in Statistics - Theory and Methods》2013,42(17):3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one of the objectives being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We apply inference procedures based on current status data to our motivating example, with very interesting findings.
11.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki Areski Cousin Stéphane Crépey Alexander Herbertsson 《Communications in Statistics - Theory and Methods》2014,43(7):1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow satisfactory calibration to be achieved even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
12.
The authors derive the analytic expressions for the mean and variance of the log-likelihood ratio for testing equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that the two approximations and the original beta approximation of Neyman and Pearson (1931) are all accurate, and the gamma approximation is the most accurate.
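For concreteness, the statistic being approximated is −2 log Λ for H0: all k normal populations share the same mean and variance; the textbook large-sample reference distribution is chi-square on 2(k − 1) degrees of freedom. A minimal sketch using MLE (1/n) variances throughout (the article's refined chi-square, gamma, and beta approximations are not reproduced):

```python
import numpy as np
from scipy import stats

def lr_equality_normal(samples):
    """-2 log likelihood ratio for H0: the k normal populations are
    identical (equal means and equal variances), with the standard
    chi-square approximation on 2(k - 1) degrees of freedom.
    """
    k = len(samples)
    N = sum(len(s) for s in samples)
    pooled = np.concatenate(samples)
    var0 = np.mean((pooled - pooled.mean()) ** 2)   # MLE variance under H0
    loglam = -0.5 * N * np.log(var0)
    for s in samples:
        vi = np.mean((s - s.mean()) ** 2)           # per-sample MLE variance
        loglam += 0.5 * len(s) * np.log(vi)
    stat = -2.0 * loglam                            # N log var0 - sum n_i log v_i
    return stat, stats.chi2.sf(stat, 2 * (k - 1))

rng = np.random.default_rng(1)
samples = [rng.normal(0.0, 1.0, 50) for _ in range(3)]   # H0 true here
stat, p = lr_equality_normal(samples)
```

Because H0 is nested in the alternative, Λ ≤ 1 and the statistic is nonnegative by construction.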
13.
Abouzar Bazyari 《Communications in Statistics - Simulation and Computation》2017,46(9):7194-7209
Testing homogeneity of multivariate normal mean vectors under an order restriction, when the covariance matrices are unknown, arbitrary positive definite, and unequal, is considered. This testing problem has been studied to some extent, for example, by Kulatunga and Sasabuchi (1984) when the covariance matrices are known, and by Sasabuchi et al. (2003) and Sasabuchi (2007) when the covariance matrices are unknown but common. In this paper, a test statistic is proposed. Because the main advantage of a bootstrap test is that it avoids deriving the complex null distribution analytically, a bootstrap version of the test statistic is derived; since the proposed statistic is location invariant, the bootstrap p-value is well defined, and steps for estimating it are presented. Our numerical studies via Monte Carlo simulation show that the proposed bootstrap test correctly controls the type I error rates. The power of the test for some p-dimensional normal distributions is computed by Monte Carlo simulation, and the null distribution of the test statistic is estimated using a kernel density. Finally, the bootstrap test is illustrated using a real data set.
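The bootstrap p-value estimation mentioned here follows a generic recipe: resample under the null, recompute the statistic, and report the exceedance proportion. A minimal sketch of that recipe with a toy statistic (the article's order-restricted multivariate statistic is not reproduced; all names and numbers below are illustrative):

```python
import numpy as np

def bootstrap_p_value(statistic, sample_under_null, t_obs, B=999, seed=0):
    """Generic bootstrap p-value: draw B resamples under the null,
    recompute the statistic, and report the exceedance proportion.
    The '+1' correction keeps the estimate away from exactly zero.
    """
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(B):
        if statistic(sample_under_null(rng)) >= t_obs:
            exceed += 1
    return (1 + exceed) / (B + 1)

# Toy example: H0 says the mean is 0; the statistic is |sample mean|.
data = np.array([0.1, -0.2, 0.15, 0.05])
t_obs = abs(np.mean(data))
p = bootstrap_p_value(
    statistic=lambda s: abs(np.mean(s)),
    sample_under_null=lambda rng: rng.normal(0.0, 0.15, size=len(data)),
    t_obs=t_obs,
)
```

Location invariance matters for exactly the reason the abstract cites: it lets resamples be drawn without knowing the unknown common mean.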
14.
The article investigates diagnostic procedures for finite mixture models. The problem is to decide whether given data stem from an exponential distribution or from a finite mixture of such distributions. Recently, three new test approaches have been proposed: the modified likelihood ratio test (MLRT) by Chen et al. (2001), the ADDS test by Mosler and Seidel (2001), and the D-test by Charnigo and Sun (2004). The size and power of these tests are determined by Monte Carlo simulation and their relative merits are evaluated. We conclude that the ADDS test never shows much less power than its competitors and, under some alternatives, in particular low contaminations, considerably more. New tables for the ADDS test are also provided.
15.
Tony Vangeneugden Geert Molenberghs Geert Verbeke Clarice G.B. Demétrio 《Communications in Statistics - Theory and Methods》2014,43(19):4164-4178
In hierarchical data settings, be they of a longitudinal, spatial, multi-level, clustered, or otherwise repeated nature, the association between repeated measurements often attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to the intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion, and allows for closed-form means, variance functions, and correlation functions for a variety of outcome types and link functions. Unfortunately, for binary data with the logit link, closed forms cannot be obtained, in contrast with the probit link, for which they can be derived. We therefore concentrate on the probit case, which is of interest not only in its own right but also as an instrument for approximating the logit case, thanks to the well-known probit-logit ‘conversion.’ Beyond the general situation, some important special cases such as exchangeable clustered outcomes receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
16.
Guangyu Mao 《Econometric Reviews》2018,37(5):491-506
This article is concerned with sphericity tests for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011) and Baltagi et al. (2012), which are based on the within residuals, are not helpful in the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011) and Baltagi et al. (2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
17.
Thomas Parker 《Communications in Statistics - Theory and Methods》2017,46(11):5195-5202
In this note, it is shown that the finite-sample distributions of the Wald, likelihood ratio, and Lagrange multiplier statistics in the classical linear regression model are members of the generalized beta model introduced by McDonald and Xu (1995a). This is useful for examining the properties of these test statistics. For example, this characterization makes it easy to find distribution, quantile, and density functions for each test statistic, makes it clear why Wald tests may overreject the null hypothesis using asymptotic critical values, and formalizes the fact that the Lagrange multiplier statistic follows a distribution with bounded support.
18.
Òscar Jordà 《Econometric Reviews》2013,32(2):243-246
This paper proposes a test of the null hypothesis of periodic stationarity against the alternative of periodic integration. We derive the limiting distribution of the test statistic and its characteristic function, which are the same as those of the test developed in Kwiatkowski, Phillips, Schmidt, and Shin [15]. We find that some parameters, which must be assumed under the alternative, have an important effect on the limiting power, so such parameters should be chosen carefully. A Monte Carlo simulation reveals that the test has reasonable power but may be affected by the lag truncation parameter used for the correction of nuisance parameters.
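The KPSS statistic that this test shares its limiting distribution with has a simple form: η = n⁻² Σ S_t² / s²(l), where S_t are partial sums of demeaned observations and s²(l) is a Newey-West long-run variance estimate whose lag truncation l is the parameter the abstract warns about. A minimal sketch of the level-stationarity version (the periodic extension of the paper is not reproduced):

```python
import numpy as np

def kpss_level_statistic(y, lags=0):
    """KPSS-type statistic for stationarity around a level.

    eta = n^-2 * sum_t S_t^2 / s^2(l), where S_t are partial sums of
    the demeaned series and s^2(l) is a Newey-West long-run variance
    estimate with Bartlett weights and lag truncation l.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    e = y - y.mean()
    S = np.cumsum(e)                          # partial sums of residuals
    s2 = np.sum(e ** 2) / n
    for lag in range(1, lags + 1):
        w = 1.0 - lag / (lags + 1)            # Bartlett weight
        s2 += 2.0 * w * np.sum(e[lag:] * e[:-lag]) / n
    return np.sum(S ** 2) / (n ** 2 * s2)

rng = np.random.default_rng(2)
eta_wn = kpss_level_statistic(rng.normal(size=500))            # small under stationarity
eta_rw = kpss_level_statistic(np.cumsum(rng.normal(size=500))) # diverges under a unit root
```

The statistic stays O(1) under the stationary null and grows with n under an integrated alternative, which is what gives the test its power.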
19.
In this study, we consider multiple comparisons with a control for multivariate normal means. Specifically, we construct a step-up procedure by reference to Dunnett and Tamhane (1992). We derive recursive formulae for determining the critical values of the step-up procedure at a specified significance level and then formulate the power of the test. Finally, we compare the step-up procedure with the single-step procedure proposed by Nakamura and Imada (2005) and the step-down procedure proposed by Imada and Douke (2007) through numerical examples of the power of the test.
20.
Lindeman et al. [12] provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement the variance decomposition procedure advanced by Lindeman et al. [12]. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. [12] variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow for more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of our proposed technique against the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
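The averaging-over-orderings decomposition that this paper builds on can be sketched directly: for each of the p! orderings, record each predictor's incremental R² when it enters, then average. A minimal brute-force sketch, feasible only for small p (function names and simulated data are illustrative; the paper's sensitivity statistics are not reproduced):

```python
import numpy as np
from itertools import permutations

def r_squared(y, X, cols):
    """R^2 from regressing y (with intercept) on the columns in `cols`."""
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def lmg_importance(y, X):
    """Average each predictor's incremental R^2 over all p! orderings
    (the Lindeman-Merenda-Gold decomposition)."""
    p = X.shape[1]
    contrib = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        entered = []
        for j in order:
            contrib[j] += r_squared(y, X, entered + [j]) - r_squared(y, X, entered)
            entered.append(j)
    return contrib / len(orders)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
X[:, 1] += 0.5 * X[:, 0]                       # correlated predictors
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=300)
shares = lmg_importance(y, X)                  # shares sum to the full-model R^2
```

Because the increments telescope within each ordering, the shares sum exactly to the full-model R², which is the sense in which the decomposition is complete.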