1.
Pao-Sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(10): 2295-2307
Cai and Zeng (2011) proposed an additive mixed effect model to analyze clustered right-censored data. In this article, we demonstrate that the approach of Cai and Zeng (2011) can be extended to clustered doubly censored data. Furthermore, when both left- and right-censoring variables are always observed, we propose alternative estimators using the approach of Cai and Cheng (2004). A simulation study is conducted to investigate the performance of the proposed estimators.
2.
Pao-Sheng Shen, Communications in Statistics - Theory and Methods, 2013, 42(22): 4096-4106
In this article, we consider the estimation of the distribution function for one modified form of current status data. An inverse-probability-weighted (IPW) estimator and a self-consistent estimator (SCE) are proposed. The asymptotic properties of the IPW estimator are derived. A simulation study is conducted to compare the performance of the IPW estimator, the SCE, and the product-limit estimator proposed by Patilea and Rolin (2006). Simulation results indicate that when right censoring is light and left censoring is heavy, both the IPW estimator and the SCE can outperform the product-limit estimator. The performances of the IPW estimator and the SCE are close to each other.
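As a concrete illustration of the IPW idea, the sketch below estimates a distribution function under right censoring by weighting each uncensored observation with the inverse of the censoring survival probability. This is a generic IPW construction under simplifying assumptions (the censoring distribution is treated as known), not the article's estimator for modified current status data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t_event = rng.exponential(1.0, n)        # event times, true F(t) = 1 - exp(-t)
c = rng.exponential(2.0, n)              # censoring times, G(t) = P(C > t) = exp(-t/2)
obs = np.minimum(t_event, c)             # observed time
delta = (t_event <= c).astype(float)     # 1 if the event is observed

def ipw_cdf(t, obs, delta, surv_c):
    # IPW estimate of F(t): average of delta * 1{T <= t} / G(T),
    # with the censoring survival function G assumed known for clarity
    return np.mean(delta * (obs <= t) / surv_c(obs))

F_hat = ipw_cdf(1.0, obs, delta, lambda t: np.exp(-t / 2.0))
```

With the censoring distribution known, the estimate is unbiased for F(t); in practice G would itself be estimated, e.g. by a Kaplan-Meier fit to the censoring times.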
3.
Joseph V. Terza, Econometric Reviews, 2013, 32(6): 555-580
Based on the insightful work of Olsen (1980) for the linear context, a generic and unifying framework is developed that affords a simple extension of the classical method of Heckman (1974, 1976, 1978, 1979) to a broad class of nonlinear regression models involving endogenous switching and its two most common incarnations, endogenous sample selection and endogenous treatment effects. The approach should be appealing to applied researchers for three reasons. First, econometric applications involving endogenous switching abound. Second, the approach requires neither linearity of the regression function nor full parametric specification of the model. It can, in fact, be applied under minimal parametric assumptions, i.e., specification of only the conditional means of the outcome and switching variables. Finally, it is amenable to relatively straightforward estimation methods. Examples of applications of the method are discussed.
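The classical linear Heckman correction that this framework generalizes can be sketched in a few steps: fit a first-stage probit for selection, form the inverse Mills ratio, and add it as a regressor in the second-stage outcome equation. The simulation design below (coefficients, correlation 0.7) is purely illustrative:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
u = rng.normal(size=n)
e = 0.7 * u + np.sqrt(1 - 0.49) * rng.normal(size=n)   # corr(u, e) = 0.7
s = (0.5 + 1.0 * x + u > 0)                            # selection indicator
y = 1.0 + 2.0 * x + e                                  # outcome, observed only when s

def probit_nll(g):
    # negative log-likelihood of the first-stage probit
    p = np.clip(norm.cdf(g[0] + g[1] * x), 1e-10, 1 - 1e-10)
    return -np.sum(s * np.log(p) + (~s) * np.log(1 - p))

g_hat = minimize(probit_nll, x0=np.zeros(2), method="BFGS").x
w = g_hat[0] + g_hat[1] * x
imr = norm.pdf(w) / norm.cdf(w)                        # inverse Mills ratio

# naive OLS on the selected sample: slope biased away from 2
X_naive = np.column_stack([np.ones(s.sum()), x[s]])
b_naive = np.linalg.lstsq(X_naive, y[s], rcond=None)[0]

# Heckman second step: add the inverse Mills ratio as a regressor
X_heck = np.column_stack([np.ones(s.sum()), x[s], imr[s]])
b_heck = np.linalg.lstsq(X_heck, y[s], rcond=None)[0]
```

The corrected slope lands near the true value 2, while the naive slope carries the selection bias; the coefficient on the Mills ratio estimates the error covariance.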
4.
ABSTRACT This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the “multicollinearity problem” identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982), are robust to nonnormality but have very little power.
5.
ABSTRACT This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of USA output growth.
6.
Block and Savits (1980) established a characterization of life distributions using the Laplace transform. In this article, we remark that one of the necessary conditions for a distribution to be IFRA is equivalent to the star ordering of exponential mixtures. It leads to the definition of two new classes of life distributions, called LIFR and LIFRA, and their dual classes, LDFR and LDFRA. It turns out that these classes have many useful aging properties and are preserved under known reliability operations. Properties of the classes are studied and relations with known classes are established.
7.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of earlier models by Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.
8.
In this article, the problems of testing homogeneity of several exponential location parameters against simple and tree ordered alternatives are considered separately. Test procedures for both alternatives are proposed using restricted maximum likelihood estimators (RMLEs) of the exponential location parameters under the respective orderings. Critical constants for the implementation of the proposed procedures are tabulated. A power comparison of the proposed test procedure under the simple ordered alternative with the procedures of Chen (1982) and of Dhawan and Gill (1999) is carried out using Monte Carlo simulation.
9.
Hu Yang, Communications in Statistics - Theory and Methods, 2013, 42(20): 3204-3215
Liu (2003) proposed the Liu-type estimator (LTE) to combat the well-known multicollinearity problem in linear regression. In this article, several respects in which the LTE fits better than the ordinary ridge regression estimator (Hoerl and Kennard, 1970) are considered. In particular, we derive two methods to determine the parameter d for the LTE and find that the ridge parameter k can serve to regularize an ill-conditioned design matrix, while the other parameter d can be used to tune the quality of fit. In addition, the regression coefficients, the coefficient of multiple determination, the residual error variance, and the generalized cross-validation (GCV) measure of prediction quality are very stable and, as the ridge parameter increases, eventually reach asymptotic levels, which produces robust regression models. Furthermore, a Monte Carlo evaluation of these features is given to illustrate some of the theoretical results.
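A minimal sketch of the two estimators being compared: the ridge estimator of Hoerl and Kennard, and one common parameterization of a Liu-type estimator with parameters k and d (this parameterization is an assumption for illustration; Liu (2003) gives the precise form). Setting d = 0 recovers ridge:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 4
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))   # highly collinear design matrix
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta + rng.normal(size=n)

XtX, Xty = X.T @ X, X.T @ y
b_ols = np.linalg.solve(XtX, Xty)        # OLS: unstable under collinearity

def ridge(k):
    # Hoerl-Kennard ridge estimator: (X'X + kI)^{-1} X'y
    return np.linalg.solve(XtX + k * np.eye(p), Xty)

def liu_type(k, d):
    # One common Liu-type form (assumed here): (X'X + kI)^{-1} (X'y - d * b_ols);
    # k regularizes the ill-conditioned X'X, d tunes the fit; d = 0 gives ridge
    return np.linalg.solve(XtX + k * np.eye(p), Xty - d * b_ols)

b_ridge = ridge(1.0)
b_lte = liu_type(1.0, 0.5)
```

Because ridge shrinks every component of b_ols in the eigenbasis of X'X, its norm is always smaller than the OLS norm, which is what stabilizes the ill-conditioned fit.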
10.
Jenny Häggström, Communications in Statistics - Simulation and Computation, 2013, 42(5): 880-898
We study the validation of prediction rules such as regression models and classification algorithms through two out-of-sample strategies, cross-validation and accumulated prediction error. We use the framework of Efron (1983) where measures of prediction errors are defined as sample averages of expected errors and show through exact finite sample calculations that cross-validation and accumulated prediction error yield different smoothing parameter choices in nonparametric regression. The difference in choice does not vanish as sample size increases.
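One of the two strategies, cross-validation for choosing a smoothing parameter, can be sketched for a Nadaraya-Watson estimator via leave-one-out prediction error (the Gaussian kernel, bandwidth grid, and simulated data are illustrative choices, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

def nw_fit(x0, xs, ys, h):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x0 - xs) / h) ** 2)
    return np.sum(w * ys) / np.sum(w)

def loo_cv(h):
    # leave-one-out cross-validated squared prediction error for bandwidth h
    errs = [(y[i] - nw_fit(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
            for i in range(n)]
    return np.mean(errs)

grid = np.array([0.02, 0.05, 0.1, 0.2, 0.5])
scores = np.array([loo_cv(h) for h in grid])
best_h = grid[np.argmin(scores)]
```

Accumulated prediction error would instead score each observation using only the data observed before it; the article's point is that the two criteria pick different bandwidths even asymptotically.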
11.
Scott A. Roths, Communications in Statistics - Theory and Methods, 2013, 42(9): 1593-1609
Confidence interval construction for the difference of two independent binomial proportions is a well-known problem with a full panoply of proposed solutions. In this paper, we focus largely on the family of intervals proposed by Beal (1987). This family, which includes the Haldane and Jeffreys–Perks intervals as special cases, assumes a symmetric prior distribution for the population proportions p1 and p2. We propose new methods that allow the currently observed data to set the prior distribution by taking a parametric empirical-Bayes approach; in addition, we also provide an investigation of the new intervals' behavior in small-sample situations. Unlike other solutions, our intervals can be used adaptively for experiments conducted in multiple stages over time. We illustrate this notion using data from an Argentinean study involving the Mal Rio Cuarto virus and its transmission to susceptible maize crops.
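For orientation, the baseline Wald interval for p1 - p2 that Beal-type intervals refine can be written in a few lines (this is the standard normal-approximation formula, not Beal's family itself):

```python
import numpy as np
from scipy.stats import norm

def wald_diff_ci(x1, n1, x2, n2, level=0.95):
    # simple Wald interval for p1 - p2: point estimate +/- z * standard error
    p1, p2 = x1 / n1, x2 / n2
    z = norm.ppf(0.5 + level / 2)
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

ci_lo, ci_hi = wald_diff_ci(40, 100, 25, 100)
```

Beal's family replaces the raw proportions in the variance with prior-weighted versions, which is what the paper's empirical-Bayes approach tunes from the observed data.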
12.
In this paper, we investigate the effect of pre-smoothing on model selection. Christóbal et al. [6] showed the beneficial effect of pre-smoothing on estimating the parameters in a linear regression model. Here, in a regression setting, we show that smoothing the response data prior to model selection by Akaike's information criterion can lead to an improved selection procedure. The bootstrap is used to control the magnitude of the random error structure in the smoothed data. The effect of pre-smoothing on model selection is shown in simulations. The method is illustrated in a variety of settings, including the selection of the best fractional polynomial in a generalized linear model.
13.
In this study, we consider multiple comparisons with a control for multivariate normal means. Specifically, we construct a step-up procedure by referring to Dunnett and Tamhane (1992). We derive recursive formulae for determining the critical values of the step-up procedure for a specified significance level. Then we formulate the power of the test. Finally, we compare the step-up procedure with the single-step procedure proposed by Nakamura and Imada (2005) and the step-down procedure proposed by Imada and Douke (2007) through numerical examples of the power of the test.
14.
The proposed test detects deviations from randomness, without a priori distributional assumption, when observations are not independent and identically distributed (i.i.d.), which is suitable for our motivating stock market index data. Departures from i.i.d. are tested by subdividing data into subintervals and then using a conditional probability measure within intervals as a binomial test. This nonparametric test is designed to detect deviations of neighboring observations from randomness when the dataset consists of time series observations. Simulation results and a comparison with Lo and MacKinlay's (1988) variance ratio test showed that our proposed test is a competitive alternative.
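The Lo and MacKinlay (1988) benchmark can be sketched in simplified form: compute the ratio of the variance of q-period sums to q times the one-period variance, and standardize by the i.i.d. asymptotic variance 2(2q-1)(q-1)/(3qn). The overlap and small-sample bias corrections from the original paper are omitted here:

```python
import numpy as np

def variance_ratio(r, q):
    # simplified variance ratio: Var(q-period sums) / (q * Var(1-period returns))
    n = len(r)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    rq = np.array([r[i:i + q].sum() for i in range(n - q + 1)])  # overlapping sums
    varq = np.mean((rq - q * mu) ** 2)
    return varq / (q * var1)

def vr_zstat(r, q):
    # z statistic under the i.i.d. (homoskedastic) null
    n = len(r)
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
    return (variance_ratio(r, q) - 1) / se

rng = np.random.default_rng(4)
z_iid = vr_zstat(rng.normal(size=5000), 4)   # i.i.d. returns: VR near 1

ar = np.empty(5000)                           # positively autocorrelated returns
ar[0] = rng.normal()
for t in range(1, 5000):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()
z_ar = vr_zstat(ar, 4)                        # VR well above 1
```

Positive serial correlation inflates the q-period variance relative to q independent periods, so VR(q) > 1 and the z statistic rejects.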
15.
In practice a degree of uncertainty will always exist concerning what specification to adopt for the deterministic trend function when running unit root tests. While most macroeconomic time series appear to display an underlying trend, it is often far from clear whether this component is best modeled as a simple linear trend (so that long-run growth rates are constant) or by a more complicated nonlinear trend function which may, for instance, allow the deterministic trend component to evolve gradually over time. In this article, we consider the effects on unit root testing of allowing for a local quadratic trend, a simple yet very flexible example of the latter. Where a local quadratic trend is present but not modeled, we show that the quasi-differenced detrended Dickey–Fuller-type test of Elliott et al. (1996) has both size and power which tend to zero asymptotically. An extension of the Elliott et al. (1996) approach to allow for a quadratic trend resolves this problem but is shown to result in large power losses relative to the standard detrended test when no quadratic trend is present. We consequently propose a simple and practical approach to dealing with this form of uncertainty based on a union of rejections-based decision rule whereby the unit root is rejected whenever either of the detrended or quadratic detrended unit root tests rejects. A modification of this basic strategy is also suggested which further improves on the properties of the procedure. An application to relative primary commodity price data highlights the empirical relevance of the methods outlined in this article. A by-product of our analysis is the development of a test for the presence of a quadratic trend which is robust to whether the data admit a unit root. 
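The union-of-rejections decision rule itself is simple to state in code: reject the unit-root null when either the linear-detrended or the quadratic-detrended statistic falls below its scaled critical value. The scaling constant tau, which controls the overall size of the union and must be calibrated by simulation, appears here only as a placeholder argument:

```python
def union_of_rejections(t_lin, t_quad, cv_lin, cv_quad, tau=1.0):
    # Reject the unit-root null if either detrended test rejects.
    # tau scales both critical values so the union has correct overall size
    # (tau = 1.0 is a placeholder; the calibrated value comes from simulation).
    return (t_lin < tau * cv_lin) or (t_quad < tau * cv_quad)
```

Because each individual test can reject, the union is more liberal than either test alone, which is exactly why the critical values need a size-controlling scaling.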
16.
Zheng Su, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1163-1170
Johns (1988), Davison (1988), and Do and Hall (1991) used importance sampling for calculating bootstrap distributions of one-dimensional statistics. Realizing that their methods cannot be extended easily to multi-dimensional statistics, Fuh and Hu (2004) proposed an exponential tilting formula for statistics of multi-dimension, which is optimal in the sense that the asymptotic variance is minimized for estimating tail probabilities of asymptotically normal statistics. For one-dimensional statistics, Hu and Su (2008) proposed a multi-step variance minimization approach that can be viewed as a generalization of the two-step variance minimization approach proposed by Do and Hall (1991). In this article, we generalize the approach of Hu and Su (2008) to multi-dimensional statistics, which applies to general statistics and does not resort to asymptotics. Empirical results on a real survival data set show that the proposed algorithm provides significant computational efficiency gains.
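The basic mechanics of importance sampling for a bootstrap tail probability can be sketched as follows: resample with exponentially tilted probabilities and reweight each bootstrap sample by its likelihood ratio. The tilt below is chosen by the simple recipe of matching the tilted mean to the threshold, a common heuristic rather than the variance-minimizing choice derived by Fuh and Hu (2004):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(1.0, 100)
n = len(x)
c = x.mean() + 2 * x.std(ddof=1) / np.sqrt(n)    # tail threshold for the bootstrap mean
B = 20_000

# naive bootstrap estimate of p = P*(bootstrap mean > c)
idx = rng.integers(0, n, size=(B, n))
p_naive = np.mean(x[idx].mean(axis=1) > c)

def tilted_mean(lam):
    # mean of x under resampling probabilities proportional to exp(lam * x_i)
    w = np.exp(lam * (x - x.max()))
    return np.sum(w * x) / np.sum(w)

lo_l, hi_l = 0.0, 5.0
for _ in range(60):                               # bisection: tilted mean == c
    mid = 0.5 * (lo_l + hi_l)
    if tilted_mean(mid) < c:
        lo_l = mid
    else:
        hi_l = mid
lam = 0.5 * (lo_l + hi_l)

a = lam * x
logp = a - (np.log(np.sum(np.exp(a - a.max()))) + a.max())   # log tilted probabilities
idx2 = rng.choice(n, size=(B, n), p=np.exp(logp))
logw = -n * np.log(n) - logp[idx2].sum(axis=1)    # likelihood ratio per bootstrap sample
p_is = np.mean(np.exp(logw) * (x[idx2].mean(axis=1) > c))
```

Both estimators target the same tail probability; the tilted sampler spends most of its draws in the tail region, which is the source of the variance reduction.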
17.
Vee Ming Ng, Communications in Statistics - Theory and Methods, 2013, 42(24): 4407-4412
Bayesian inference is considered for the precision matrix of the multivariate regression model with the distribution of the random responses belonging to the multivariate scale mixtures of normal distributions. The posterior distribution and some identities involving expectations taken with respect to this posterior distribution are derived when the prior distribution of the parameters is from the conjugate family. The results are specialized to the case where the random responses have a matrix-t distribution, thus generalizing the results of Zellner (1976) and Muirhead (1986).
18.
Gadre and Rattihalli [5] introduced the Modified Group Runs (MGR) control chart to identify increases in the fraction non-conforming and to detect shifts in the process mean. The MGR chart reduces the out-of-control average time-to-signal (ATS) compared with most well-known control charts. In this article, we develop the Side Sensitive Modified Group Runs (SSMGR) chart to detect shifts in the process mean. With the help of numerical examples, it is illustrated that the SSMGR chart performs better than Shewhart's X¯ chart, the synthetic chart [12], the Group Runs chart [4], the Side Sensitive Group Runs chart [6], as well as the MGR chart [5]. In some situations it is also superior to the Cumulative Sum chart [9] and the exponentially weighted moving average chart [10]. Its steady-state performance is also better than that of the above charts.
19.
Communications in Statistics - Theory and Methods, 2013, 42(4): 753-766
ABSTRACT A confidence interval and test are obtained for the mean of an asymmetric distribution using a random sample of size n. The method is based on N. J. Johnson's (1978) modified t-test, in which terms of Cornish–Fisher expansions involving the third moment are used to adjust the conventional statistic so that it more closely follows a Student's t-distribution with n - 1 degrees of freedom. Johnson's (1978) test cannot be inverted uniquely, so the corresponding confidence interval for the mean may be disjoint. However, an artificial term of small order can be added to make inversion of the test a uniquely defined operation, which prevents such disjointness. The resulting one-sided and two-sided intervals perform better than others in the literature for skewed distributions, and perform well for a normal distribution. The two-sided interval may be recommended for general use if the sample size is 10 or more and the nominal confidence coefficient is 95% or less, or if the sample size is 30 or more and the confidence coefficient is 99% or less.
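The leading term of Johnson's (1978) correction is easy to sketch: a skewness adjustment mu3 / (6 s^2 n) added to the numerator of the usual t statistic, where mu3 is the sample third central moment. Only this first correction term is shown, so it is a simplified form rather than the article's exact statistic:

```python
import numpy as np

def johnson_t(x, mu0):
    # Johnson-style skewness-corrected t statistic (leading correction term only):
    # adds mu3 / (6 * s^2 * n) to the numerator of the conventional t
    n = len(x)
    xbar = x.mean()
    s2 = x.var(ddof=1)
    mu3 = np.mean((x - xbar) ** 3)           # sample third central moment
    return (xbar - mu0 + mu3 / (6 * s2 * n)) / np.sqrt(s2 / n)

rng = np.random.default_rng(6)
x = rng.exponential(1.0, 2000)               # right-skewed sample, true mean 1
t_plain = (x.mean() - 1.0) / np.sqrt(x.var(ddof=1) / len(x))
t_john = johnson_t(x, 1.0)
```

For right-skewed data the correction is positive, shifting the statistic to offset the skewness of the sampling distribution of the mean.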
20.
The prediction of the one-step-ahead observation of the first-order autoregressive process in the presence of outliers is considered. The mean square of the prediction error is obtained based on the median estimator of the model parameter for a stationary process. Monte Carlo simulation methods are employed to investigate the performance of the proposed estimator as well as the conventional ordinary least squares estimators proposed by Zhang and Shaman (1995) and Kabaila and He (1999) for a process without outliers. The results show that the proposed method outperforms the conventional method. These conclusions are substantiated with results from actual datasets.
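For intuition, the sketch below contrasts ordinary least squares with one simple median-based estimator of the AR(1) coefficient, the median of the ratios x_t / x_{t-1}, on a series contaminated by additive outliers. This median-of-ratios form is an illustrative robust alternative, not necessarily the article's median estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 5000, 0.6
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()     # clean AR(1) with coefficient 0.6

y = x.copy()                                  # observed series with additive outliers:
hit = rng.random(n) < 0.05                    # 5% of points shifted by +/- 10
y[hit] += rng.choice([-10.0, 10.0], size=hit.sum())

# OLS-type estimator: badly attenuated by the outlier-inflated denominator
phi_ols = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# median of ratios: each clean ratio is phi + e_t / y_{t-1}, symmetric about phi,
# so the median resists the contaminated ratios
phi_med = np.median(y[1:] / y[:-1])
```

The OLS estimate collapses toward zero because the outliers inflate the lagged variance, while the median of ratios stays near the true coefficient; the article studies how such robustness feeds through to the one-step-ahead prediction error.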