20 matching records found; search took 31 ms.
1.
Communications in Statistics - Theory and Methods, 2012, 41(1):78-87
Abstract: In this article, we revisit the problem of fitting a mixture model under the assumption that the mixture components are symmetric and log-concave. To this end, we first study the nonparametric maximum likelihood estimation (MLE) of a monotone log-concave probability density. To fit the mixture model, we propose a semiparametric EM (SEM) algorithm, which can be adapted to other semiparametric mixture models. In our numerical experiments, we compare our algorithm to that of Balabdaoui and Doss (2018, Inference for a two-component mixture of symmetric distributions under log-concavity. Bernoulli 24(2):1053–71) and to other mixture-model methods on both simulated and real-world datasets.
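The E-step/M-step alternation that a semiparametric EM algorithm builds on can be sketched as follows. This is a minimal illustrative sketch for a two-component location mixture: a plain Gaussian density stands in for the nonparametric symmetric log-concave density estimate used in the article, so only the alternation itself, not the article's estimator, is shown.

```python
import math
import random

def em_two_component(x, iters=50):
    """Toy EM for a two-component location mixture.

    A unit-variance Gaussian density stands in for the symmetric
    log-concave MLE of the article; only the E/M alternation is shown.
    """
    mu1, mu2, pi = min(x), max(x), 0.5
    norm = lambda v, m: math.exp(-0.5 * (v - m) ** 2) / math.sqrt(2 * math.pi)
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 1
        resp = [pi * norm(v, mu1) / (pi * norm(v, mu1) + (1 - pi) * norm(v, mu2))
                for v in x]
        # M-step: update the mixing weight and the component locations
        pi = sum(resp) / len(x)
        mu1 = sum(r * v for r, v in zip(resp, x)) / sum(resp)
        mu2 = sum((1 - r) * v for r, v in zip(resp, x)) / sum(1 - r for r in resp)
    return mu1, mu2, pi
```

In the article's semiparametric version, the M-step would additionally refit the shared log-concave density rather than keep a fixed Gaussian shape.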
2.
Hafiz M. R. Khan, Communications in Statistics - Theory and Methods, 2013, 42(24):4427-4438
The purpose of this article is to investigate predictive inference for responses from the location parameter (mean) as well as from the median, given a doubly censored sample from the two-parameter Rayleigh model. The predictive results of Khan et al. (2010), who obtained future estimates from the mean, are used here to obtain predictive inference for responses from the median. A numerical example representing 66 liver cancer patients is used for the predictive analysis. It is concluded that predictive inference from the median gives more precise results than that from the location parameter mean.
3.
Robert M. Adams, Communications in Statistics - Theory and Methods, 2013, 42(13):2425-2442
This article generalizes results from Park et al. (1998) and Adams et al. (1999) on semiparametric efficient estimation of panel models. The form of semiparametric efficient estimators depends on the statistical assumptions imposed. Normality assumptions on the transitory error are sometimes inappropriate. We relax the normality assumption used in the articles above to derive more general semiparametric efficient estimators. These estimators are illustrated in a Monte Carlo simulation and an analysis of banking productivity.
4.
Extending the bifurcating autoregressive (BAR) process (cf. Cowan and Staudte, 1986) to multi-casting (multi-splitting) data, Hwang and Choi (2009) introduced multi-casting autoregression (MCAR, for short) defined on multi-casting tree-structured data. This article is concerned with the case when the MCAR model is partially specified only through the conditional mean and variance, without directly imposing an autoregressive (AR) structure. The resulting class of models is referred to as P-MCAR (partially specified MCAR). The P-MCAR considerably enlarges the class of multi-casting models, including (as special cases) MCAR, random coefficient MCAR, conditionally heteroscedastic multi-casting models, and binomial-thinning processes. Moment structures for this broad P-MCAR class are investigated. The least squares (LS) estimation method is discussed, and the asymptotic relative efficiency (ARE) of generalized LS over ordinary LS is obtained in closed form. A simulation study is conducted to illustrate the results.
5.
Pao-sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(4):531-543
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range, [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider the empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
6.
We evaluate the finite-sample behavior of different heteroskedasticity-consistent covariance matrix estimators, under both constant and unequal error variances. We consider the estimator proposed by Halbert White (HC0), and also its variants known as HC2, HC3, and HC4; the latter was recently proposed by Cribari-Neto (2004). We propose a new covariance matrix estimator, HC5. It is the first consistent estimator to explicitly take into account the effect that maximal leverage has on the associated inference. Our numerical results show that quasi-t inference based on HC5 is typically more reliable than inference based on the other covariance matrix estimators.
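The family of estimators the abstract compares shares one sandwich form and differs only in how squared residuals are discounted by leverage. A minimal sketch of the textbook HC0/HC2/HC3 weightings (HC4 and HC5 refine the leverage exponent further and are omitted here):

```python
import numpy as np

def hc_cov(X, y, kind="HC0"):
    """Heteroskedasticity-consistent covariance of OLS coefficients.

    Sandwich estimator (X'X)^{-1} X' diag(w) X (X'X)^{-1}, where the
    weights w differ by variant:
      HC0: e_i^2
      HC2: e_i^2 / (1 - h_i)
      HC3: e_i^2 / (1 - h_i)^2
    HC4 (Cribari-Neto, 2004) and HC5 make the exponent depend on the
    leverages themselves and are not reproduced in this sketch.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                              # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages h_i
    if kind == "HC0":
        w = e ** 2
    elif kind == "HC2":
        w = e ** 2 / (1 - h)
    elif kind == "HC3":
        w = e ** 2 / (1 - h) ** 2
    else:
        raise ValueError(kind)
    return XtX_inv @ (X.T * w) @ X @ XtX_inv
```

Because the HC3 weights dominate the HC0 weights elementwise, the HC3 standard errors are never smaller than the HC0 ones, which is the conservatism the finite-sample comparisons in the article probe.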
7.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(18):3222-3237
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. This method is an extension of Henderson et al. (2000, 2002). In this article, the score test is based on a joint likelihood function which combines the likelihood functions of the longitudinal biomarkers and the survival times. Henderson et al. (2000, 2002) assumed that the same random effect appears in the longitudinal component and in the Cox model, from which they derived a score test to determine whether a longitudinal biomarker is associated with time to an event. We extend this work: our score test is based on a joint likelihood function which allows other random effects to be present in the survival function. Allowing heterogeneous baseline hazards across individuals, we use simulations to explore how various factors influence the power of the score test to detect the association between a longitudinal biomarker and the survival time. These factors include the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate our method using the prothrombin index as a predictor of survival in liver cirrhosis patients.
8.
Huang (2010) proposed an optional randomized response model using a linear-combination scrambling, which is a generalization of the multiplicative scrambling of Eichhorn and Hayre (1983) and the additive scrambling of Gupta et al. (2006, 2010). In this article, we discuss two main issues: (1) Can the Huang (2010) model be improved further by using a two-stage approach? (2) Does the linear-combination scrambling provide any benefit over the additive scrambling of Gupta et al. (2010)? We will note that the answer to the first question is “yes” but the answer to the second question is “no.”
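The linear-combination scrambling being compared reports Z = S1·Y + S2 instead of the sensitive value Y, and the analyst recovers E[Y] from the known moments of the scrambling variables. A minimal simulation sketch; the uniform distributions for S1 and S2 are illustrative assumptions, not those of the cited models:

```python
import random

def linear_scramble(y_values, seed=0):
    """Linear-combination scrambling Z = S1*Y + S2 (sketch).

    S1 and S2 are scrambling variables with known means E[S1]=1 and
    E[S2]=5 under the assumed uniform laws below (an illustrative
    choice, not the article's). Respondents report only Z; the mean
    of Y is recovered as (mean(Z) - E[S2]) / E[S1].
    """
    rng = random.Random(seed)
    e_s1, e_s2 = 1.0, 5.0  # known means of the scrambling variables
    z = [rng.uniform(0.5, 1.5) * y + rng.uniform(0.0, 10.0) for y in y_values]
    return (sum(z) / len(z) - e_s2) / e_s1
```

Setting S1 ≡ 1 recovers the purely additive scrambling of Gupta et al.; the article's second question is whether the extra multiplicative component buys anything beyond that special case.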
9.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15):2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of earlier models by Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.
10.
For the first time, we provide a matrix formula for second-order covariances of maximum likelihood estimates in heteroskedastic generalized linear models, thus generalizing the results of Cordeiro (2004) and Cordeiro et al. (2006) related to the generalized linear models with known and unknown dispersion parameter, respectively. The covariance matrix formula does not involve cumulants of log-likelihood derivatives and can be easily obtained using simple matrix operations. We apply our main result to a simple model. Some simulations show that the second-order covariances can be quite pronounced in small to moderate samples. The usual covariances of the maximum likelihood estimates can be corrected by these second-order covariances.
11.
In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. Since the second duration process becomes observable only if the first event has occurred, left-truncation and dependent censoring arise if the two duration times are correlated. To confront the two potential sampling biases, Chang and Tzeng (2006) provided an inverse-probability-weighted (IPW) approach for estimating the joint probability function of successive duration times. In this note, an alternative IPW approach is proposed. A simulation study is conducted to compare the two IPW approaches.
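The IPW device both approaches rest on is simple: each observed subject is up-weighted by the inverse of the probability that it was observable at all, so that over-represented subjects do not dominate. A generic Horvitz–Thompson-style sketch, not the specific joint estimator of Chang and Tzeng:

```python
def ipw_mean(values, probs):
    """Inverse-probability-weighted mean (Horvitz-Thompson style).

    Each observed value is weighted by 1/p_i, where p_i is its
    probability of being observed; subjects that truncation or
    censoring make easy to observe are thereby down-weighted. This is
    a generic sketch of the weighting idea only.
    """
    w = [1.0 / p for p in probs]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

For successive duration times the probabilities p_i would themselves be estimated from the truncation and censoring distributions, which is where the two IPW approaches in the note differ.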
12.
Rodney W. Strachan, Econometric Reviews, 2013, 32(2-4):439-468
This paper generalizes the cointegrating model of Phillips (1991) to allow for I(0), I(1) and I(2) processes. The model has a simple form that permits a wider range of I(2) processes than are usually considered, including a more flexible form of polynomial cointegration. Further, the specification relaxes restrictions identified by Phillips (1991) on the I(1) and I(2) cointegrating vectors and restrictions on how the stochastic trends enter the system. To date there has been little work on Bayesian I(2) analysis and so this paper attempts to address this gap in the literature. A method of Bayesian inference in potentially I(2) processes is presented with application to Australian money demand using a Jeffreys prior and a shrinkage prior.
13.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is introduced into the test statistic to deflate large test statistics caused by the small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in the modified t-type test statistics and use simulation studies to investigate the power and the control of the FDR of the considered methods.
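The fudge-factor adjustment the abstract discusses is a one-line change to the t-type statistic: the gene-wise standard error s_i in the denominator is inflated by a constant s0, so genes with tiny variances no longer produce huge statistics. A minimal sketch of that modified statistic:

```python
def sam_statistic(mean_diff, se, s0):
    """SAM-type modified t statistic d_i = r_i / (s_i + s0).

    mean_diff holds the gene-wise mean differences r_i, se the
    gene-wise standard errors s_i. Adding the fudge factor s0 to the
    denominator damps statistics driven purely by small variances;
    s0 = 0 recovers the ordinary t-type statistic. How to choose s0
    is exactly the question the article's comparison addresses.
    """
    return [r / (s + s0) for r, s in zip(mean_diff, se)]
```

With s0 = 0 a gene with r = 1 and s = 0.01 scores 100; raising s0 to about the typical standard error pulls that same gene back toward an ordinary-sized statistic.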
14.
Consider a skewed population, and suppose an intelligent guess can be made about an interval that contains the population mean. There may exist biased estimators with smaller mean squared error than the arithmetic mean within such an interval. This article indicates when it is advisable to shrink the arithmetic mean towards a guessed interval using root estimators; the goal is to obtain an estimator that is better near the average of natural origins. An estimator is proposed that contains the Thompson (1968) ordinary shrinkage estimator, the Jenkins et al. (1973) square-root estimator, and the arithmetic sample mean as special cases. The bias and the mean squared error of the proposed, more general estimator are compared with those of the three special cases. Shrinkage coefficients that yield minimum mean squared error estimators are obtained. The proposed estimator is considerably more efficient than the three special cases, and this remains true for highly skewed populations. The merits of the proposed shrinkage square-root estimator are supported by the results of numerical and simulation studies.
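The ordinary-shrinkage special case mentioned above is just a convex pull of the sample mean toward the guess. A minimal sketch, shrinking toward the midpoint of the guessed interval (the article's square-root variants replace this linear pull with a root-type one; the exact general form is not reproduced here):

```python
def shrink_to_interval(xbar, low, high, k=0.5):
    """Thompson-style ordinary shrinkage toward a guessed interval.

    Returns k * xbar + (1 - k) * m, where m = (low + high) / 2 is the
    midpoint of the guessed interval [low, high]. k = 1 recovers the
    arithmetic mean unchanged; smaller k trades bias for variance,
    paying off when the true mean really lies near the guess.
    """
    m = 0.5 * (low + high)
    return k * xbar + (1 - k) * m
```

When the guess is good, the variance of the shrunken estimator is k² times that of the mean while the bias stays small, which is the MSE gain the article quantifies for its more general family.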
15.
We carried out a simulation study based on the methodology of Newcombe (1998) to compare tests for the difference of two binomial proportions, applying different continuity corrections to the saddlepoint approximation to tail probabilities. In this article, we propose a new continuity correction based on the least common multiple of the two sample sizes. We take the best test to be the one whose actual Type I error rates are, on the whole, closest to α without exceeding α, where α is the nominal level of significance.
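A continuity correction shrinks the observed proportion difference toward zero before the tail probability is approximated. The article's exact lcm-based form is not reproduced here; the hypothetical variant below, 1/(2·lcm(n1, n2)), is only meant to illustrate how an lcm-based term reduces to the classical 1/(2n) correction when the sample sizes agree:

```python
import math

def cc_lcm(n1, n2):
    """Hypothetical continuity-correction term based on lcm(n1, n2).

    1 / (2 * lcm(n1, n2)); an illustrative stand-in, not the article's
    formula. For n1 == n2 == n it equals the classical 1/(2n).
    """
    lcm = n1 * n2 // math.gcd(n1, n2)
    return 1.0 / (2 * lcm)

def corrected_diff(x1, n1, x2, n2):
    """Observed proportion difference, shrunk toward zero by the correction."""
    d = x1 / n1 - x2 / n2
    c = cc_lcm(n1, n2)
    return math.copysign(max(abs(d) - c, 0.0), d)
```

Because lcm(n1, n2) grows when the sample sizes are incommensurate, this kind of correction becomes milder precisely where the attainable proportion differences form a finer grid.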
16.
Nonlinear heteroscedastic models are widely used in econometrics and statistical applications. We derive matrix formulae for the second-order biases of the maximum likelihood estimators of the parameters in the mean and variance response which generalize previous results by Cook et al. (1986) and Cordeiro (1993). The biases of the estimators are easily obtained as vectors of regression coefficients from suitable weighted linear regressions. The practical use of such biases is illustrated in a simulation study and in an application to a real data set.
17.
ABSTRACT: This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the “multicollinearity problem” identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
18.
Thatphong Awirothananon, Communications in Statistics - Simulation and Computation, 2013, 42(8):1757-1788
In this article, we use Monte Carlo simulations to examine the performance of two newly developed procedures, those of Smith et al. (2006) and Psaradakis and Spagnolo (2006), that jointly select the number of states and variables in Markov-switching models. The former develops a Markov switching criterion (MSC) designed specifically for Markov-switching models, while the latter recommends standard complexity-penalised information criteria (BIC, HQC, and AIC) for joint determination of the state dimension and the autoregressive order of Markov-switching models. The Monte Carlo evidence shows that BIC outperforms MSC, while MSC and HQC are preferable to AIC.
19.
Pao-Sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(3):603-612
In this article, we consider M-estimators for the linear regression model when both the response and covariate variables are subject to double censoring. The proposed estimators are constructed as functionals of three types of estimators for a bivariate survival distribution. The first two estimators are the generalizations of the Campbell and Földes (1982) and Dabrowska (1988) estimators proposed by Shen (2009). The third estimator is the generalization of the Prentice and Cai (1992) estimator. The consistency of the proposed M-estimators is established. A simulation study is conducted to investigate the performance of the proposed estimators. Furthermore, simple bootstrap methods are used to estimate standard deviations and construct interval estimators.
20.
We consider a new generalization of the skew-normal distribution introduced by Azzalini (1985). We denote this distribution Beta skew-normal (BSN) since it is a special case of the Beta generated distribution (Jones, 2004). Some properties of the BSN are studied. We pay attention to some generalizations of the skew-normal distribution (Bahrami et al., 2009; Sharafi and Behboodian, 2008; Yadegari et al., 2008) and to their relations with the BSN.