20 similar documents found.
1.
Hafiz M. R. Khan, Communications in Statistics - Theory and Methods, 2013, 42(24): 4427-4438
The purpose of this article is to investigate predictive inference for responses from the location-parameter mean as well as from the median, given a doubly censored sample from the two-parameter Rayleigh model. The predictive results of Khan et al. (2010), who obtained future estimates from the mean, are used to derive predictive inference for responses from the median. A numerical example involving 66 liver cancer patients is used for the predictive analysis. It is concluded that predictive inference from the median gives more precise results than that from the location-parameter mean.
2.
Communications in Statistics - Theory and Methods, 2013, 42(4): 749-774
In this article two methods are proposed to make inferences about the parameters of a finite mixture of distributions in the context of partially identifiable censored data. The first method focuses on a mixture of location and scale models and relies on an asymptotic approximation to a suitably constructed augmented likelihood; the second method provides a full Bayesian analysis of the mixture based on a Gibbs sampler. Both methods make explicit use of latent variables and provide computationally efficient procedures compared to other methods which deal directly with the likelihood of the mixture. This may be crucial if the number of components in the mixture is not small. Our proposals are illustrated on a classical example on failure times for communication devices, first studied by Mendenhall and Hader (Mendenhall, W., Hader, R. J. (1958). Estimation of parameters of mixed exponentially distributed failure time distributions from censored life test data. Biometrika 45:504–520). In addition, we study the coverage of the confidence intervals obtained from each of the methods by means of a small simulation exercise.
3.
We propose a Bayesian approach for inference in a dynamic disequilibrium model. To circumvent the difficulties raised by the Maddala and Nelson (1974) specification in the dynamic case, we analyze a dynamic extended version of the disequilibrium model of Ginsburgh et al. (1980). We develop a Gibbs sampler based on the simulation of the missing observations. The feasibility of the approach is illustrated by an empirical analysis of the Polish credit market, for which we conduct a specification search using the posterior deviance criterion of Spiegelhalter et al. (2002).
4.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: the introduction of a fudge factor aims at deflating large values of the test statistic caused by small standard errors of gene expression. Lin et al. (2008) pointed out that the fudge factor does not effectively improve the power or the control of the FDR, compared to the SAM procedure without the fudge factor, in the presence of small-variance genes. Motivated by the simulation results of Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in modified t-type test statistics, and use simulation studies to investigate the power and FDR control of the considered methods.
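The fudge-factor adjustment described above can be sketched as a modified two-sample t-type statistic. The following is a minimal illustration, not the exact SAM implementation; the function name `sam_statistic` and the pooled-variance form are assumptions for the sketch:

```python
import numpy as np

def sam_statistic(group1, group2, s0=0.0):
    """Modified t-type statistic with a fudge factor s0 (SAM-style sketch).

    group1, group2: 1-D arrays of expression values for one gene.
    Adding s0 to the pooled standard error deflates statistics that
    are large only because the gene's variance is tiny.
    """
    n1, n2 = len(group1), len(group2)
    diff = np.mean(group1) - np.mean(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1)
                  + (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    return diff / (se + s0)
```

With s0 = 0 this reduces to the ordinary pooled two-sample t-statistic; larger s0 shrinks the magnitude of every statistic, most strongly for genes with small standard errors.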
5.
Hong Zhang, Communications in Statistics - Theory and Methods, 2013, 42(7): 1228-1241
Sa and Edwards (1993) first proposed the Multiple Comparisons with a Control problem in Response Surface Methodology. They provided an exact solution for one predictor variable and a conservative solution when the number of predictor variables is more than one. Merchant et al. (1998) improved the solution for the latter case. This article improves Merchant et al.'s solution for the case of rotatable designs in two predictor variables.
6.
Federico O'Reilly, Communications in Statistics - Theory and Methods, 2013, 42(12): 2207-2212
Lindqvist and Taraldsen (2005) introduced an interesting parametric family of distributions on the unit interval. In this note, inference procedures are given from both the classical and the Bayesian viewpoint. It is shown numerically through various examples that the posterior distribution for the parameter and the induced fiducial distribution are almost equivalent. The parametric family under study is a regular member of the Natural Exponential Family, and this fact permits the induction of a unique fiducial distribution in terms of the minimal sufficient statistic.
7.
Arnold Zellner, Tomohiro Ando, Nalan Baştürk, Herman K. van Dijk, Econometric Reviews, 2014, 33(1-4): 3-35
We discuss Bayesian inferential procedures within the family of instrumental variables regression models and focus on two issues: existence conditions for posterior moments of the parameters of interest under a flat prior, and the potential of Direct Monte Carlo (DMC) approaches for efficient evaluation of such possibly highly non-elliptical posteriors. We show that, for the general case of m endogenous variables under a flat prior, posterior moments of order r exist for the coefficients reflecting the endogenous regressors' effect on the dependent variable if the number of instruments is greater than m + r, even though there is an issue of local non-identification that causes non-elliptical shapes of the posterior. This stresses the need for efficient Monte Carlo integration methods. We introduce an extension of DMC that incorporates an acceptance-rejection sampling step within DMC. This Acceptance-Rejection within Direct Monte Carlo (ARDMC) method has the attractive property that the generated random drawings are independent, which greatly helps the fast convergence of simulation results and facilitates the evaluation of numerical accuracy. The speed of ARDMC can easily be improved further by making use of parallelized computation on multiple-core machines or computer clusters. We note that ARDMC is an analogue to the well-known "Metropolis-Hastings within Gibbs" sampling in the sense that one 'more difficult' step is used within an 'easier' simulation method. We compare the ARDMC approach with the Gibbs sampler using simulated data and two empirical data sets, involving the settler mortality instrument of Acemoglu et al. (2001) and the father's education instrument used by Hoogerheide et al. (2012a). Even without making use of parallelized computation, an efficiency gain is observed under both strong and weak instruments, and the gain can be enormous in the latter case.
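The acceptance-rejection step on which ARDMC builds can be sketched generically: proposal draws are thinned so that accepted draws are mutually independent samples from the target, which is the property the abstract credits for fast convergence and easy accuracy assessment. This is a minimal sketch of plain acceptance-rejection sampling, not the authors' ARDMC algorithm; the function and example densities are illustrative assumptions:

```python
import numpy as np

def accept_reject(target_pdf, draw_proposal, proposal_pdf, bound, n, rng):
    """Generic acceptance-rejection sampler.

    Accepts a proposal draw x with probability
    target_pdf(x) / (bound * proposal_pdf(x)), where bound satisfies
    bound >= target_pdf(x) / proposal_pdf(x) for all x.
    Accepted draws are independent samples from the target.
    """
    samples = []
    while len(samples) < n:
        x = draw_proposal(rng)
        if rng.uniform() * bound * proposal_pdf(x) <= target_pdf(x):
            samples.append(x)
    return np.array(samples)

# Example: half-normal target (unnormalized) with an Exp(1) envelope.
# The ratio exp(x - x^2 / 2) is maximized at x = 1, giving bound e^0.5.
rng = np.random.default_rng(0)
draws = accept_reject(
    target_pdf=lambda x: np.exp(-0.5 * x**2),
    draw_proposal=lambda r: r.exponential(1.0),
    proposal_pdf=lambda x: np.exp(-x),
    bound=np.exp(0.5),
    n=20000,
    rng=rng,
)
```

Because each accepted draw is independent, Monte Carlo standard errors follow directly from the sample variance, unlike MCMC output where autocorrelation must be accounted for.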
8.
Pao-Sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(10): 2295-2307
Cai and Zeng (2011) proposed an additive mixed effect model to analyze clustered right-censored data. In this article, we demonstrate that the approach of Cai and Zeng (2011) can be extended to clustered doubly censored data. Furthermore, when both left- and right-censoring variables are always observed, we propose alternative estimators using the approach of Cai and Cheng (2004). A simulation study is conducted to investigate the performance of the proposed estimators.
9.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
10.
R. Hasan Abadi, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1430-1443
Censored data arise naturally in a number of fields, particularly in problems of reliability and survival analysis. There are several types of censoring; in this article, we confine ourselves to random right censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data. They assumed that the survival function of the censoring time is free of the unknown parameter, an assumption that is sometimes inappropriate. In such cases, a proportional odds (PO) model may be more appropriate (Lam and Leung, 2001). Under this model, point and interval estimates for the unknown parameters are obtained in this article. Since it is important to check the adequacy of models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets due to Smith (2002). A Monte Carlo simulation study is conducted to examine the behavior of the estimators obtained.
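The PO model invoked above ties each conditional distribution to a baseline through a constant odds ratio: (1 - S(t))/S(t) = exp(θ) · (1 - S0(t))/S0(t). A minimal sketch under that standard formulation (the function name and parameterization are illustrative assumptions, not the authors' estimators):

```python
import numpy as np

def po_survival(baseline_surv, theta):
    """Survival function under a proportional-odds (PO) model.

    Under PO, the odds of failure by time t equal the baseline odds
    scaled by exp(theta):
        (1 - S(t)) / S(t) = exp(theta) * (1 - S0(t)) / S0(t)
    Solving for S(t) given baseline survival values S0(t).
    """
    s0 = np.asarray(baseline_surv, dtype=float)
    odds = np.exp(theta) * (1.0 - s0) / s0
    return 1.0 / (1.0 + odds)
```

Setting theta = 0 recovers the baseline survival exactly; theta > 0 inflates the failure odds and therefore lowers survival at every time point.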
11.
Journal of Statistical Computation and Simulation, 2012, 82(11): 1679-1699
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353–365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362–1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis–Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP, and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271–275]. Our algorithm has only one Metropolis–Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146–178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342–366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599–607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model-fit assessment tools. The results are compared with those obtained by Azevedo et al., and indicate that the hierarchical approach allows us to implement MCMC algorithms more easily, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
12.
Several methods have been developed for testing the ordered alternative. These include the Jonckheere–Terpstra (JT) test (Jonckheere, 1954; Terpstra, 1952), a modified JT test (MJT) (Tryon and Hettmansperger, 1987), and a test proposed by Terpstra and Magel (TM) (Terpstra and Magel, 2003), among others. This article proposes a new method for testing the ordered alternative. The proposed test is based on Kendall's tau statistic. The asymptotic distribution of the test statistic is given. A Monte Carlo simulation study is conducted comparing the estimated powers of the proposed test with existing tests under a variety of sample sizes and distributions.
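The JT statistic cited above is the sum of pairwise Mann-Whitney counts over all ordered pairs of groups. A minimal sketch of that standard statistic (not the article's new Kendall's-tau-based test); the function name and half-count tie handling are illustrative conventions:

```python
def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic for an ordered alternative.

    groups: list of sequences, ordered according to the hypothesized
    trend. For each pair of groups (i, j) with i < j, counts the pairs
    (x in group i, y in group j) with x < y; ties contribute 1/2.
    Large values support an increasing trend across groups.
    """
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    jt += (x < y) + 0.5 * (x == y)
    return jt
```

Under the null of identical distributions, JT is approximately normal for moderate samples, which is how p-values are usually obtained.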
13.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.
14.
Pao-Sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(3): 603-612
In this article, we consider M-estimators for the linear regression model when both the response and covariate variables are subject to double censoring. The proposed estimators are constructed as functionals of three types of estimators of a bivariate survival distribution. The first two are the generalizations, proposed by Shen (2009), of the Campbell and Földes (1982) and Dabrowska (1988) estimators. The third is a generalization of the Prentice and Cai (1992) estimator. The consistency of the proposed M-estimators is established. A simulation study is conducted to investigate the performance of the proposed estimators. Furthermore, simple bootstrap methods are used to estimate standard deviations and construct interval estimators.
15.
Pao-Sheng Shen, Journal of Applied Statistics, 2011, 38(4): 675-682
Double censoring arises when T represents an outcome variable that can only be accurately measured within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. In this note, using martingale arguments of Chen et al. [3], we propose an estimator (denoted by β̂) for estimating the regression coefficients of a transformation model when L is always observed. Under the Cox proportional hazards model, the proposed estimator is equivalent to the partial likelihood estimator for left-truncated and right-censored data when the left-censoring variables L are regarded as left-truncation variables. In this case, the estimator β̂ can be obtained with standard software. A simulation study is conducted to investigate the performance of β̂. For comparison, the simulation study also includes the estimator proposed by Cai and Cheng [2] for the case when L and U are always observed.
16.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). With bivariate right-censored data, it is stated in van der Laan (1996, p. 598, Ann. Statist.), Quale et al. (2006, JASA), etc., that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but not X2, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
17.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one of the objectives being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We apply inference procedures based on current status data to our motivating example, with very interesting findings.
18.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with those of some competing estimators. A simulation study comparing with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the two estimators have comparable finite-sample global as well as local behavior.
19.
Boardman and Kendell (1970) considered the problem of estimation under Type-I censoring when an item is subject to only one of two causes of failure, assuming an exponential model. Patel and Gajjar (1992) extended Boardman and Kendell's results to the case of two-stage progressive censoring. Here we consider a geometric competing-risk failure model with two independent causes of failure. Maximum likelihood estimation of the parameters is carried out using Type-I two-stage progressively censored and group-censored samples. Asymptotic standard errors of the estimators are obtained for both cases. Two illustrative examples are cited for the ungrouped and grouped competing-risk models.
20.
N. Balakrishnan, Communications in Statistics - Theory and Methods, 2013, 42(5): 880-906
In this article, we establish several recurrence relations for the single and product moments of progressively Type-II right censored order statistics from a log-logistic distribution. The use of these relations in a systematic recursive manner enables the computation of all the means, variances, and covariances of progressively Type-II right censored order statistics from the log-logistic distribution for all sample sizes n, effective sample sizes m, and all progressive censoring schemes (R1, …, Rm). The results established here generalize the corresponding results for the usual order statistics due to Balakrishnan and Malik (1987) and Balakrishnan et al. (1987). The moments so determined are then utilized to derive best linear unbiased estimators for the scale and location-scale log-logistic distributions. A comparison of these estimates with the maximum likelihood estimates is made through Monte Carlo simulation. The best linear unbiased predictors of progressively censored failure times are then discussed briefly. Finally, a numerical example is presented to illustrate all the methods of inference developed here.