20 similar records found; search took 78 ms
1.
Feng-Shou Ko, Communications in Statistics: Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.
2.
In this article, we introduce shared gamma frailty models with three different baseline distributions, namely the Weibull, generalized exponential, and exponential power distributions. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in these models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply these three models to the real-life bivariate survival dataset of McGilchrist and Aisbett (1991) on kidney infection, and a better model is suggested for the data.
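As a sketch of the model class (not the authors' MCMC estimation code), the following simulates bivariate survival times under a shared gamma frailty with a Weibull baseline hazard; the function name and parameter values are chosen purely for illustration:

```python
import math
import random

def simulate_shared_frailty(n_pairs, shape=1.5, scale=2.0, theta=0.5, seed=1):
    """Simulate bivariate survival times under a shared gamma frailty
    with a Weibull(shape, scale) baseline.  The frailty w is drawn per
    pair from a gamma distribution with mean 1 and variance theta, so
    the conditional survival function is S(t | w) = exp(-w (t/scale)^shape),
    which is inverted directly to draw each lifetime."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        w = rng.gammavariate(1.0 / theta, theta)  # E[w] = 1, Var[w] = theta
        t1 = scale * (-math.log(rng.random()) / w) ** (1.0 / shape)
        t2 = scale * (-math.log(rng.random()) / w) ** (1.0 / shape)
        pairs.append((t1, t2))
    return pairs

pairs = simulate_shared_frailty(20000)
```

Because the frailty is shared within a pair, the two simulated lifetimes are positively dependent, which is the feature the shared-frailty models above are designed to capture.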
3.
We consider the filtering model of Frey and Schmidt (2012), stated under the real-world probability measure, and develop a method for estimating the parameters in this framework using time-series data of CDS index spreads and classical maximum-likelihood algorithms. The estimation approach incorporates the Kushner-Stratonovich SDE for the dynamics of the filtering probabilities. A convenient formula for the survival probability is a prerequisite for our estimation algorithm. We apply the developed maximum-likelihood algorithms to market data for historical CDS index spreads (iTraxx Europe Main Series) in order to estimate the parameters in the nonlinear filtering model for an exchangeable credit portfolio. Several such estimations are performed, along with accompanying statistical and numerical computations.
4.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used in analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: the fudge factor is introduced to deflate test statistics that are large only because of a small standard error of gene expression. Lin et al. (2008) pointed out that the fudge factor does not effectively improve the power and the control of the FDR, as compared to the SAM procedure without the fudge factor, in the presence of small-variance genes. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in the modified t-type test statistics, and use simulation studies to investigate the power and the control of the FDR of the considered methods.
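A minimal sketch of the fudge-factor idea follows; this is a generic SAM-style moderated statistic with a user-chosen s0, not Tusher et al.'s exact procedure (where s0 is chosen from the data):

```python
import statistics

def sam_statistic(group1, group2, s0=0.0):
    """SAM-style moderated two-sample t-type statistic: the fudge
    factor s0 is added to the per-gene standard error, damping
    statistics that are inflated only because the gene's variance
    is tiny."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    pooled_var = ((n1 - 1) * statistics.variance(group1) +
                  (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    se = (pooled_var * (1.0 / n1 + 1.0 / n2)) ** 0.5
    return (m1 - m2) / (se + s0)

# A low-variance gene: a tiny mean shift yields a huge plain t-statistic,
# while the fudge factor shrinks it toward a more plausible value.
g1 = [1.00, 1.01, 0.99]
g2 = [1.05, 1.06, 1.04]
plain = sam_statistic(g1, g2)          # very large in magnitude
moderated = sam_statistic(g1, g2, 0.1) # much smaller in magnitude
```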
5.
N. Unnikrishnan Nair, Communications in Statistics: Theory and Methods, 2013, 42(2): 222-232
Quantile functions are equivalent alternatives to distribution functions in the modeling and analysis of statistical data. The present article discusses the role of quantile functions in reliability studies. We present the hazard, mean residual, variance residual, and percentile residual quantile functions, their mutual relationships, and expressions for the quantile functions in terms of these functions. Further, some theoretical results relating to the Hankin and Lee (2006) lambda distribution are discussed.
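To make the quantile-based reliability functions concrete, here is a small numeric sketch of the hazard quantile function H(u) = 1/((1-u) q(u)), where q(u) = Q'(u) is the quantile density; the derivative is taken numerically for illustration, and the exponential example is chosen because its hazard is constant:

```python
import math

def hazard_quantile(Q, u, h=1e-6):
    """Hazard quantile function H(u) = 1 / ((1 - u) * q(u)), with the
    quantile density q(u) = Q'(u) approximated by a central difference."""
    q = (Q(u + h) - Q(u - h)) / (2 * h)
    return 1.0 / ((1.0 - u) * q)

lam = 2.0
Q_exp = lambda u: -math.log(1.0 - u) / lam  # exponential quantile function
```

For the exponential distribution with rate lam, H(u) returns the constant hazard lam at every u, matching the usual distribution-function formulation.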
6.
By applying the recursion of Huffer (1988) repeatedly, we propose an algorithm for evaluating the null joint distribution of Dixon-type test statistics for testing discordancy of k upper outliers in exponential samples. By using the critical values of Dixon-type test statistics determined from the proposed algorithm and those of Cochran-type test statistics presented earlier by Lin and Balakrishnan (2009), we carry out an extensive Monte Carlo study to investigate the powers and the error probabilities for the effects of masking and swamping when the number of outliers k = 2 and 3. Based on our empirical findings, we recommend Rosner’s (1975) sequential test procedure based on Dixon-type test statistics for testing multiple outliers from an exponential distribution.
7.
Pao-Sheng Shen, Communications in Statistics: Simulation and Computation, 2013, 42(3): 603-612
In this article, we consider M-estimators for the linear regression model when both the response and covariate variables are subject to double censoring. The proposed estimators are constructed as functionals of three types of estimators of a bivariate survival distribution. The first two estimators are the generalizations of the Campbell and Földes (1982) and Dabrowska (1988) estimators proposed by Shen (2009). The third estimator is a generalization of the Prentice and Cai (1992) estimator. The consistency of the proposed M-estimators is established. A simulation study is conducted to investigate the performance of the proposed estimators. Furthermore, simple bootstrap methods are used to estimate standard deviations and construct interval estimators.
8.
Jean-François Quessy, Communications in Statistics: Theory and Methods, 2013, 42(19): 3510-3531
Population and sample versions of Kendall and Spearman measures of association suitable for multivariate ordinal data are defined. The latter generalize the indices of dependence of Ruymgaart and van Zuijlen (1978), Joe (1990), and Schmid and Schmidt (2007) by allowing atoms in the underlying distribution. The representation of the proposed empirical measures as U-statistics makes it possible to establish their asymptotic normality under general distributions. Special attention is given to tests of independence for multivariate ordinal data, where the power of the new methodologies is investigated under fixed and contiguous alternatives.
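The need to handle atoms (ties) is the key point above. As a simple illustration of a ties-corrected rank association measure in the bivariate case, here is the standard sample Kendall tau-b; this is offered only as a familiar reference point, not the multivariate measures the article defines:

```python
import math

def kendall_tau_b(x, y):
    """Sample Kendall tau-b: concordant minus discordant pairs, with
    the denominator adjusted for pairs tied in either margin, so the
    measure stays in [-1, 1] in the presence of atoms."""
    n = len(x)
    conc = disc = tied_x = tied_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0:
                tied_x += 1
            if dy == 0:
                tied_y += 1
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    conc += 1
                else:
                    disc += 1
    n0 = n * (n - 1) // 2
    return (conc - disc) / math.sqrt((n0 - tied_x) * (n0 - tied_y))
```

Without the tie adjustment, ordinal data with many atoms could never reach association 1 even under perfect monotone dependence.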
9.
The K-nearest-neighbor (Knn) method is known to be more suitable for fitting nonparametrically specified curves than the kernel method (with a globally fixed smoothing parameter) when data sets are highly unevenly distributed. In this paper, we propose to estimate a nonparametric regression function subject to a monotonicity restriction using the Knn method. We also propose a new convergence criterion to measure the closeness between the unconstrained and the (monotone) constrained Knn-estimated curves. This method is an alternative to the monotone kernel methods proposed by Hall and Huang (2001) and Du et al. (2013). We use a bootstrap procedure for testing the validity of the monotone restriction. We apply our method to the “Job Market Matching” data taken from Gan and Li (2016) and find that the unconstrained/constrained Knn estimators work better than kernel estimators for this type of highly unevenly distributed data.
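A minimal sketch of the two ingredients follows: a Knn regression estimate, and a pool-adjacent-violators projection onto nondecreasing sequences (one standard monotonizing device; the paper's own constraint method may differ):

```python
def knn_regression(xs, ys, x0, k):
    """Unconstrained k-nearest-neighbour estimate at x0: the mean
    response of the k sample points closest to x0."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

def pava(values):
    """Pool-adjacent-violators: project a sequence of fitted values
    onto the set of nondecreasing sequences by repeatedly merging
    adjacent blocks that violate monotonicity into their weighted mean."""
    out = []  # stack of [block_mean, block_weight]
    for v in values:
        out.append([v, 1])
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    result = []
    for mean, w in out:
        result.extend([mean] * w)
    return result
```

Evaluating the Knn fit on a grid and then applying pava to the fitted values gives a monotone version of the curve; the gap between the two fits is what a closeness criterion of the kind described above would measure.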
10.
We propose a class of estimators for the population mean when there are missing data in the data set. Obtaining the mean square error equations of the proposed estimators, we show the conditions where the proposed estimators are more efficient than the sample mean, ratio-type estimators, and the estimators in Singh and Horn (2000) and Singh and Deo (2003) in the case of missing data. These conditions are also supported by a numerical example.
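For readers unfamiliar with the ratio-type benchmark mentioned above, here is the classical ratio estimator of a population mean computed from respondents only; this is a generic textbook form for illustration, not the specific class proposed in the article:

```python
def ratio_estimator(y_resp, x_resp, x_pop_mean):
    """Classical ratio-type estimator of the population mean of y:
    the known population mean of the auxiliary variable x is scaled
    by the respondent-level ratio of y totals to x totals, so the
    auxiliary information compensates for the missing y values."""
    r = sum(y_resp) / sum(x_resp)
    return r * x_pop_mean
```

When y is exactly proportional to x among respondents, the estimator recovers the population mean of y exactly, which is why it beats the plain respondent mean whenever y and x are strongly positively related.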
11.
We consider a new generalization of the skew-normal distribution introduced by Azzalini (1985). We call this distribution the Beta skew-normal (BSN), since it is a special case of the Beta generated distribution (Jones, 2004). Some properties of the BSN are studied. We pay attention to some generalizations of the skew-normal distribution (Bahrami et al., 2009; Sharafi and Behboodian, 2008; Yadegari et al., 2008) and to their relations with the BSN.
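In the Beta-generated construction of Jones (2004), the density is the Beta(a, b) density evaluated at the parent CDF, times the parent density. A numeric sketch with the skew-normal parent follows; the CDF is computed here by simple quadrature, which is an implementation convenience rather than anything from the article:

```python
import math

def sn_pdf(x, alpha):
    """Azzalini (1985) skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))
    return 2 * phi * Phi

def sn_cdf(x, alpha, lo=-10.0, n=4000):
    """Skew-normal CDF by trapezoidal quadrature (adequate here)."""
    h = (x - lo) / n
    s = 0.5 * (sn_pdf(lo, alpha) + sn_pdf(x, alpha))
    s += sum(sn_pdf(lo + i * h, alpha) for i in range(1, n))
    return s * h

def bsn_pdf(x, alpha, a, b):
    """Beta skew-normal density in the Beta-generated sense:
    F(x)^(a-1) * (1 - F(x))^(b-1) * f(x) / B(a, b), with F and f the
    skew-normal CDF and density."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    F = sn_cdf(x, alpha)
    return (F ** (a - 1)) * ((1 - F) ** (b - 1)) * sn_pdf(x, alpha) / B
```

Setting a = b = 1 collapses the BSN back to the plain skew-normal, and alpha = 0 with a = b = 1 recovers the standard normal density, which gives two quick sanity checks on the construction.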
12.
Julián de la Horra, Communications in Statistics: Theory and Methods, 2013, 42(10): 1905-1914
The positive false discovery rate was introduced by Storey (2003) as an alternative to the familywise error rate for the case in which a large number of hypotheses are tested simultaneously. The positive false discovery rate has a very nice Bayesian interpretation (as shown by Storey, 2003), and its robustness is analyzed. The emphasis is on the ε-contamination class (one of the most widely used classes of priors in Bayesian robustness), and it is shown that robustness is not obtained when the basic prior concentrates the probability on the null hypothesis.
13.
14.
Nripes Kumar Mandal, Communications in Statistics: Theory and Methods, 2013, 42(11): 1989-1999
This article studies a mixture-amount model, which is quadratic both in the proportions of mixing components and the amount of mixture. Using the pseudo-Bayesian approach of Pal and Mandal (2006), it attempts to find the A-optimal design for the estimation of the optimum mixing proportions and the optimum amount.
15.
In this article, a multivariate threshold varying conditional correlation (TVCC) model is proposed. The model extends the idea of Engle (2002) and Tse and Tsui (2002) to a threshold framework. This model retains the interpretation of the univariate threshold GARCH model and allows for dynamic conditional correlations. Techniques of model identification, estimation, and model checking are developed. Some simulation results are reported on the finite sample distribution of the maximum likelihood estimate of the TVCC model. Real examples demonstrate the asymmetric behavior of the mean and the variance in financial time series and the ability of the TVCC model to capture these phenomena.
16.
Communications in Statistics: Theory and Methods, 2012, 41(1): 243-256
Takahasi and Wakimoto (1968) derived a sharp upper bound on the efficiency of the balanced ranked-set sampling (RSS) sample mean relative to the simple random sampling (SRS) sample mean under perfect rankings. The bound depends on the set size and is achieved for uniform distributions. Here we generalize the Takahasi and Wakimoto (1968) result by finding a sharp upper bound in the case of unbalanced RSS. The bound depends on the particular unbalanced design, and the distributions where the bound is achieved can be highly nonuniform. The bound under perfect rankings can be exceeded under imperfect rankings.
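The balanced-case bound is (k+1)/2 for set size k, attained by the uniform distribution. A quick Monte Carlo sketch (illustrative code, not from the article) compares the RSS and SRS sample means for Uniform(0,1) with k = 3, where the relative efficiency should sit near (3+1)/2 = 2:

```python
import random
import statistics

def rss_mean(k, rng):
    """One balanced RSS cycle of set size k under perfect rankings:
    for each rank r, draw a fresh set of k Uniform(0,1) units and
    keep the r-th order statistic; return the mean of the k kept units."""
    return statistics.mean(sorted(rng.random() for _ in range(k))[r]
                           for r in range(k))

def srs_mean(k, rng):
    """Simple random sample mean of k Uniform(0,1) draws."""
    return statistics.mean(rng.random() for _ in range(k))

rng = random.Random(7)
k, reps = 3, 20000
re = (statistics.variance(srs_mean(k, rng) for _ in range(reps)) /
      statistics.variance(rss_mean(k, rng) for _ in range(reps)))
# re estimates Var(SRS mean) / Var(RSS mean), i.e. the relative efficiency
```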
17.
Econometric Reviews, 2013, 32(3): 269-287
In many applications, a researcher must select an instrument vector from a candidate set of instruments. If the ultimate objective is to perform inference about the unknown parameters using conventional asymptotic theory, then we argue that it is desirable for the chosen instrument vector to satisfy four conditions which we refer to as orthogonality, identification, efficiency, and non-redundancy. It is impossible to verify a priori which elements of the candidate set satisfy these conditions; this can only be done using the data. However, once the data are used in this fashion it is important that the selection process does not contaminate the limiting distribution of the parameter estimator. We refer to this requirement as the inference condition. In a recent paper, Andrews (D. W. K. Andrews (1999). Consistent moment selection procedures for generalized method of moments estimation. Econometrica 67:543-564) has proposed a method of moment selection based on an information criterion involving the overidentifying restrictions test. This method can be shown to select an instrument vector which satisfies the orthogonality condition with probability one in the limit. In this paper, we consider the problem of instrument selection based on a combination of the efficiency and non-redundancy conditions, which we refer to as the relevance condition. It is shown that, within a particular class of models, certain canonical correlations form the natural metric for relevancy, and this leads us to propose a canonical correlations information criterion (CCIC) for instrument selection. We establish conditions under which our method satisfies the inference condition. We also consider the properties of an instrument selection method based on the sequential application of Andrews' (1999) method and CCIC.
18.
Hong Zhang, Communications in Statistics: Theory and Methods, 2013, 42(7): 1228-1241
Sa and Edwards (1993) first proposed the Multiple Comparisons with a Control problem in Response Surface Methodology. They provided an exact solution for one predictor variable and a conservative solution when the number of predictor variables is more than one. Merchant et al. (1998) improved the solution for the latter case. This article improves Merchant et al.'s solution for the case of rotatable designs in two predictor variables.
19.
Tarasińska (2005) considered a method to construct the shortest-length confidence interval on the power of the t-test using a confidence interval for the population standard deviation in the noncentrality parameter. Gilliland and Li (2008) used simulations to show that this confidence interval has less than the nominal coverage, particularly in small samples. We propose to find the shortest expected-length confidence interval for the power of the t-test by accounting for the variation in the sample standard deviation, and provide the necessary constants for its implementation for some selected sample and shift sizes. It is seen that the proposed interval is reasonably robust to the specification of the population standard deviation and maintains the nominal coverage.
20.
In this article, our objective is to evaluate the performance of different tests used to compare the equality of more than two location parameters. We consider six tests (including some commonly used ones), one of which is parametric and the others nonparametric. These tests include the usual F test (Fisher and Mackenzie, 1923), the Kruskal–Wallis test (Kruskal and Wallis, 1952), the Kolmogorov–Smirnov test (David, 1958), the g test (Stekler, 1987), the f test (Batchelor, 1990), and an extension of the median test (as given in Daniel, 1990). The performance of these tests is compared under different symmetric, skewed, and contaminated probability distributions, including the Normal, Cauchy, Uniform, Laplace, Lognormal, Exponential, Weibull, Gamma, t, Chi-square, Half Normal, Mixed Weibull, and Mixed Normal distributions. Performance is measured in terms of power. We suggest appropriate tests that may perform better under different situations. It is expected that researchers will find these results useful in decision making.
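As a concrete reference for one of the tests compared above, here is a compact implementation of the Kruskal–Wallis H statistic using midranks for ties (shown without the tie-correction divisor, for brevity); it is a generic textbook version, not code from the article:

```python
def kruskal_wallis(*groups):
    """Kruskal-Wallis H statistic for k independent samples: pool the
    data, assign midranks (so ties are handled), and compare each
    group's rank sum against its expectation under equal locations."""
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1                       # run of tied values [i, j)
        midrank = (i + 1 + j) / 2.0      # average of ranks i+1 .. j
        for t in range(i, j):
            rank_sums[data[t][1]] += midrank
        i = j
    return (12.0 / (n * (n + 1)) *
            sum(rs * rs / len(g) for rs, g in zip(rank_sums, groups)) -
            3 * (n + 1))
```

For three completely separated groups of three observations each, the statistic attains its maximum value of 7.2, while identical groups give 0; large H values lead to rejecting equality of the location parameters.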