20 similar articles found; search took 31 ms
1.
In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. The proposed technique achieves better efficiency and greater protection of respondent privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations in which the proposed estimator outperforms its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.
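The two-stage device itself is not specified in the abstract. As a minimal sketch of the classical randomized response idea these models build on, the following simulates Warner's (1965) one-stage model in Python (the paper itself provides SAS code; the function names here are illustrative assumptions, not the authors' procedure):

```python
import random

def warner_estimate(responses, p):
    """Unbiased estimate of the sensitive proportion pi under Warner's
    randomized response model: each respondent answers the sensitive
    question with probability p and its complement with probability
    1 - p (p != 0.5), so P(yes) = p*pi + (1 - p)*(1 - pi)."""
    lam_hat = sum(responses) / len(responses)   # observed "yes" rate
    return (lam_hat - (1 - p)) / (2 * p - 1)

def simulate_warner(pi, p, n, rng):
    """Simulate n randomized responses for true sensitive proportion pi."""
    responses = []
    for _ in range(n):
        sensitive = rng.random() < pi           # respondent's true status
        direct = rng.random() < p               # device picks the question
        responses.append(int(sensitive if direct else not sensitive))
    return responses

rng = random.Random(42)
responses = simulate_warner(pi=0.30, p=0.7, n=100_000, rng=rng)
est = warner_estimate(responses, p=0.7)
print(round(est, 2))
```

The two-stage models cited above refine this basic trade-off between estimator variance and respondent privacy.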
2.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with those of some competing estimators. A simulation study comparing the estimator with the more recent one based on Poisson weights (Chaubey et al., 2011) shows that the two estimators have comparable finite-sample global as well as local behavior.
3.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to the test statistic to deflate the large values that arise from small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and the FDR control of the considered methods.
4.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava. Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We apply inference procedures based on current status data to our motivating example and report the findings.
5.
In this article, we consider two shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is most commonly assumed for the frailty; to compare with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit both models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). The analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using Bayesian model selection criteria, and the better-fitting model for the acute leukemia data is identified.
6.
In an earlier article (Bai et al., 1999), the problem of simultaneously estimating the number of signals and the frequencies of multiple sinusoids was considered for the case in which some observations are missing. The number of signals is estimated with an information-theoretic criterion, and the frequencies are estimated with eigenvariation linear prediction. Asymptotic properties of the procedure were investigated, but no Monte Carlo simulation was performed. In this article, a slightly different but scale-invariant detection criterion is proposed, while the estimation of the frequencies remains the same. Asymptotic properties of the new procedure are provided, Monte Carlo simulations for both procedures are carried out, and a comparison on real signals is also given.
7.
8.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson. Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow one to achieve satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
9.
Suchandan Kayal. Communications in Statistics - Theory and Methods, 2018, 47(20): 4938-4957
Several probability distributions, such as the power-Pareto distribution (see Gilchrist 2000; Hankin and Lee 2006), various forms of lambda distributions (see Ramberg and Schmeiser 1974; Freimer et al. 1988), and the Govindarajulu distribution (see Nair, Sankaran, and Vineshkumar 2012), do not have manageable distribution functions, though they have tractable quantile functions. Hence, an analytical study of the Chernoff distance between two random variables from these distributions via the traditional distribution-function-based tools becomes difficult. To simplify this, we introduce a quantile-based Chernoff distance for (left- or right-) truncated random variables and study its various properties. Some useful bounds as well as characterization results are obtained.
10.
Accelerated failure time models are useful in survival data analysis, but such models have received little attention in the context of measurement error. In this paper, we discuss an accelerated failure time model for bivariate survival data with covariates subject to measurement error. In particular, methods based on the marginal and joint models are considered. Consistency and efficiency of the resulting estimators are investigated. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring the measurement error in the covariates. As an illustration, we apply the proposed methods to analyze a data set arising from the Busselton Health Study (Knuiman et al., 1994).
11.
Viswanathan Ramakrishnan. Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses about heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
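The adaptation to heritability is the paper's contribution and is not reproduced here; as background, the standard population attributable fraction (Levin's formula, with the odds ratio standing in for the relative risk) can be computed as follows (a minimal sketch; the function name is an illustrative assumption):

```python
def attributable_fraction(odds_ratio, prevalence):
    """Population attributable fraction via Levin's formula,
    AF = p(R - 1) / (1 + p(R - 1)), using the odds ratio as an
    approximation to the relative risk R (reasonable for rare
    outcomes). `prevalence` is the prevalence of the risk factor."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# e.g. a risk factor carried by 20% of the population with OR = 3:
af = attributable_fraction(odds_ratio=3.0, prevalence=0.2)
print(round(af, 3))   # 0.2 * 2 / (1 + 0.4) = 0.4 / 1.4
```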
12.
In statistical process control applications, the multivariate T² control chart based on Hotelling's T² statistic is useful for detecting the presence of special causes of variation. In particular, the T² statistic based on the successive-differences covariance matrix estimator has been shown to be very effective in detecting a sustained step or ramp shift in the mean vector. However, the exact distribution of this statistic is unknown. In this article, we derive the maximum value of the T² statistic based on the successive-differences covariance matrix estimator. This distributional property is crucial for calculating an approximate upper control limit of a T² control chart based on successive differences, as described in Williams et al. (2006).
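The statistic in question can be sketched numerically. The following computes the T² values for a sequence of individual multivariate observations using the successive-differences covariance estimator S = (1/(2(n-1))) Σ dᵢdᵢ', dᵢ = xᵢ₊₁ - xᵢ (a minimal sketch of the standard construction, not the paper's distributional derivation):

```python
import numpy as np

def t2_successive_differences(X):
    """T^2 statistics for individual observations, using the
    successive-differences covariance estimator
    S = (1 / (2(n-1))) * sum_i d_i d_i',  d_i = x_{i+1} - x_i."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    D = np.diff(X, axis=0)                  # successive differences
    S = (D.T @ D) / (2.0 * (n - 1))         # covariance estimate
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(S)
    # quadratic form c_i' S^{-1} c_i for every row c_i
    return np.einsum('ij,jk,ik->i', centered, S_inv, centered)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))           # in-control process data
t2 = t2_successive_differences(X)
print(t2.shape)
```

An observation whose T² exceeds the chart's upper control limit signals a possible special cause; the article's result bounds the largest value this statistic can attain.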
13.
Skew-symmetric distributions of various types have attracted much attention in the literature. In this article, we introduce a uni/bimodal generalization of Azzalini's skew-normal distribution, which is an extension of the skew-generalized normal distribution obtained by Arellano-Valle et al. (2004). Our new distribution contains more parameters and is thus more flexible for data modeling. Indeed, a certain univariate case of the so-called flexible skew-symmetric distribution of Ma and Genton (2004) is also a particular case of our proposed model. We first study some basic distributional properties of the new extension, such as its distribution function, limiting behavior, and moments. Then we investigate some useful results regarding its relation to other known distributions, such as the Student's t and skew-Cauchy distributions. In addition, we present certain methods for generating the new distribution and, finally, apply the model to a real data set to illustrate its behavior compared with some rival models.
14.
Based on the work of Khalaf and Shukur (2005), Alkhamisi et al. (2006), and Muniz et al. (2010), this article considers several estimators for the ridge parameter k. It differs from the aforementioned articles in three ways: (1) data are generated from normal, Student's t, and F distributions with appropriate degrees of freedom; (2) the number of regressors considered is 4-12 instead of the usual 2-4; and (3) both the mean squared error (MSE) and the prediction sum of squares (PRESS) are considered as performance criteria. A simulation study is conducted to compare the performance of the estimators. Based on the simulation study, we find that increasing the correlation between the independent variables has a negative effect on the MSE and PRESS, whereas increasing the number of regressors has a positive effect on both. As the sample size increases, the MSE decreases even when the correlation between the independent variables is large. Interestingly, the dominance picture of the estimators remains the same under both the MSE and PRESS criteria. However, the performance of the estimators depends on the assumed error distribution of the regression model.
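The specific k-estimators compared are not listed in the abstract. As background, the ridge estimator and the classical Hoerl-Kennard choice of k (the starting point these refinements build on) can be sketched as follows (a minimal sketch under a collinear design; this is not one of the article's proposed estimators):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def hoerl_kennard_k(X, y):
    """Classical Hoerl-Kennard choice k = sigma_hat^2 / max(alpha_hat^2),
    where alpha are the OLS coefficients in the canonical
    (eigenvector-rotated) form of the model."""
    n, p = X.shape
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)
    _, V = np.linalg.eigh(X.T @ X)
    alpha = V.T @ beta_ols
    return sigma2 / np.max(alpha ** 2)

# collinear design: the second regressor is the first plus small noise
rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal(n)
X = np.column_stack([x1, x1 + 0.05 * rng.standard_normal(n),
                     rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0, 1.0]) + rng.standard_normal(n)
k = hoerl_kennard_k(X, y)
beta_ridge = ridge(X, y, k)
print(k > 0, beta_ridge.shape)
```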
15.
Soo Hak Sung. Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen-type inequalities.
16.
Formulae for the first- and second-order inclusion probabilities for the Rao et al. (1962) (RHC) sampling scheme are derived. They enable one to evaluate, for a sample drawn according to the RHC scheme, the Horvitz-Thompson (1952) estimator (HTE) along with its unbiased variance estimator given by Yates and Grundy (1953). Thus, for a sample drawn this way, one may choose between the RHC estimator (RHCE) and the HTE by finding which has the smaller coefficient of variation.
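The Horvitz-Thompson estimator evaluated here is simply the sum of sampled values weighted by their inverse inclusion probabilities. A minimal sketch on a toy population (the data are invented for illustration):

```python
def horvitz_thompson_total(y_sample, pi_sample):
    """Horvitz-Thompson estimator of the population total:
    sum of y_i / pi_i over the sampled units, where pi_i is the
    first-order inclusion probability of unit i."""
    return sum(y / pi for y, pi in zip(y_sample, pi_sample))

# toy population of 4 units; suppose units 0 and 2 were sampled
y = [10.0, 20.0, 30.0, 40.0]
pi = [0.5, 0.5, 0.5, 0.5]        # equal inclusion probabilities
sample_idx = [0, 2]
est = horvitz_thompson_total([y[i] for i in sample_idx],
                             [pi[i] for i in sample_idx])
print(est)   # (10 + 30) / 0.5 = 80.0
```

For the RHC scheme the pi values are exactly what the article's formulae supply; the Yates-Grundy variance estimator additionally requires the second-order probabilities.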
17.
Consider a skewed population, and suppose an intelligent guess can be made about an interval that contains the population mean. Within such an interval, there may exist biased estimators with smaller mean squared error than the arithmetic mean. This article indicates when it is advisable to shrink the arithmetic mean toward a guessed interval using root estimators. The goal is to obtain an estimator that is better near the average of natural origins. An estimator is proposed that contains the Thompson (1968) ordinary shrinkage estimator, the Jenkins et al. (1973) square-root estimator, and the arithmetic sample mean as special cases. The bias and mean squared error of the proposed, more general estimator are compared with those of the three special cases. Shrinkage coefficients that yield minimum mean squared error estimators are obtained. The proposed estimator is considerably more efficient than the three special cases, and this remains true for highly skewed populations. The merits of the proposed shrinkage square-root estimator are supported by the results of numerical and simulation studies.
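The root estimator itself is not given in the abstract. The underlying bias-variance trade-off, however, is easy to demonstrate: shrinking the sample mean toward a good guess trades a little bias for a large variance reduction. A minimal simulation on a skewed (exponential) population (the simple linear shrinkage used here is illustrative, not the paper's square-root estimator):

```python
import random

def shrinkage_mean(sample, t0, c):
    """Shrink the sample mean toward the guessed value t0:
    (1 - c) * xbar + c * t0, with 0 <= c <= 1."""
    xbar = sum(sample) / len(sample)
    return (1 - c) * xbar + c * t0

def mse(estimates, truth):
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)

rng = random.Random(7)
mu, n, reps = 1.0, 20, 5000
plain, shrunk = [], []
for _ in range(reps):
    sample = [rng.expovariate(1.0 / mu) for _ in range(n)]  # skewed pop.
    plain.append(sum(sample) / n)
    shrunk.append(shrinkage_mean(sample, t0=1.0, c=0.3))    # good guess
print(mse(shrunk, mu) < mse(plain, mu))
```

When the guess t0 is near the true mean, the shrunken estimator's MSE is smaller; when the guess is far off, the bias term dominates, which is exactly the regime the article's conditions characterize.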
18.
Shun Matsuura. Communications in Statistics - Theory and Methods, 2013, 42(16): 2863-2876
Selective assembly is an effective approach for improving the quality of a product assembled from two types of components when the quality characteristic is the clearance between the mating components. Mease et al. (2004) extensively studied optimal binning strategies under squared error loss in selective assembly, especially for the case in which the two types of component dimensions are identically distributed. However, the presence of measurement error in the component dimensions had not been addressed. Here we study optimal binning strategies under squared error loss when measurement error is present. We give the equations for the optimal partition limits minimizing the expected squared error loss and show that their solution is unique when the component dimensions and the measurement errors are normally distributed. We then compare the expected losses of the optimal binning strategies with and without measurement error for the normal distribution and evaluate the influence of the measurement error.
19.
For each positive integer k, a set of k-principal points of a distribution is the set of k points that optimally represents the distribution in terms of mean squared distance. However, an explicit form of the k-principal points is often difficult to obtain. Hence, a theorem established by Tarpey et al. (1995) has been influential in the literature: it states that when the distribution is elliptically symmetric, any set of k-principal points lies in the linear subspace spanned by some principal eigenvectors of the covariance matrix. This theorem is called the "principal subspace theorem." Recently, Yamamoto and Shinozaki (2000b) derived a principal subspace theorem for the 2-principal points of a location mixture of spherically symmetric distributions; in their article, the mixture ratio was set to be equal. This article derives a further result by considering a location mixture with unequal mixture ratios.
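Principal points are the population analogue of k-means centroids, so they can be approximated numerically by running k-means on a large sample. A minimal check for the standard normal, whose 2-principal points are known to be ±√(2/π) ≈ ±0.798 (a sketch using plain Lloyd iteration; not the article's theoretical derivation):

```python
import numpy as np

def kmeans_1d(x, k, iters=100, seed=0):
    """Plain Lloyd's algorithm in one dimension; the sample k-means
    centroids approximate the k-principal points of the underlying
    distribution as the sample size grows."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return np.sort(centers)

rng = np.random.default_rng(123)
x = rng.standard_normal(200_000)
centers = kmeans_1d(x, k=2)
# the 2-principal points of N(0, 1) are +-sqrt(2/pi) ~ +-0.7979
print(np.round(centers, 2))
```

Note that both approximate principal points lie on the real line through the mean, in line with the principal subspace theorem.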
20.
Tony Vangeneugden, Geert Verbeke, Clarice G.B. Demétrio. Journal of Applied Statistics, 2011, 38(2): 215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which simultaneously generalizes the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. In addition to the general situation, some important special cases, such as exchangeable clustered outcomes, are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.