20 similar records found (search time: 453 ms)
1.
Recently, Zografos and Nadarajah (2005) proposed two measures of uncertainty based on the survival function, called the survival exponential entropy and the generalized survival exponential entropy. In this article, we explore properties of the generalized survival exponential entropy and its dynamic version. We study conditions under which the generalized survival exponential entropy of the first order statistic uniquely determines the parent distribution. The exponential, Pareto, and finite range distributions, which are commonly used in reliability, are characterized using this generalized measure. Another measure of entropy is also introduced, in analogy with the cumulative entropy proposed by Di Crescenzo and Longobardi (2009), and some of its properties are given.
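For reference, the cumulative entropy of Di Crescenzo and Longobardi (2009) mentioned above, and the closely related cumulative residual entropy of Rao et al. (2004) used elsewhere in this list, are usually written as follows (standard definitions, stated here only as background, with our own notation):

\[
\mathcal{CE}(X) = -\int_0^{\infty} F(x)\,\log F(x)\,dx,
\qquad
\mathcal{E}(X) = -\int_0^{\infty} \bar{F}(x)\,\log \bar{F}(x)\,dx,
\]

where F is the distribution function and \bar{F} = 1 - F the survival function of a non-negative random variable X.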
2.
M. Mirali, Communications in Statistics - Theory and Methods, 2017, 46(22): 11047-11059
Some extensions of Shannon entropy to the survival function have recently been proposed. Misagh et al. (2011) introduced the weighted cumulative residual entropy (WCRE), which was studied further by Mirali et al. (2015). In this article, a dynamic version of the WCRE is proposed. Relationships of this measure with well-known reliability measures and ageing classes are studied, and characterization results for the exponential and Rayleigh distributions are provided. A nonparametric estimator of the dynamic WCRE is also introduced and its asymptotic behavior investigated.
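As background for this entry, the WCRE of Misagh et al. (2011) is usually written as below; its dynamic (residual) version conditions on survival past an age t and is stated here only as a plausible sketch, since the article's exact definition may differ:

\[
\mathrm{WCRE}(X) = -\int_0^{\infty} x\,\bar{F}(x)\,\log \bar{F}(x)\,dx,
\qquad
\mathrm{WCRE}(X;t) = -\int_t^{\infty} x\,\frac{\bar{F}(x)}{\bar{F}(t)}\,\log \frac{\bar{F}(x)}{\bar{F}(t)}\,dx.
\]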
3.
Vikas Kumar, Communications in Statistics - Theory and Methods, 2017, 46(17): 8343-8354
In this article, the concept of cumulative residual entropy (CRE) introduced by Rao et al. (2004) is extended to the Tsallis entropy function, together with its dynamic versions, both residual and past. We study properties and characterization results for these generalized measures. In addition, we provide characterization results for the first order statistic based on the Tsallis survival entropy.
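For orientation, the classical Tsallis entropy of order \alpha of a density f is

\[
S_\alpha(X) = \frac{1}{\alpha - 1}\left(1 - \int_0^{\infty} f^{\alpha}(x)\,dx\right), \qquad \alpha > 0,\ \alpha \neq 1;
\]

the extension described above replaces the density with the survival function \bar{F} (and, for the dynamic versions, with its residual or past analogues); the article itself should be consulted for the precise definitions.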
4.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE). It is based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). The new measure is "length-biased" and shift dependent, assigning larger weights to larger values of the random variable. Properties of the WCRE and a formula relating the WCRE to the weighted Shannon entropy are given, and related results in reliability theory are covered. Our results include inequalities and various bounds on the WCRE. The conditional WCRE and some of its properties are discussed. The empirical WCRE is proposed as an estimator of the new measure; finally, strong consistency and a central limit theorem are established.
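A minimal sketch of a plug-in (empirical) WCRE estimator in the spirit of this entry, using the empirical survival function and a midpoint rule over the spacings between order statistics; the function name and the quadrature are our own choices, and the estimator studied in the article may be defined differently:

import numpy as np

def empirical_wcre(sample):
    """Plug-in estimate of -int x * S(x) * log S(x) dx, with S replaced
    by the empirical survival function S_n of the sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # On [x_(i), x_(i+1)) the empirical survival function equals (n - i) / n.
    s = (n - np.arange(1, n)) / n
    spacings = np.diff(x)
    midpoints = (x[:-1] + x[1:]) / 2.0  # weight "x" evaluated at interval midpoints
    return -np.sum(midpoints * s * np.log(s) * spacings)

# Example: empirical WCRE of a standard exponential sample.
# print(empirical_wcre(np.random.default_rng(0).exponential(size=500)))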
5.
Recently, Feizjavadian and Hashemi (2015) introduced and studied the mean residual weighted (MRW) distribution as an alternative to the length-biased distribution, using the concepts of the mean residual lifetime and the cumulative residual entropy (CRE). In this article, a new sequence of weighted distributions, which includes the MRW distribution, is introduced based on the generalized CRE. Properties of this sequence are obtained, generalizing and extending previous results on the MRW distribution. Moreover, expressions for some known distributions are given, and finite mixtures of the new weighted distributions and the length-biased distribution are studied. Numerical examples illustrate the new results.
6.
In this article, we consider two shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common choice for the frailty distribution; to compare with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. Both models are fitted to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods, model comparison is made using Bayesian model selection criteria, and a well-fitting model is suggested for the acute leukemia data.
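A minimal sketch of the model structure described above, with our own notation: under a shared frailty model with a Gompertz baseline, the hazard of subject j in cluster (pair) i is

\[
h_{ij}(t \mid z_i) = z_i\, h_0(t)\, e^{\mathbf{x}_{ij}'\boldsymbol{\beta}},
\qquad
h_0(t) = \lambda\, e^{\gamma t}, \quad \lambda, \gamma > 0,
\]

where the shared frailty z_i follows a gamma or an inverse Gaussian distribution (typically with mean one for identifiability), and posterior computation proceeds by MCMC.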
7.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics - Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for classification and variable selection with high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with a small misclassification error used to validate the proposed selection. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated examples and three real-life examples.
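As a generic illustration only, not the authors' procedures (which build on Wilbur et al. (2002)), boosting-based ranking of binary predictors can be sketched as follows; the use of AdaBoost with its default stump learners, and the function names, are our own assumptions:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def rank_variables_by_boosting(X, y, n_estimators=200, random_state=0):
    """Rank the columns of X (a numpy array of binary predictors) by
    AdaBoost feature importance, largest first."""
    clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
    clf.fit(X, y)
    order = np.argsort(clf.feature_importances_)[::-1]
    return order, clf.feature_importances_[order]

def cv_misclassification(X, y, selected, n_estimators=200, random_state=0):
    """Validate a selected subset of columns by 5-fold cross-validated
    misclassification error."""
    clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
    return 1.0 - cross_val_score(clf, X[:, selected], y, cv=5).mean()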
8.
Recently, Abbasnejad et al. (2010) proposed a measure of uncertainty based on the survival function, called the survival entropy of order α, together with a dynamic form of it. In this paper, we derive the weighted forms of these measures and discuss their properties.
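The survival entropy of order \alpha referred to above is a Rényi-type functional of the survival function; one form consistent with that description is

\[
H_\alpha(X) = \frac{1}{1-\alpha}\,\log \int_0^{\infty} \bar{F}^{\alpha}(x)\,dx, \qquad \alpha > 0,\ \alpha \neq 1,
\]

which we state only as an assumed reference form; the exact definitions of Abbasnejad et al. (2010) and of the weighted versions should be taken from the papers themselves.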
9.
Communications in Statistics - Theory and Methods, 2013, 42(8-9): 1497-1506
Since Rao introduced the quadratic entropy (QE) in 1982, results on its mathematical and statistical properties and on its applications in data analysis and population indices have been published in the literature. In this paper, we study the asymptotic efficiency of the analysis of Rao's quadratic entropy (ANOQE), which is a generalization of the classical analysis of variance (ANOVA). Based on the results of Liu and Rao [1] and Liu [2] on the asymptotic distribution and the bootstrap of the ANOQE, we derive the Bahadur asymptotic efficiency of the ANOQE and compare the efficiency of ANOQE tests based on different QEs.
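For reference, Rao's quadratic entropy of a distribution p = (p_1, ..., p_k) over categories with pairwise dissimilarities d_{ij} is

\[
Q(p) = \sum_{i=1}^{k} \sum_{j=1}^{k} d_{ij}\, p_i\, p_j,
\]

which reduces to the Gini-Simpson index when d_{ij} = 1 for i \neq j and d_{ii} = 0.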
10.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses about heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
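In its standard (Levin-type) form, the attributable fraction is a function of the exposure prevalence p and the relative risk, with the odds ratio OR substituted when the outcome is rare:

\[
AF = \frac{p\,(OR - 1)}{1 + p\,(OR - 1)};
\]

the article adapts this quantity to quantify heritability and the shared common environment in twin data.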
11.
Tony Vangeneugden, Geert Molenberghs, Geert Verbeke, Clarice G.B. Demétrio, Communications in Statistics - Theory and Methods, 2014, 43(19): 4164-4178
In hierarchical data settings, be they of a longitudinal, spatial, multi-level, clustered, or otherwise repeated nature, the association between repeated measurements often attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to the intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion, and allows for closed-form means, variance functions, and correlation functions for a variety of outcome types and link functions. Unfortunately, for binary data with the logit link, closed forms cannot be obtained, in contrast with the probit link, for which they can be derived. We therefore concentrate on the probit case. It is of interest not only in its own right, but also as an instrument to approximate the logit case, thanks to the well-known probit-logit 'conversion.' Next to the general situation, some important special cases, such as exchangeable clustered outcomes, receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
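A sketch of one ingredient behind the closed forms in the probit case, a standard Gaussian-integral identity stated here only as background: if, conditionally on a normal random effect b_i ~ N(0, D),

\[
P(Y_{ij} = 1 \mid b_i) = \Phi\big(\mathbf{x}_{ij}'\boldsymbol{\beta} + \mathbf{z}_{ij}' b_i\big),
\quad\text{then}\quad
P(Y_{ij} = 1) = \Phi\!\left(\frac{\mathbf{x}_{ij}'\boldsymbol{\beta}}{\sqrt{1 + \mathbf{z}_{ij}' D\, \mathbf{z}_{ij}}}\right),
\]

and analogous identities give closed-form joint probabilities for pairs of outcomes, hence correlations; no such identity is available under the logit link, which is why the probit case is singled out above.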
12.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a), Bielecki et al. (2014b), and Bielecki et al. (2014c).
13.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD-based weights provides better clustering performance in terms of the error rate. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
14.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen-type inequalities.
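For reference, a sequence {X_n} converges completely to a constant c (in the sense of Hsu and Robbins) if

\[
\sum_{n=1}^{\infty} P\big(|X_n - c| > \varepsilon\big) < \infty \quad \text{for every } \varepsilon > 0,
\]

which, by the Borel-Cantelli lemma, implies almost sure convergence; the array results above are statements of this type for rowwise sums.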
15.
The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of approximation by a sequence of weighted Brownian bridges and by a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include two-sample testing procedures together with change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial-sum process representation of the integrated empirical process.
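The Komlós, Major and Tusnády (1975) approximation underlying these arguments states, roughly, that the uniform empirical process \alpha_n can be constructed on a common probability space with a sequence of Brownian bridges B_n so that

\[
\sup_{0 \le t \le 1} \big|\alpha_n(t) - B_n(t)\big| = O\!\left(n^{-1/2} \log n\right) \quad \text{almost surely};
\]

the paper transfers rates of this kind to the integrated empirical process, with weighted Brownian bridges and a weighted Kiefer process as the approximating processes.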
16.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes, such as long-term toxicity, in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one of the objectives being to estimate cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
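In its simplest one-sample form, current status data consist, for each surveyed survivor i, of the examination time c_i and the indicator \delta_i = \mathbf{1}\{T_i \le c_i\} of whether the event (here, the toxicity) has already occurred, so the likelihood for the cumulative incidence function F is

\[
L(F) = \prod_{i=1}^{n} F(c_i)^{\delta_i}\,\big(1 - F(c_i)\big)^{1 - \delta_i},
\]

from which fixed-term (e.g. 5- or 10-year) rates can be estimated; the procedures in this entry build on data of this type.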
17.
Jigao Yan, Communications in Statistics - Theory and Methods, 2013, 42(20): 5074-5098
In this paper, complete convergence for maximal weighted sums of extended negatively dependent (END) random variables is investigated. Some sufficient conditions for complete convergence and some applications to a nonparametric model are provided. The results obtained generalize and improve the corresponding results of Wang et al. (2014b) and Shen, Xue, and Wang (2017).
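For reference, random variables X_1, ..., X_n are extended negatively dependent (END) if there exists a constant M > 0 such that, for all real x_1, ..., x_n,

\[
P\Big(\bigcap_{i=1}^{n} \{X_i > x_i\}\Big) \le M \prod_{i=1}^{n} P(X_i > x_i)
\quad\text{and}\quad
P\Big(\bigcap_{i=1}^{n} \{X_i \le x_i\}\Big) \le M \prod_{i=1}^{n} P(X_i \le x_i).
\]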
18.
Zero-inflated Poisson mixed regression models are popular for analyzing clustered count data with excess zeros. Before applying these models, it is essential to examine whether the adjustment for zero outcomes is necessary. The existing literature, however, has focused only on score tests for assessing the suitability of zero-inflated models for correlated count data. In view of the observed bias and non-optimal size of score tests, other alternatives deserve further investigation. This article explores the use of the null Wald and likelihood ratio tests for zero-inflation in correlated count data. Our simulation study shows that both the null Wald and likelihood ratio tests outperform the score test of Xiang et al. (2006) in terms of statistical power, notwithstanding the computational convenience of the score test. A bootstrap null Wald statistic is also proposed, which improves the size and power of the test.
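The basic zero-inflated Poisson model behind these tests mixes a point mass at zero with a Poisson component,

\[
P(Y = 0) = \pi + (1 - \pi)\, e^{-\lambda},
\qquad
P(Y = y) = (1 - \pi)\, \frac{e^{-\lambda} \lambda^{y}}{y!}, \quad y = 1, 2, \dots,
\]

with covariates and random effects entering through \pi and \lambda in the mixed regression version; testing for zero-inflation then amounts to testing H_0: \pi = 0.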
19.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is introduced to deflate test statistics that are large only because of small gene-expression standard errors. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in modified t-type test statistics, and we use simulation studies to investigate the power and the control of the FDR of the considered methods.
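For reference, the SAM statistic for gene i adds a small positive constant s_0, the fudge factor, to the gene-specific standard error,

\[
d_i = \frac{\bar{x}_{i1} - \bar{x}_{i2}}{s_i + s_0},
\]

so that genes with very small s_i do not yield spuriously large statistics; the methods compared in this article differ in how s_0 is chosen.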
20.
In this research, multiple dependent state and repetitive group sampling are used to design a variable sampling plan based on one-sided process capability indices, which considers the quality of the current lot as well as the quality of the preceding lots. The sample size and critical values of the proposed plan are determined by minimizing the average sample number while satisfying the producer's risk and the consumer's risk at the corresponding quality levels. In addition, comparisons are made with existing sampling plans (Pearn and Wu, 2006a; Yen et al., 2015) in terms of the average sample number and the operating characteristic curve. Finally, an example illustrates the proposed plan.
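For reference, the one-sided process capability indices on which the plan is based take their usual form

\[
C_{PU} = \frac{USL - \mu}{3\sigma},
\qquad
C_{PL} = \frac{\mu - LSL}{3\sigma},
\]

where USL and LSL denote the upper and lower specification limits; a lot is accepted when the estimated index exceeds a critical value chosen, together with the sample size, so that the producer's and consumer's risks are met at the stated quality levels.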