Found 20 similar documents (search time: 187 ms)
1.
In this article, we consider two shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common choice for the frailty distribution. To compare results with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit both models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using Bayesian model selection criteria, and a well-fitted model is suggested for the acute leukemia data.
2.
Qian Chen, Communications in Statistics - Simulation and Computation, 2013, 42(4):789-804
We consider the relative merits of various saddlepoint approximations for the cumulative distribution function (cdf) of a statistic with a possibly non-normal limit distribution. In addition to the usual Lugannani-Rice approximation, we also consider approximations based on higher-order expansions, including the case where the base distribution for the approximation is taken to be non-normal. This extends earlier work by Wood et al. (1993). These approximations are applied to the distribution of the Anderson-Darling test statistic. While these generalizations perform well in the middle of the distribution's support, a conventional normal-based Lugannani-Rice approximation (Giles, 2001) is superior for conventional critical regions.
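To make the Lugannani-Rice approximation concrete, the sketch below applies it to a toy case with a known closed-form cumulant generating function: a sum of n iid Exp(1) variables. The function name and the choice of example distribution are ours, not from the paper, which applies the approximation to the Anderson-Darling statistic.

```python
import math

def lugannani_rice_exp_sum(n, x):
    """Lugannani-Rice saddlepoint approximation to P(S_n <= x), where S_n is
    a sum of n iid Exp(1) variables with CGF K(t) = -n*log(1-t), t < 1.
    Illustrative sketch only; x must differ from the mean n (the formula
    has a removable singularity at the mean)."""
    s = 1.0 - n / x                       # saddlepoint: solve K'(s) = n/(1-s) = x
    K = -n * math.log(1.0 - s)            # K(s)
    K2 = n / (1.0 - s) ** 2               # K''(s)
    w = math.copysign(math.sqrt(2.0 * (s * x - K)), s)
    u = s * math.sqrt(K2)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))          # standard normal cdf
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    return Phi + phi * (1.0 / w - 1.0 / u)
```

Because S_n is Gamma(n, 1), the approximation can be checked against the exact cdf via the Poisson identity P(Gamma(n) <= x) = P(Poisson(x) >= n); the agreement is typically to several decimal places even for moderate n.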
3.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics - Simulation and Computation, 2013, 42(4):855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion matrix are inapplicable. Boosting seems particularly well suited to binary variables. We present three methods, two of which combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with small misclassification error used to validate the proposed variable selection method. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
4.
Hidetoshi Murakami, Communications in Statistics - Simulation and Computation, 2013, 42(10):2214-2219
Approximating the distribution function of a test statistic is extremely important in statistics. The standard and higher-order saddlepoint approximations are considered in the tails of the limiting distribution of the modified Anderson-Darling test. The saddlepoint approximations are compared with the approximation of Sinclair et al. (1990) for the upper tail area. An empirical function is derived to estimate the critical values of a saddlepoint approximation.
5.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3):405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
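The abstract does not spell out its AF adaptation, but the standard epidemiological attributable fraction it builds on (Levin's formula) is simple to compute. The sketch below uses the odds ratio as an approximation to the relative risk, which is reasonable for rare outcomes; the function name is our own.

```python
def attributable_fraction(prevalence, odds_ratio):
    """Levin's population attributable fraction,
        AF = p*(OR - 1) / (1 + p*(OR - 1)),
    where p is the prevalence of the risk factor and the odds ratio (OR)
    stands in for the relative risk (a common approximation for rare
    outcomes).  Illustrative sketch, not the paper's twin-data adaptation."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)
```

For example, a risk factor carried by half the population with OR = 3 yields AF = 0.5, i.e. half the disease burden is attributable to the factor under this approximation; OR = 1 gives AF = 0, as expected for no association.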
6.
This article extends the correlation methodology developed by Chinchilli et al. (2005) for the 2 × 2 crossover design to more complex crossover designs for clinical trials. We describe how the methodology can be adapted to a general type of two-treatment crossover design that includes at least two sequences, at least two treatment periods, or both. We then derive the asymptotic theory for the corresponding correlation statistics, investigate the statistical accuracy of the estimators via bootstrap analyses, and demonstrate their use with two real data examples.
7.
This paper applies a Bayesian model to a clinical trial aimed at identifying a more effective treatment to lower mortality rates, and consequently increase survival times, among patients with lung cancer. In that study, Qian et al. [13] sought to determine whether a Weibull survival model can be used to decide whether to stop a clinical trial. The traditional Gibbs sampler was used to estimate the model parameters. This paper proposes to use the independent steady-state Gibbs sampling (ISSGS) approach, introduced by Dunbar et al. [3], to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in this application.
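For readers unfamiliar with the "traditional Gibbs sampler" the paper improves upon, the sketch below shows the mechanism on a toy target, a standard bivariate normal with correlation rho, by alternating draws from the two full conditionals. This is not the Weibull survival model of Qian et al., just an illustration of the sampling scheme; all names are our own.

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn=2000, seed=42):
    """Toy Gibbs sampler for a standard bivariate normal with correlation rho.
    Each sweep draws x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2),
    the full conditionals of the target.  Returns post-burn-in x-samples."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    xs = []
    for i in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if i >= burn:
            xs.append(x)
    return xs

samples = gibbs_bivariate_normal(0.6)
mean = sum(samples) / len(samples)
```

At stationarity the marginal of x is N(0, 1), so the sample mean should be near 0 and the sample variance near 1; the successive-sweep autocorrelation (about rho²) is exactly the kind of dependence that ISSGS-style approaches aim to reduce.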
8.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7):1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries in the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow satisfactory calibration to be achieved even for “badly behaved” data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
9.
Communications in Statistics - Theory and Methods, 2013, 42(9):1725-1735
The study of multivariate distributions of order k, two of which are the multivariate negative binomial of order k and the multinomial of the same order, was introduced in Philippou et al. (Philippou, A. N., Antzoulakos, D. L., Tripsiannis, G. A. (1988). Multivariate distributions of order k. Statistics and Probability Letters 7(3):207–216) and Philippou et al. (Philippou, A. N., Antzoulakos, D. L., Tripsiannis, G. A. (1990). Multivariate distributions of order k, part II. Statistics and Probability Letters 10(1):29–35). Recently, an order-k (or cluster) generalized negative binomial distribution and a multivariate negative binomial distribution were derived in Sen and Jain (Sen, K., Jain, R. (1996). Cluster generalized negative binomial distribution. In: Borthakur, A. C., et al., Eds., Probability Models and Statistics: A. J. Medhi Festschrift, on the Occasion of his 70th Birthday. New Age International Publishers: New Delhi, 227–241) and Sen and Jain (Sen, K., Jain, R. (1997). A multivariate generalized Polya-Eggenberger probability model: first passage approach. Communications in Statistics-Theory and Methods 26:871–884), respectively. In this paper, all four distributions are generalized to a multivariate generalized negative binomial distribution of order k by means of an appropriate sampling scheme and a first passage event. This new distribution includes as special cases several known and new multivariate distributions of order k, and gives rise in the limit to multivariate generalized logarithmic, Poisson, and Borel-Tanner distributions of the same order. Applications are indicated.
10.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weighting method provides better clustering performance in terms of the error rate criterion. Furthermore, the complexity of the proposed SVD approach is less than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
11.
Simard et al. [16,17] proposed a transformation distance called the “tangent distance” (TD), which makes pattern recognition more efficient. The key idea is to construct a distance measure that is invariant with respect to some chosen transformations. In this research, we provide a method using an adaptive TD, inspired by “discriminant adaptive nearest neighbor” [7]. This method is relatively simple compared with many more complicated ones. A real handwriting recognition data set is used to illustrate our new method. Our results demonstrate that the proposed method gives lower classification error rates than standard implementations of neural networks and support vector machines, and is as good as several other, more complicated approaches.
12.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17):3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with very interesting findings.
13.
Suppose that some information is available for a particular factor. The experimenter may apply the technique of foldover to isolate that factor and the two-factor interactions involving it. In fact, under some conditions this can be done by the method of semi-folding. We discuss this property in detail in this article. Furthermore, we use the computer to search for the corresponding optimal semi-folding design for the 2^(k-p) designs tabulated in Chen et al. (1993).
14.
The main purpose of this paper is to investigate the strong approximation of the integrated empirical process. More precisely, we obtain the exact rate of the approximations by a sequence of weighted Brownian bridges and a weighted Kiefer process. Our arguments are based in part on the results of Komlós et al. (1975). Applications include two-sample testing procedures together with change-point problems. We also consider the strong approximation of the integrated empirical process when the parameters are estimated. Finally, we study the behavior of the self-intersection local time of the partial-sum process representation of the integrated empirical process.
15.
Samridhi Mehta, Communications in Statistics - Theory and Methods, 2018, 47(16):4021-4028
Sihm et al. (2016) proposed an unrelated question binary optional randomized response technique (RRT) model for estimating the proportion of a population that possesses a sensitive characteristic and the sensitivity level of the question. In our work, a decision-theoretic approach is followed to obtain Bayes estimates of the two parameters, along with their corresponding minimal Bayes posterior expected losses (BPEL), using a beta prior and the squared error loss function (SELF). Relative losses are also examined to compare the performance of the Bayes estimates with that of the classical estimates obtained by Sihm et al. (2016). The results are illustrated with real survey data using a non-informative prior.
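As background for the model being estimated, the sketch below shows the classical (Greenberg-type) moment estimator for the basic unrelated-question RRT design: with probability p the respondent answers the sensitive question, otherwise an innocuous question with known "yes" prevalence. This is a simplification of, not the exact Sihm et al. (2016) optional model, and the names are our own.

```python
def unrelated_question_estimate(yes_prop, p_design, pi_unrelated):
    """Moment estimator of the sensitive proportion pi under the classical
    unrelated-question RRT design.  The observed 'yes' probability satisfies
        lambda = p_design * pi + (1 - p_design) * pi_unrelated,
    so solving for pi gives the estimator below.  Sketch of the basic
    Greenberg-type design, not the full optional RRT model of the paper."""
    return (yes_prop - (1.0 - p_design) * pi_unrelated) / p_design
```

For instance, with design probability p = 0.7, innocuous prevalence 0.5, and true sensitive proportion 0.3, the expected observed "yes" proportion is 0.7(0.3) + 0.3(0.5) = 0.36, and plugging 0.36 back in recovers 0.3 exactly.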
16.
M. Pilar Alonso, Asunción Beamonte, Manuel Salvador, Journal of Applied Statistics, 2015, 42(5):1043-1063
In this paper a methodology for the delineation of local labour markets (LLMs) using evolutionary algorithms is proposed. This procedure, based on that of Flórez-Revuelta et al. [13,14], introduces three modifications. First, initial groups of municipalities with a minimum size requirement are built using the travel time between them. Second, a not-fully-random initiation algorithm is proposed. Third, a contiguity step is implemented as the final stage of the procedure. These modifications significantly decrease the computational time of the algorithm (by up to 99%) without any deterioration in the quality of the solutions. The optimization algorithm may return a set of potential solutions with very similar objective-function values, which would lead to different partitions, both in the number of markets and in their composition. To capture their common aspects, an algorithm based on a k-means-type cluster partitioning is presented. This stage of the procedure also provides a ranking of LLM foci useful for planners and administrations in decision-making on issues related to labour activities. Finally, to evaluate the performance of the algorithm, a toy example with artificial data is analysed. The full methodology is illustrated with a real commuting data set for the region of Aragón (Spain).
17.
For each positive integer k, a set of k-principal points of a distribution is the set of k points that optimally represents the distribution in terms of mean squared distance. However, an explicit form of the k-principal points is often difficult to obtain. Hence a theorem established by Tarpey et al. (1995) has been influential in the literature: when the distribution is elliptically symmetric, any set of k-principal points lies in the linear subspace spanned by some principal eigenvectors of the covariance matrix. This theorem is called a “principal subspace theorem.” Recently, Yamamoto and Shinozaki (2000b) derived a principal subspace theorem for 2-principal points of a location mixture of spherically symmetric distributions. In their article, the mixture ratio was set to be equal. This article derives a further result by considering a location mixture with unequal mixture ratios.
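One case where principal points are known explicitly is k = 2 for N(0,1): the 2-principal points are ±√(2/π) ≈ ±0.798. The sketch below recovers them numerically by running Lloyd's (k-means) iteration on a fine grid discretisation of the density; the function and its parameters are our own illustrative choices.

```python
import math

def two_principal_points_normal(n_grid=20001, span=8.0, iters=50):
    """Approximate the 2-principal points of N(0,1), i.e. the two points
    minimising E[min_j (X - y_j)^2], by Lloyd's iteration on a fine grid
    weighted by the (unnormalised) normal density.  Theory gives the exact
    answer +/- sqrt(2/pi)."""
    xs = [-span / 2 + span * i / (n_grid - 1) for i in range(n_grid)]
    ws = [math.exp(-x * x / 2) for x in xs]      # unnormalised N(0,1) density
    a, b = -1.0, 1.0                             # initial centres
    for _ in range(iters):
        # assign each grid point to its nearest centre ...
        left_w = left_s = right_w = right_s = 0.0
        for x, w in zip(xs, ws):
            if abs(x - a) <= abs(x - b):
                left_w += w; left_s += w * x
            else:
                right_w += w; right_s += w * x
        # ... then move each centre to the weighted mean of its cell
        a, b = left_s / left_w, right_s / right_w
    return a, b
```

The result also illustrates the principal subspace theorem in its simplest form: both points lie on the (one-dimensional) principal axis, symmetric about the mean.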
18.
This paper applies stratified random sampling using Neyman allocation to the Mangat et al. (1992) unrelated question randomized response (RR) strategy for both completely truthful reporting and less than completely truthful reporting. It is shown that, for the prior information given, our new model is more efficient in terms of variance (in the case of completely truthful reporting) and mean square error (in the case of less than completely truthful reporting) than the model of Kim and Elam (2007). Numerical illustrations and graphs are also given in support of the present study.
19.
Best et al. (Best, D. J., Rayner, J. C. W., O'Sullivan, M. G. (2000). Product maps for consumer categorical data. Food Quality and Preference, 11:91–97) suggested tests based on partitioning the X² statistic into relevant components of location, dispersion, and skewness effects for testing the equality of each effect for ordinal preference data. It is known that the chi-square approximation requires large category counts. For this reason, we investigate a permutation approach for these statistics and compare the performance of the tests in a simulation study. In addition, the permutation approach can be used to produce a product map that classifies the products. We illustrate the approach with a real data example.
20.
Antonello D'Ambra, Communications in Statistics - Theory and Methods, 2014, 43(6):1209-1221
Non Symmetric Correspondence Analysis (NSCA) (D'Ambra and Lauro, 1989) is a useful technique for analyzing a two-way contingency table. The key difference between the symmetrical and non-symmetrical versions of correspondence analysis rests on the measure of association used to quantify the relationship between the variables. For a two-way, or multi-way, contingency table, the Pearson chi-squared statistic is commonly used when the categorical variables can be assumed to be symmetrically related. However, for a two-way table, one variable may be treated as a predictor and the second as a response. For such a variable structure, the Pearson chi-squared statistic is not an appropriate measure of association. Instead, one may consider the Goodman-Kruskal tau index. When there are more than two cross-classified variables, multivariate versions of the Goodman-Kruskal tau index can be considered, including Marcotorchino's index (Marcotorchino, 1985) and the Gray-Williams index (Gray and Williams, 1975). In this article, Multiple Non Symmetric Correspondence Analysis (MNSCA), along with the decomposition of the Gray-Williams tau into main effects and interaction (D'Ambra et al., 2011), is used to evaluate the innovative performance of manufacturing enterprises in Campania. Finally, to identify statistically significant categories, confidence ellipses are proposed for MNSCA, starting from the ellipses suggested by Beh (2010) for the symmetrical analysis.
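The asymmetric association measure underlying NSCA, the two-way Goodman-Kruskal tau, is easy to compute directly from a contingency table. The sketch below (names ours) takes the response on the rows and the predictor on the columns and returns the proportional reduction in prediction error.

```python
def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau for a two-way contingency table with the RESPONSE
    variable on the rows and the PREDICTOR on the columns:
        tau = (sum_ij p_ij^2 / p_.j  -  sum_i p_i.^2) / (1 - sum_i p_i.^2),
    the proportional reduction in error when predicting the row category
    given the column category.  Plain sketch of the measure, not the
    multivariate (MNSCA) decomposition of the article."""
    n = float(sum(sum(row) for row in table))
    p = [[c / n for c in row] for row in table]
    row_tot = [sum(row) for row in p]                 # marginal p_i.
    col_tot = [sum(col) for col in zip(*p)]           # marginal p_.j
    num = sum(p[i][j] ** 2 / col_tot[j]
              for i in range(len(p)) for j in range(len(p[0]))
              if col_tot[j] > 0) - sum(r ** 2 for r in row_tot)
    den = 1.0 - sum(r ** 2 for r in row_tot)
    return num / den
```

Tau is 0 under independence and 1 when the column category perfectly determines the row category, which is why it replaces the symmetric chi-squared statistic in the predictor/response setting the abstract describes.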