1.
Cristine C. Rom 《Serials Review》2013,39(2):65-69
Abstract The company now known as DC began as National Periodicals, publishing anthology series such as Adventure Comics, More Fun Comics, and Detective Comics. Superman, the first true “super-hero,” appeared on the scene in a brief story in Action Comics no. 1 (June 1938). Batman appeared not long after, in the pages of Detective Comics no. 27 (May 1939), and the world was never the same. In the late 1940s, National absorbed its competitor All-American Comics (which published such series as Green Lantern, Aquaman, and Green Arrow) and changed the company's name to Detective Comics, “DC” for short. The merger made DC the largest comic book company until the 1950s, when interest in the medium dried up and Dell, which at that time published Walt Disney's comic books, took over the top spot.
2.
Tahani Coolen-Maturi 《Journal of applied statistics》2014,41(8):1721-1745
There are many situations where n objects are ranked by b > 2 independent sources or observers and in which the interest is focused on agreement on the top rankings. Kendall's coefficient of concordance [10] assigns equal weights to all rankings. In this paper, a new coefficient of concordance is introduced which is more sensitive to agreement on the top rankings. The limiting distribution of the new concordance coefficient under the null hypothesis of no association among the rankings is presented, and a summary of the exact and approximate quantiles for this coefficient is provided. A simulation study is carried out to compare the performance of Kendall's, the top-down and the new concordance coefficients in detecting the agreement on the top rankings. Finally, examples are given for illustration purposes, including a real data set from financial market indices.
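The baseline the paper generalizes, Kendall's W, is easy to compute directly. The sketch below is the classical equal-weights coefficient (ignoring ties), not the paper's new top-weighted coefficient; the function name is illustrative:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for b observers ranking n
    objects (classical equal-weights version, no tie correction)."""
    ranks = np.asarray(ranks, dtype=float)
    b, n = ranks.shape
    col_sums = ranks.sum(axis=0)                   # total rank given to each object
    s = ((col_sums - col_sums.mean()) ** 2).sum()  # spread of the rank totals
    return 12.0 * s / (b ** 2 * (n ** 3 - n))

print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0: full agreement
```

W ranges from 0 (no agreement) to 1 (identical rankings); the new coefficient in the paper reweights this so that agreement among the top ranks counts more.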
3.
The purpose of this article is to explain cross-validation and describe its use in regression. Because replicability analyses are not typically employed in studies, this is a topic with which many researchers may not be familiar. As a result, researchers may not understand how to conduct cross-validation in order to evaluate the replicability of their data. This article not only explains the purpose of cross-validation, but also uses the widely available Holzinger and Swineford (1939) dataset as a heuristic example to concretely demonstrate its use. By incorporating multiple tables and examples of SPSS syntax and output, the reader is provided with additional visual examples in order to further clarify the steps involved in conducting cross-validation. A brief discussion of the limitations of cross-validation is also included. After reading this article, the reader should have a clear understanding of cross-validation, including when it is appropriate to use, and how it can be used to evaluate replicability in regression.
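The article demonstrates the procedure in SPSS; the same split-sample logic can be sketched in a few lines of Python. The data below are simulated as a hypothetical stand-in for the Holzinger and Swineford variables (the coefficients are illustrative only):

```python
import numpy as np

# Simulated data standing in for the Holzinger-Swineford measures (illustrative).
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

def fit_ols(X, y):
    """Ordinary least squares with an intercept column."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def r_squared(beta, X, y):
    """R^2 of previously fitted coefficients evaluated on (X, y)."""
    yhat = np.column_stack([np.ones(len(X)), X]) @ beta
    return 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Split-sample cross-validation: calibrate on one half, validate on the other.
half = n // 2
Xa, ya, Xb, yb = X[:half], y[:half], X[half:], y[half:]
beta_a = fit_ols(Xa, ya)
print("calibration R2:", round(r_squared(beta_a, Xa, ya), 3))
print("cross-validated R2:", round(r_squared(beta_a, Xb, yb), 3))  # usually shrinks
```

The gap between the calibration R² and the cross-validated R² (shrinkage) is the replicability evidence the article discusses.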
4.
In this paper, a new survival cure rate model is introduced, considering the Yule–Simon distribution [12] to model the number of concurrent causes. We study some properties of this distribution and the model arising when the distribution of the competing causes is the Weibull model. We call this distribution the Weibull–Yule–Simon distribution. Maximum likelihood estimation is conducted for the model parameters. A small-scale simulation study indicates satisfactory parameter recovery by the estimation approach. Results are applied to a real data set (melanoma), illustrating that the proposed model can outperform traditional alternative models in terms of model fit.
5.
Antonello D'Ambra 《Communications in Statistics - Theory and Methods》2014,43(6):1209-1221
Non-Symmetric Correspondence Analysis (NSCA) (D'Ambra and Lauro, 1989) is a useful technique for analyzing a two-way contingency table. The key difference between the symmetrical and non-symmetrical versions of correspondence analysis rests on the measure of association used to quantify the relationship between the variables. For a two-way, or multi-way, contingency table, the Pearson chi-squared statistic is commonly used when it can be assumed that the categorical variables are symmetrically related. However, for a two-way table, one variable may be treated as a predictor and the second as a response. For such a variable structure, the Pearson chi-squared statistic is not an appropriate measure of association; instead, one may consider the Goodman-Kruskal tau index. When there are more than two cross-classified variables, multivariate versions of the Goodman-Kruskal tau index can be considered, including Marcotorchino's index (Marcotorchino, 1985) and the Gray-Williams index (Gray and Williams, 1975). In this article, Multiple Non-Symmetric Correspondence Analysis (MNSCA), along with the decomposition of the Gray-Williams tau into main effects and interaction (D'Ambra et al., 2011), is used to evaluate the innovative performance of manufacturing enterprises in Campania. Finally, to identify statistically significant categories, confidence ellipses are proposed for MNSCA, starting from the ellipses suggested by Beh (2010) for the symmetrical analysis.
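For a single two-way table, the Goodman-Kruskal tau that underlies NSCA can be computed directly as a proportional reduction in prediction error. A minimal sketch, assuming rows index the predictor and columns the response (function name illustrative):

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau: proportional reduction in the error of predicting
    the column (response) category given the row (predictor) category."""
    N = np.asarray(table, dtype=float)
    n = N.sum()
    p_col = N.sum(axis=0) / n                       # marginal response distribution
    err_marginal = 1.0 - (p_col ** 2).sum()         # prediction error ignoring rows
    p_row = N.sum(axis=1) / n
    cond = N / N.sum(axis=1, keepdims=True)         # row-conditional distributions
    err_conditional = (p_row * (1.0 - (cond ** 2).sum(axis=1))).sum()
    return (err_marginal - err_conditional) / err_marginal

# Perfect prediction: each predictor category maps to one response category.
print(goodman_kruskal_tau([[10, 0], [0, 10]]))  # 1.0
```

Tau is 0 under independence and 1 when the rows predict the columns perfectly; NSCA decomposes exactly this quantity.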
6.
The authors derive the analytic expressions for the mean and variance of the log-likelihood ratio for testing equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that the two approximations and the original beta approximation of Neyman and Pearson (1931) are all accurate, and the gamma approximation is the most accurate.
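The gamma approximation rests on moment matching: given the analytic mean and variance of the statistic (the paper's formulae are not reproduced here), the shape and scale follow immediately. A minimal sketch with an illustrative function name:

```python
def gamma_moment_match(mean, var):
    """Moment-matched gamma approximation: shape k and scale theta chosen so
    that k * theta = mean and k * theta**2 = var."""
    theta = var / mean          # scale
    k = mean / theta            # shape
    return k, theta

k, theta = gamma_moment_match(3.0, 6.0)
print(k, theta)  # shape 1.5, scale 2.0
```

The chi-square approximation is the special case theta = 2 with the shape chosen to match the mean only.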
7.
Lindeman et al. [12] provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement the variance decomposition procedure advanced by Lindeman et al. [12]. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. [12] variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow for more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of our proposed technique against the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
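The averaging-over-orderings step can be sketched by brute force: for each of the p! orderings, record each predictor's increment to R² when it enters, then average. This is the Lindeman et al. decomposition in its simplest form (feasible only for small p; data and names below are illustrative):

```python
import itertools
import numpy as np

def lmg_importance(X, y):
    """Relative importance by averaging R^2 increments over all p! orderings
    (brute-force version of the Lindeman et al. variance decomposition)."""
    n, p = X.shape

    def r2(cols):
        if not cols:
            return 0.0
        Xd = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        res = y - Xd @ beta
        return 1.0 - (res ** 2).sum() / ((y - y.mean()) ** 2).sum()

    contrib = np.zeros(p)
    orders = list(itertools.permutations(range(p)))
    for order in orders:
        done = []
        for j in order:
            contrib[j] += r2(done + [j]) - r2(done)  # increment when j enters
            done.append(j)
    return contrib / len(orders)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=100)  # illustrative coefficients
imp = lmg_importance(X, y)
print(imp, imp.sum())  # the shares always sum to the full-model R^2
```

The per-ordering increments computed inside the loop are exactly the quantities that conditional dominance analysis inspects before they are averaged away.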
8.
《Communications in Statistics - Theory and Methods》2013,42(8):1309-1333
ABSTRACT The search for optimal non-parametric estimates of the cumulative distribution and hazard functions under order constraints inspired at least two classic papers in mathematical statistics: those of Kiefer and Wolfowitz [1] and Grenander [2], respectively. In both cases, either the greatest convex minorant or the least concave majorant played a fundamental role. Based on Kiefer and Wolfowitz's work, Wang [3,4] found asymptotically minimax estimates of the distribution function F and its cumulative hazard function Λ in the class of all increasing failure rate (IFR) and all increasing failure rate average (IFRA) distributions. In this paper, we prove limit theorems which extend Wang's asymptotic results to the mixed censorship/truncation model, as well as provide some other relevant results. The methods are illustrated on the Channing House data, originally analysed by Hyde [5,6].
9.
Coppi et al. [7] applied Yang and Wu's [20] idea to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initialization and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only solves the initialization problem but also obtains a good clustering result. Several numerical examples demonstrate the effectiveness and accuracy of the proposed clustering method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
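The possibilistic update at the heart of PkM drops the sum-to-one constraint: each point gets a typicality in (0, 1] for each cluster separately. The sketch below is a simplified possibilistic k-means for crisp vectors, not Coppi et al.'s LR-fuzzy version and not the paper's self-updating procedure; the scale heuristic and names are assumptions:

```python
import numpy as np

def pkm(X, k, m=2.0, iters=50, seed=0):
    """Simplified possibilistic k-means: typicalities u need not sum to one
    across clusters. eta is a per-cluster scale (heuristic choice here)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        eta = d2.mean(axis=0) + 1e-12                        # per-cluster scale
        u = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))    # typicalities in (0, 1]
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]         # weighted means
    return centers, u

# Two well-separated blobs (illustrative data).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(5.0, 0.5, (30, 2))])
centers, u = pkm(X, 2)
print(centers)
```

The initialization sensitivity the abstract mentions is visible here: unlike fuzzy k-means, possibilistic clusters can collapse onto each other for unlucky starts, which is what the paper's self-updating procedure is designed to avoid.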
10.
Sharma (1977) and Aggarwal et al. (2006) considered non-circular construction of first- and second-order balanced repeated measurements designs. Sharma et al. (2002) constructed circular first- and second-order balanced repeated measurements designs only for a class with parameters (v, p = 3n, n = v 2), and also showed its universal optimality. In this article, we consider circular construction of first- and second-order balanced repeated measurements designs and strongly balanced repeated measurements designs using the method of cyclic shifts. Some new circular designs with parameters (v, p, n) for the cases p = v, p < v, and p > v are given.
11.
Shuenn-Ren Cheng 《Communications in Statistics - Theory and Methods》2013,42(10):1553-1560
We observe X1,…,Xk, where Xi has density f(x, θi) possessing a monotone likelihood ratio. The best population corresponds to the largest θi. We select the population corresponding to the largest Xi. The goal is to attach the best possible p-value to the inference that the selected population has the uniquely largest θi. Gutmann and Maymin (1987) considered the location parameter case and derived the supremum of the error probability by conditioning on S, the index of the largest Xi. Using this conditioning approach, Kannan and Panchapakesan (2009) considered the problem for the gamma family. We consider here a unified approach to both the location and scale parameter cases, and obtain the supremum of the error probability without using conditioning.
12.
J. M. C. Santos Silva 《Communications in Statistics - Theory and Methods》2013,42(7):1243-1256
Recently, [1] proposed a dynamic measure based on differential entropy applied to the residual lifetime. This measure has been used for the classification and ordering of survival functions. More recently, [2] considered the problem of testing the monotonicity of this measure. We propose and study several kernel-type estimators of the entropy of residual life through the estimation of f(x) log f(x). These estimators can be applied to the classification and comparison of lifetime distributions.
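A plug-in version of the idea can be sketched as follows: estimate f with a Gaussian kernel density estimate, then integrate the entropy of the conditional density past t numerically. This is an assumed construction for illustration, not necessarily one of the paper's estimators; the bandwidth rule and names are choices of this sketch:

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule, written out to stay numpy-version independent
    return float(((y[1:] + y[:-1]) * 0.5 * np.diff(x)).sum())

def residual_entropy(data, t, grid_n=1000, bw=None):
    """Plug-in sketch of the residual entropy
    H(f; t) = -int_t^inf (f(x)/S(t)) log(f(x)/S(t)) dx,
    with a Gaussian kernel density estimate standing in for f."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    h = bw if bw is not None else 1.06 * data.std() * n ** (-0.2)  # Silverman-style
    xs = np.linspace(t, data.max() + 4.0 * h, grid_n)
    z = (xs[:, None] - data[None, :]) / h
    f = np.exp(-0.5 * z * z).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))
    S = _trapz(f, xs)                       # estimated survival mass beyond t
    g = np.clip(f / S, 1e-300, None)        # conditional (residual-life) density
    return -_trapz(g * np.log(g), xs)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
H = residual_entropy(x, t=x.min() - 1.0)
print(H)  # close to the N(0,1) entropy 0.5*log(2*pi*e), about 1.42
```

Evaluating H over a grid of t values gives the dynamic measure whose monotonicity [2] tests.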
13.
《Communications in Statistics - Theory and Methods》2013,42(9):1515-1529
ABSTRACT This paper develops corrected score tests for heteroskedastic t regression models, thus generalizing results by Cordeiro, Ferrari and Paula [1] and Cribari-Neto and Ferrari [2] for normal regression models and by Ferrari and Arellano-Valle [3] for homoskedastic t regression models. We present, in matrix notation, Bartlett-type correction formulae to improve score tests in this class of models. The corrected score statistics have a chi-squared distribution to order n^{-1}, where n is the sample size. We apply our main result to a few special models and present simulation results comparing the performance of the usual score tests and their corrected versions.
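Bartlett-type corrections of the Cordeiro-Ferrari kind adjust the score statistic by a quadratic polynomial in the statistic itself. The generic form is sketched below; the coefficients a, b, c are model-specific and come from matrix formulae of the kind the paper derives (not reproduced here, so the values used are purely illustrative):

```python
def bartlett_type_correct(S, a, b, c):
    """Bartlett-type corrected score statistic S* = S * (1 - (c + b*S + a*S**2)).
    a, b, c are the model-specific correction coefficients."""
    return S * (1.0 - (c + b * S + a * S * S))

# With zero coefficients the statistic is unchanged; a small positive c
# shrinks it, which is the typical direction of the correction.
print(bartlett_type_correct(2.0, 0.0, 0.0, 0.0))  # 2.0
print(bartlett_type_correct(2.0, 0.0, 0.0, 0.1))  # 1.8
```

The corrected statistic is then referred to the same chi-squared reference distribution, with the chi-squared approximation now accurate to order n^{-1}.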
14.
《Communications in Statistics - Theory and Methods》2013,42(5):931-942
In this paper, we consider the likelihood ratio test for a single model against the mixture of two known densities. We examine the accuracy of the limiting null distribution obtained by Titterington [1,2]. We then improve the limiting null distribution and confirm its usefulness by a simulation study.
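With both component densities known, the test statistic itself is simple: only the mixing weight p is free under the alternative. A sketch with normal components (the components, grid profiling, and names are illustrative choices, not the paper's setup):

```python
import numpy as np

def norm_pdf(x, mu, sd):
    # Gaussian density, written out to keep the sketch dependency-free
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def lrt_mixture(x, f0, f1, grid=None):
    """2 * log likelihood ratio for H0: density f0 against
    H1: (1-p) f0 + p f1 with f0, f1 known, profiling p over a grid."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    ll0 = np.log(f0(x)).sum()
    ll1 = max(np.log((1.0 - p) * f0(x) + p * f1(x)).sum() for p in grid)
    return 2.0 * (ll1 - ll0)

rng = np.random.default_rng(4)
f0 = lambda x: norm_pdf(x, 0.0, 1.0)
f1 = lambda x: norm_pdf(x, 3.0, 1.0)
s0 = lrt_mixture(rng.normal(0.0, 1.0, 200), f0, f1)  # data from the null
s1 = lrt_mixture(rng.normal(3.0, 1.0, 200), f0, f1)  # data from the alternative
print(s0, s1)
```

Because p = 0 lies on the boundary of the parameter space, the null distribution of this statistic is non-standard, which is exactly why the limiting approximation of Titterington and its refinement matter.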
15.
《Communications in Statistics - Theory and Methods》2013,42(11):2227-2244
Abstract Monotone failure rate models [Barlow, R. E., Marshall, A. W., Proschan, F. (1963). Properties of probability distributions with monotone failure rate. Annals of Mathematical Statistics 34:375–389; Barlow, R. E., Proschan, F. (1965). Mathematical Theory of Reliability. New York: John Wiley & Sons; Barlow, R. E., Proschan, F. (1966a). Tolerance and confidence limits for classes of distributions based on failure rate. Annals of Mathematical Statistics 37(6):1593–1601; Barlow, R. E., Proschan, F. (1966b). Inequalities for linear combinations of order statistics from restricted families. Annals of Mathematical Statistics 37(6):1574–1592; Barlow, R. E., Proschan, F. (1975). Statistical Theory of Reliability and Life Testing. New York: Holt, Rinehart and Winston, Inc.] have become one of the most important models of failure time for reliability practitioners to consider and use. The above authors also developed models and bounds for monotone increasing failure rates (IFR) and for monotone decreasing failure rates (DFR). The IFR models and bounds appear to be especially useful for describing and bounding the hazard of aging. This article extends a new model for time to failure based on the log odds rate [Zimmer, W. J., Wang, Y., Pathak, P. K. (1998). Log-odds rate and monotone log-odds rate distributions. Journal of Quality Technology 30(4):376–385.], which is comparable to the model based on the failure rate. It is shown that in the case of increasing log odds rate (ILOR) in terms of log time (ln t), the model is less stringent than the IFR model for aging. The characterization of distributions of failure time by log odds rate is also derived. It is shown that the logistic distribution has the property of constant log odds rate over time and that the log logistic distribution has the property of constant log odds rate with respect to ln t.
Some other properties of ILOR distributions are presented, and bounds based on the relationship to the log logistic distribution are provided for distributions which are ILOR with respect to ln t. Motivational examples are provided. The ILOR bounds are compared to the more stringent bounds based on the IFR model. Bounds on system reliability are also provided for certain systems.
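The constant log odds rate property of the logistic distribution is easy to verify numerically: for F(t) = 1/(1 + exp(-(t - μ)/s)) the log odds log(F/(1 - F)) equals (t - μ)/s exactly, so its rate of change in t is the constant 1/s. A minimal check (parameter values illustrative):

```python
import numpy as np

# Logistic CDF with location mu and scale s; its log odds is (t - mu)/s,
# so the log odds rate d/dt log(F/(1-F)) is the constant 1/s.
mu, s = 0.0, 2.0
t = np.linspace(-3.0, 3.0, 7)
F = 1.0 / (1.0 + np.exp(-(t - mu) / s))
log_odds = np.log(F / (1.0 - F))
rates = np.diff(log_odds) / np.diff(t)   # difference quotient of the log odds
print(rates)                             # every entry equals 1/s = 0.5
```

Replacing t by ln t in the same computation gives the corresponding constant log odds rate property of the log logistic distribution.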
16.
In Bayesian inference it is often desirable to have a posterior density reflecting mainly the information from the sample data. To achieve this purpose it is important to employ prior densities which add little information to the sample. Many such prior densities exist in the literature, for example, Jeffreys (1967), Lindley (1956, 1961), Hartigan (1964), Bernardo (1979), Zellner (1984), and Tibshirani (1989). In the present article, we compare the posterior densities of the reliability function obtained by using the Jeffreys, maximal data information (Zellner, 1984), Tibshirani, and reference priors for the reliability function R(t) in a Weibull distribution.
17.
Abstract A generalization of Chauvenet's test (see Bol'shev, L. N. 1969. On tests for rejecting outlying observations. Trudy In-ta prikladnoi Mat. Tblissi Gosudart. univ. 2:159–177. (In Russian); Voinov, V. G., Nikulin, M. N. 1996. Unbiased Estimators and Their Applications. Vol. 2. Kluwer Academic Publishers.), suited to the problem of detecting r outliers in a univariate data set, is proposed. In the exponential case, Chauvenet's test can be used. Various modifications of this test, depending on the estimation method used (MLE or MVUE), were considered by Bol'shev, by Ibrakimov and Khalfina (Ibrakimov, I. A., Khalfina 1978. Some asymptotic results concerning the Chauvenet test. Teor. Veroyatnost. i Primenen. 23(3):593–597.), and by Greenwood and Nikulin (Greenwood, P. E., Nikulin, M. S. 1996. A Guide to Chi-Squared Testing. New York: John Wiley and Sons, Inc.). Although procedures for testing one outlier in the exponential model have been investigated by a number of authors, including Chikkagoudar and Kunchur (Chikkagoudar, M. S., Kunchur, S. H. 1983. Distribution of test statistics for multiple outliers in exponential samples. Comm. Stat. Theory and Meth. 12:2127–2142.), Lewis and Fieller (Lewis, T., Fieller, N. R. J. 1979. A recursive algorithm for null distributions for outliers: I. Gamma samples. Technometrics 21:371–376.), Likes (Likes, J. 1966. Distribution of Dixon's statistics in the case of an exponential population. Metrika 11:46–54.), and Kabe (Kabe, D. G. 1970. Testing outliers from an exponential population. Metrika 15:15–18.), only two types of statistics for testing multiple outliers exist. The first is Dixon's, while the second is based on the ratio of the sum of the observations suspected to be outliers to the sum of all observations of the sample. In fact, most of these authors have considered the general case of the gamma model, and the results for the exponential model are given as a special case.
The object of the present communication is to focus on alternative models, namely slippage alternatives (see Barnett, V., Lewis, T. 1978. Outliers in Statistical Data. New York: John Wiley and Sons, Inc.), in exponential samples. We propose a statistic different from the well-known Dixon statistic Dr to test for multiple outliers. The distribution of the test based on this new statistic under slippage alternatives is obtained, and hence tables of critical values are given for various n (the sample size) and r (the number of outliers). The power of the new test is also calculated and compared to that of the test based on Dixon's statistic (Chikkagoudar, M. S., Kunchur, S. H. 1983. Distribution of test statistics for multiple outliers in exponential samples. Comm. Stat. Theory and Meth. 12:2127–2142.). The power of the test based on the new statistic is greater than that of the test based on Dixon's statistic.
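The second type of statistic described above, the ratio of the suspected observations' sum to the total sum, is simple to compute; the critical values must of course come from tables such as those the paper provides. A minimal sketch with an illustrative function name and data:

```python
import numpy as np

def upper_outlier_ratio(x, r):
    """Sum-ratio statistic for r suspected upper outliers in an exponential
    sample: the sum of the r largest observations over the total sum."""
    x = np.sort(np.asarray(x, dtype=float))
    return x[-r:].sum() / x.sum()

# One suspected outlier (10.0) carries half of the total mass of the sample.
print(upper_outlier_ratio([1.0, 2.0, 3.0, 4.0, 10.0], 1))  # 0.5
```

Large values of the ratio support the presence of r upper outliers; Dixon-type statistics instead compare gaps between order statistics.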
18.
Czesław Stępniak 《Communications in Statistics - Theory and Methods》2013,42(13):2405-2412
Canonical form plays a role in linear models similar to that of spectral decomposition in matrix analysis. Let X = (X1,…, Xn)′ be a random vector with expectation Aβ and variance–covariance matrix σ²V, where V is positive definite, and let rank(A) = r. Then there exists a nonsingular linear transformation from X to T = (T1,…, Tn)′ such that ETi = ηi for i = 1,…, r and zero for i > r, while cov(Ti, Tj) = δij σ². This canonical form, introduced by Kołodziejczyk (1935), was used, among others, by Scheffé (1959) and by Lehmann (1959, 1986). This technique is extended here to arbitrary (possibly singular) V and to simultaneous canonization of two models of this type.
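For the special case V = I, the canonical transformation can be realized with a complete QR decomposition of A: the orthogonal factor Q maps X to T = Q′X, whose expectation vanishes beyond the first r coordinates while the spherical covariance is preserved. A numerical sketch with illustrative dimensions:

```python
import numpy as np

# Canonical form when V = I: take a complete QR decomposition A = Q [R; 0].
# Then T = Q'X has E T_i = (R beta)_i for i <= r and 0 for i > r, and
# cov(T) = sigma^2 I is preserved because Q is orthogonal.
rng = np.random.default_rng(2)
n, r = 5, 2                       # illustrative dimensions
A = rng.normal(size=(n, r))       # full column rank with probability 1
beta = np.array([1.0, -2.0])
Q, R = np.linalg.qr(A, mode='complete')
ET = Q.T @ (A @ beta)             # expectation of T = Q'X
print(np.round(ET[r:], 10))       # zero beyond the first r coordinates
```

The paper's contribution is precisely the cases this sketch does not cover: singular V and the simultaneous canonization of two such models.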
19.
Rovshan Aliyev 《Communications in Statistics - Theory and Methods》2017,46(5):2571-2579
In the present study, the stochastic process X(t) describing an inventory model of type (s, S) with heavy-tailed distributed demands is considered. Asymptotic expansions at sufficiently large values of the parameter β = S − s for the ergodic distribution and the nth-order moment of the process X(t) are obtained, based on the main results of Teugels (1968) and Geluk and Frenk (2011).
20.
James R. Schott 《Communications in Statistics - Theory and Methods》2017,46(12):6112-6118
The allometric extension model is a multivariate regression model recently proposed by Tarpey and Ivey (2006). This model holds when the matrix of covariances between the variables in the response vector y and the variables in the vector of regressors x has a particular structure. In this paper, we consider tests of hypotheses for this structure when (y′, x′)′ has a multivariate normal distribution. In particular, we investigate the likelihood ratio test and a Wald test.