Similar Documents
20 similar documents retrieved.
1.
Abstract

The company now known as DC began as National Periodicals, publishing anthology series such as Adventure Comics, More Fun Comics, and Detective Comics. Superman, the first true “super-hero,” appeared on the scene in a brief story in Action Comics no. 1 (June 1938). Batman appeared not long after, in the pages of Detective Comics no. 27 (May 1939), and the world was never the same. In the late 1940s, National absorbed its competitor All-American Comics (which published such series as Green Lantern, Aquaman, and Green Arrow) and changed the company's name to Detective Comics, “DC” for short. The merger made DC the largest comic book company until the 1950s, when interest in the medium dried up and Dell, which at that time published Walt Disney's comic books, took over the top spot.

2.
There are many situations where n objects are ranked by b > 2 independent sources or observers and in which interest is focused on agreement on the top rankings. Kendall's coefficient of concordance (Kendall and Smith, 1939) assigns equal weights to all rankings. In this paper, a new coefficient of concordance is introduced which is more sensitive to agreement on the top rankings. The limiting distribution of the new concordance coefficient under the null hypothesis of no association among the rankings is presented, and a summary of the exact and approximate quantiles for this coefficient is provided. A simulation study is carried out to compare the performance of Kendall's, the top-down, and the new concordance coefficients in detecting agreement on the top rankings. Finally, examples are given for illustration purposes, including a real data set from financial market indices.
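As a point of reference for the coefficients compared in this abstract, the sketch below computes the classical (unweighted) Kendall's W for a small ranking matrix; the top-weighted coefficient proposed in the paper is not reproduced, and the example data are purely hypothetical.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for a (b x n) matrix of ranks.

    Each row is one observer's ranking of the n objects (1..n, no ties);
    W ranges from 0 (no agreement) to 1 (perfect agreement).
    """
    ranks = np.asarray(ranks, dtype=float)
    b, n = ranks.shape
    totals = ranks.sum(axis=0)                       # rank total of each object
    s = ((totals - totals.mean()) ** 2).sum()        # spread of the rank totals
    return 12.0 * s / (b ** 2 * (n ** 3 - n))

# Three observers ranking five objects, with strong agreement on the top picks.
example = [[1, 2, 3, 4, 5],
           [1, 3, 2, 5, 4],
           [2, 1, 3, 4, 5]]
print(round(kendalls_w(example), 3))                 # about 0.84
```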

3.
The purpose of this article is to explain cross-validation and describe its use in regression. Because replicability analyses are not typically employed in studies, this is a topic with which many researchers may not be familiar. As a result, researchers may not understand how to conduct cross-validation in order to evaluate the replicability of their data. This article not only explains the purpose of cross-validation, but also uses the widely available Holzinger and Swineford (1939) dataset as a heuristic example to concretely demonstrate its use. By incorporating multiple tables and examples of SPSS syntax and output, the reader is provided with additional visual examples in order to further clarify the steps involved in conducting cross-validation. A brief discussion of the limitations of cross-validation is also included. After reading this article, the reader should have a clear understanding of cross-validation, including when it is appropriate to use and how it can be used to evaluate replicability in regression.
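The article demonstrates the procedure with SPSS syntax; as a language-neutral sketch of the same idea (not the authors' exact steps, and with made-up data), the fragment below splits a sample in half, estimates the regression weights on one half, applies them to the hold-out half, and reports the cross-validated correlation between predicted and observed scores as the replicability evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 cases, 3 predictors, one outcome.
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=200)

# Randomly split the cases into a calibration half and a validation half.
idx = rng.permutation(len(y))
cal, val = idx[:100], idx[100:]

# Ordinary least squares on the calibration half (intercept added explicitly).
Xc = np.column_stack([np.ones(cal.size), X[cal]])
beta = np.linalg.lstsq(Xc, y[cal], rcond=None)[0]

# Apply the calibration weights to the validation half.
Xv = np.column_stack([np.ones(val.size), X[val]])
y_hat = Xv @ beta

# Cross-validated R: correlation of predicted with observed hold-out scores.
print(round(np.corrcoef(y_hat, y[val])[0, 1], 3))
```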

4.
Non-Symmetric Correspondence Analysis (NSCA) (D'Ambra and Lauro, 1989) is a useful technique for analyzing a two-way contingency table.

The key difference between the symmetrical and non-symmetrical versions of correspondence analysis rests on the measure of association used to quantify the relationship between the variables. For a two-way, or multi-way, contingency table, the Pearson chi-squared statistic is commonly used when it can be assumed that the categorical variables are symmetrically related. However, for a two-way table, it may be that one variable can be treated as a predictor variable and the second can be considered a response variable.

Yet, for such a variable structure, the Pearson chi-squared statistic is not an appropriate measure of association. Instead, one may consider the Goodman-Kruskal tau index. When there are more than two cross-classified variables, multivariate versions of the Goodman-Kruskal tau index can be considered. These include Marcotorchino's index (Marcotorchino, 1985) and the Gray-Williams index (Gray and Williams, 1975).
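For concreteness, here is a minimal sketch of the bivariate Goodman-Kruskal tau index for a two-way table of counts, with the rows taken as the predictor and the columns as the response; the multivariate Marcotorchino and Gray-Williams extensions are not implemented, and the table is hypothetical.

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau with rows as predictor and columns as response.

    Proportional reduction in prediction error for the column variable
    when the row category is known.
    """
    p = np.asarray(table, dtype=float)
    p = p / p.sum()
    row_tot = p.sum(axis=1, keepdims=True)
    col_tot = p.sum(axis=0)
    numer = (p ** 2 / row_tot).sum() - (col_tot ** 2).sum()
    denom = 1.0 - (col_tot ** 2).sum()
    return numer / denom

# Hypothetical 3 x 2 predictor-by-response table of counts.
table = [[30, 10],
         [12, 28],
         [20, 20]]
print(round(goodman_kruskal_tau(table), 3))
```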

In this article, Multiple Non-Symmetric Correspondence Analysis (MNSCA), along with the decomposition of the Gray-Williams tau index into main effects and interaction terms (D'Ambra et al., 2011), is used to evaluate the innovative performance of manufacturing enterprises in Campania.

Finally, to identify statistically significant categories, confidence ellipses are proposed for Multiple Non-Symmetric Correspondence Analysis, starting from the elliptical confidence regions suggested by Beh (2010) for the symmetrical analysis.

5.
Recently, Di Crescenzo and Longobardi (2006) studied the “length-biased” shift-dependent information measure and its dynamic versions. On the other hand, Rényi's entropy, a generalization of Shannon's entropy, plays a vital role in the information theory literature. In this article, the concepts of weighted Rényi entropy, weighted residual Rényi entropy, and weighted past Rényi entropy are introduced and their properties are discussed.
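For reference, the standard (unweighted) Rényi entropy of order α for a density f is

\[
H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log \int f^{\alpha}(x)\,dx,
\qquad \alpha > 0,\ \alpha \neq 1,
\]

which tends to Shannon's entropy \(-\int f(x)\log f(x)\,dx\) as \(\alpha \to 1\). The weighted versions studied in this article introduce a weight function into the information measure (w(x) = x in the length-biased setting); their exact definitions are given in the paper and are not reproduced here.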

6.
Johnson (1970) obtained expansions for marginal posterior distributions through Taylor expansions. Here, the posterior expansion is expressed in terms of the likelihood and the prior together with their derivatives. Recently, Weng (2010) used a version of Stein's identity to derive a Bayesian Edgeworth expansion, expressed in terms of posterior moments. Since the pivots used in these two articles are the same, it is of interest to compare the two expansions.

We found that our O(t^{-1/2}) term agrees with Johnson's arithmetically, but the O(t^{-1}) term does not. The simulations confirmed this finding and revealed that our O(t^{-1}) term gives better performance than Johnson's.

7.
Since Rao introduced the quadratic entropy (QE) in 1982, results on the mathematical and statistical properties of the QE and its applications in data analysis and population indices have been published in the literature. In this paper, we study the asymptotic efficiency of the analysis of Rao's quadratic entropy (ANOQE), which is a generalization of the classical analysis of variance (ANOVA). Based on the results of Liu and Rao (1995) and Liu (1991) on the asymptotic distribution and the bootstrap of the ANOQE, we derive the Bahadur asymptotic efficiency of the ANOQE and compare the efficiency of ANOQE tests based on different QEs.
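For context, Rao's quadratic entropy for a probability vector \(p = (p_1,\dots,p_k)\) over categories with pairwise dissimilarities \(d_{ij}\) is

\[
Q(p) \;=\; \sum_{i=1}^{k}\sum_{j=1}^{k} d_{ij}\,p_i\,p_j ,
\]

which reduces to the Gini-Simpson index \(1-\sum_i p_i^2\) when \(d_{ij}=1\) for \(i\neq j\) and \(d_{ii}=0\); the analysis of quadratic entropy extends the ANOVA decomposition to this quantity.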

8.
Walsh (1995) introduced a heuristic approach to motivate Stirling's formula by equating a Poisson probability to an analogous value from a normal density function. We explore similar heuristics to derive approximations for various binomial, negative binomial, and multinomial coefficients. Also, using heuristics markedly different from those of Walsh, we develop an approximation of (nk)! for positive integers n (large) and k. These heuristics are then used to validate Stirling's formula for Γ(nα), where α is a positive real number. To derive each of our approximations we use a different probability distribution, and hence each section may serve as a pedagogical module.
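Walsh's heuristic can be stated in two lines: for X Poisson with mean n, the probability P(X = n) = e^{-n} n^n / n! is equated with the peak value of a normal density having the same mean and variance n, which gives

\[
e^{-n}\,\frac{n^{n}}{n!} \;\approx\; \frac{1}{\sqrt{2\pi n}}
\qquad\Longrightarrow\qquad
n! \;\approx\; \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n},
\]

i.e., Stirling's formula; the approximations in this paper follow the same pattern with other distributions.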

9.
The density level sets of the two types of measures under consideration are l_{2,p}-circles with p = 1 and p = 2, respectively. The intersection-percentage function (ipf) of such a measure reflects the percentage that the level set corresponding to the p-radius r shares, for each r > 0, with a set to be measured. The geometric measure representation formula in Richter (2009) is based upon these ipf's and is used here to evaluate exact cdf's and pdf's for the linear combination, the product, and the ratio of the components of two-dimensional simplicial or spherically distributed random vectors.
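Here an l_{2,p}-circle of p-radius r is the level set

\[
\{(x_1,x_2)\in\mathbb{R}^2 : |x_1|^{p} + |x_2|^{p} = r^{p}\},
\]

so p = 1 gives the diamond-shaped contours associated with the simplicial case and p = 2 the circular contours of the spherical case.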

10.
The authors derive analytic expressions for the mean and variance of the log-likelihood ratio for testing equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that the two approximations and the original beta approximation of Neyman and Pearson (1931) are all accurate, and that the gamma approximation is the most accurate.
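As an illustration of the kind of gamma approximation described here (not the authors' exact construction; the moments below are placeholders rather than the analytic expressions derived in the paper), matching a gamma distribution to a given null mean and variance is elementary:

```python
from scipy import stats

def gamma_by_moments(mean, var):
    """Gamma distribution with the specified mean and variance."""
    shape = mean ** 2 / var
    scale = var / mean
    return stats.gamma(a=shape, scale=scale)

# Hypothetical null mean and variance of the log-likelihood-ratio statistic.
approx = gamma_by_moments(mean=3.2, var=7.1)
print(round(approx.ppf(0.95), 3))   # approximate 95% critical value
```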

11.
When two random variables have a bivariate normal distribution, Stein's lemma (Stein, 1973, 1981) provides, under certain regularity conditions, an expression for the covariance of the first variable with a function of the second. An extension of the lemma due to Liu (1994), as well as to Stein himself, establishes an analogous result for a vector of variables which has a multivariate normal distribution. The extension leads in turn to a generalization of Siegel's (1993) formula for the covariance of an arbitrary element of a multivariate normal vector with its minimum element. This article describes extensions to Stein's lemma for the case when the vector of random variables has a multivariate skew-normal distribution. The corollaries to the main result include an extension to Siegel's formula. This article was motivated originally by the issue of portfolio selection in finance. Under multivariate normality, the implication of Stein's lemma is that all rational investors will select a portfolio which lies on Markowitz's mean-variance efficient frontier. A consequence of the extension to Stein's lemma is that under multivariate skew-normality, rational investors will select a portfolio which lies on a single mean-variance-skewness efficient hyper-surface.
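The bivariate-normal form of Stein's lemma referred to here states that, for (X, Y) bivariate normal and g differentiable with \(E|g'(Y)| < \infty\),

\[
\operatorname{Cov}\bigl(X,\,g(Y)\bigr) \;=\; \operatorname{Cov}(X,Y)\;E\bigl[g'(Y)\bigr];
\]

the multivariate extension, Siegel's formula, and the skew-normal results of this article all build on this identity.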

12.
Using Zieliński's (1977) concept of robustness, Błażej (2007) obtained the uniformly most bias-robust estimates (UMBREs) of the scale parameter for some statistical models, in a class of linear functions of order statistics. Violations of the models are generated by weight functions. In this article, the UMBRE of the scale parameter based on order statistics is derived for a more general weighted model, extending a result of Błażej (2007).

13.
Lindeman et al. (1980) provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement the variance decomposition procedure advanced by Lindeman et al. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow for more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of our proposed technique against the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
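A minimal sketch of the Lindeman et al. averaging-over-orderings decomposition (often called the LMG method): each predictor's share is its average increment to R² over all p! orders of entry. The conditional dominance and sensitivity statistics proposed in the paper build on these increments but are not shown; the data below are hypothetical.

```python
import itertools
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an OLS regression of y on the listed columns of X (with intercept)."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

def lmg_shares(X, y):
    """Average increase in R^2 contributed by each predictor over all orderings."""
    p = X.shape[1]
    shares = np.zeros(p)
    orders = list(itertools.permutations(range(p)))
    for order in orders:
        included, r2_prev = [], 0.0
        for j in order:
            included.append(j)
            r2_new = r_squared(X, y, included)
            shares[j] += r2_new - r2_prev
            r2_prev = r2_new
    return shares / len(orders)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
X[:, 1] += 0.5 * X[:, 0]                     # make two predictors correlated
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=300)
print(lmg_shares(X, y))                      # the shares sum to the full-model R^2
```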

14.
ABSTRACT

The search for optimal non-parametric estimates of the cumulative distribution and hazard functions under order constraints inspired at least two earlier classic papers in mathematical statistics: those of Kiefer and Wolfowitz (1976) and Grenander (1956), respectively. In both cases, either the greatest convex minorant or the least concave majorant played a fundamental role. Based on Kiefer and Wolfowitz's work, Wang (1986, 1987) found asymptotically minimax estimates of the distribution function F and its cumulative hazard function Λ in the class of all increasing failure rate (IFR) and all increasing failure rate average (IFRA) distributions. In this paper, we prove limit theorems which extend Wang's asymptotic results to the mixed censorship/truncation model, as well as provide some other relevant results. The methods are illustrated on the Channing House data, originally analysed by Hyde (1977, 1980).

15.
Keith Knight, Econometric Reviews, 2016, 35(8-10): 1471-1484
In a linear regression model, the Dantzig selector (Candès and Tao, 2007) minimizes the L1 norm of the regression coefficients subject to a bound λ on the L∞ norm of the covariances between the predictors and the residuals; the resulting estimator is the solution of a linear program, which may be nonunique or unstable. We propose a regularized alternative to the Dantzig selector. These estimators (which depend on λ and an additional tuning parameter r) minimize objective functions that are the sum of the L1 norm of the regression coefficients plus r times the logarithmic potential function of the Dantzig selector constraints, and can be viewed as penalized analytic centers of the latter constraints. The tuning parameter r controls the smoothness of the estimators as functions of λ and, when λ is sufficiently large, the estimators depend approximately on r and λ via r/λ².
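In the notation of this abstract, the Dantzig selector solves

\[
\min_{\beta}\ \|\beta\|_1
\quad\text{subject to}\quad
\bigl\|X^{\top}(y - X\beta)\bigr\|_{\infty} \le \lambda,
\]

and the regularized estimators proposed here keep the \(\|\beta\|_1\) term but replace the hard constraint by adding r times a logarithmic potential (barrier) for these same constraints to the objective.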

16.
Coppi et al. (2012) applied Yang and Wu's (2006) idea to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initializations and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only solves the initialization problems but also obtains a good clustering result. Several numerical examples demonstrate the effectiveness and accuracy of the proposed clustering method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.

17.
Sharma (1977) and Aggarwal et al. (2006) considered non-circular constructions of first- and second-order balanced repeated measurements designs. Sharma et al. (2002) constructed circular first- and second-order balanced repeated measurements designs only for a class with parameters (v, p = 3n, n = v 2) and also showed their universal optimality. In this article, we consider circular constructions of first- and second-order balanced repeated measurements designs and strongly balanced repeated measurements designs using the method of cyclic shifts. Some new circular designs with parameters (v, p, n) for the cases p = v, p < v, and p > v are given.

18.
This paper studies the allocation of two non-identical active redundancies in series systems in terms of the reversed hazard rate order and the hazard rate order, generalizing some results established in Valdés and Zequeira (2003, 2006).
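For reference, the two stochastic orders used here can be defined, for lifetimes X and Y with distribution functions F and G and survival functions \(\bar F\) and \(\bar G\), by

\[
X \le_{\mathrm{hr}} Y \iff \bar G(t)/\bar F(t)\ \text{is increasing in } t,
\qquad
X \le_{\mathrm{rh}} Y \iff G(t)/F(t)\ \text{is increasing in } t,
\]

which, when densities exist, amount to pointwise comparisons of the hazard rates and the reversed hazard rates, respectively.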

19.
In this article, we investigate the asymptotic normality of Hill's estimator of the tail index parameter when the observations are weakly dependent in the sense of Doukhan and Louhichi (1999) and are drawn from a strictly linear process. We show that the previous results on the Hill estimator obtained by Rootzen et al. (1990) and Resnick and Starica (1997) for strong mixing can be extended to weak dependence.
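For reference, Hill's estimator based on the k largest order statistics of a sample is sketched below in its basic i.i.d. form (the paper's contribution concerns its asymptotic normality under weak dependence, which the sketch does not address; the Pareto example is illustrative only).

```python
import numpy as np

def hill_estimator(x, k):
    """Hill's estimator of the extreme value index 1/alpha from the k largest values."""
    x = np.sort(np.asarray(x, dtype=float))   # ascending order
    top = x[-k:]                              # k largest observations
    threshold = x[-k - 1]                     # (k+1)-th largest observation
    return np.mean(np.log(top) - np.log(threshold))

# Pareto sample with tail index alpha = 2; the estimate should be near 1/2.
rng = np.random.default_rng(2)
sample = rng.uniform(size=5000) ** (-1.0 / 2.0)
print(round(hill_estimator(sample, k=200), 3))
```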

20.
This article considers Bayesian p-values for testing independence in 2 × 2 contingency tables with cell counts observed from the two-independent-binomial sampling scheme and the multinomial sampling scheme. From the frequentist perspective, Fisher's p-value (p_F) is the most commonly used p-value, but it can be conservative for small to moderate sample sizes. On the other hand, from the Bayesian perspective, Bayarri and Berger (2000) first proposed the partial posterior predictive p-value (p_PPOST), which avoids the double use of the data that occurs in another Bayesian p-value, the posterior predictive p-value (p_POST) proposed by Guttman (1967) and Rubin (1984). The subjective and objective Bayesian p-values in terms of p_POST and p_PPOST are derived under the beta prior and the (noninformative) Jeffreys prior, respectively. Numerical comparisons among p_F, p_POST, and p_PPOST reveal that p_PPOST performs much better than p_F and p_POST for small to moderate sample sizes from the frequentist perspective.
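For orientation, the frequentist baseline p_F here is Fisher's exact p-value for a 2 × 2 table, which takes one line with SciPy (the Bayesian p_POST and p_PPOST require posterior simulation and are not sketched; the table below is hypothetical).

```python
from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table: two independent binomial groups (rows),
# success/failure counts (columns).
table = [[7, 3],
         [2, 8]]
odds_ratio, p_f = fisher_exact(table, alternative="two-sided")
print(round(p_f, 4))
```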

