Similar Documents (20 results)
1.
After recalling the framework of minimum-contrast estimation, its consistency and its asymptotic normality, we highlight the fact that these results do not require any stationarity or ergodicity assumptions. The asymptotic distribution of the underlying contrast difference test is a weighted sum of independent chi-square variables having one degree of freedom each. We illustrate these results in three contexts: (1) a nonhomogeneous Markov chain with likelihood contrast; (2) a Markov field with coding, pseudolikelihood or likelihood contrasts; (3) a not necessarily Gaussian time series with Whittle's contrast. In contexts (2) and (3), we compare experimentally the power of the likelihood-ratio test with those of other contrast-difference tests.

2.
We describe a model to obtain strengths and rankings of players appearing in golf's Ryder Cup. Obtaining rankings is complicated for two reasons. First, competitors do not compete on an equal number of occasions, with some competitors appearing too infrequently for their ranking to be estimated with any degree of certainty; second, different competitors experience different levels of volatility in results. Our approach is to assume the competitor strengths are drawn from some common distribution. For small numbers of competitors, as is the case here, we fit the model using Monte Carlo integration. Results suggest there is very little difference between the top performing players, though Scotland's Colin Montgomerie is estimated as the strongest Ryder Cup player.

3.
A plot of each ranking of N objects in N-dimensional space is shown to provide geometric interpretations of Kendall's tau and Spearman's rho and also of the relationship of rho to a sum of inversion weights. The computation of rho from a sum of inversion weights is shown to allow sequential calculation of rho.
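The two coefficients the geometry describes are easy to state directly in terms of ranks; the following is a minimal illustrative sketch (not the paper's sequential method, and ties are ignored), using the classical rank-difference formula for rho and the concordant/discordant pair count for tau:

```python
def ranks(x):
    # 1-based rank of each element (assumes no ties)
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, 1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    # classical formula: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendall_tau(x, y):
    # (concordant pairs - discordant pairs) / total pairs
    n = len(x)
    s = sum(
        1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
        for i in range(n) for j in range(i + 1, n)
    )
    return s / (n * (n - 1) / 2)
```

For example, a single swap of the first two ranks in a permutation of five items gives rho = 0.9 and tau = 0.8.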

4.
In a special paired sample case, Hotelling’s T2 test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.

5.
Parameter Estimation in Large Dynamic Paired Comparison Experiments
Paired comparison data in which the abilities or merits of the objects being compared may be changing over time can be modelled as a non-linear state space model. When the population of objects being compared is large, likelihood-based analyses can be too computationally cumbersome to carry out regularly. This presents a problem for rating populations of chess players and other large groups which often consist of tens of thousands of competitors. This problem is overcome through a computationally simple non-iterative algorithm for fitting a particular dynamic paired comparison model. The algorithm, which improves on the commonly used Elo algorithm by incorporating the variability in parameter estimates, can be performed regularly even for large populations of competitors. The method is evaluated on simulated data and is applied to ranking the best chess players of all time, and to ranking the top current tennis players.
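For context, the Elo update that the paper's algorithm improves upon is simple to state. The sketch below is the standard logistic Elo rule, which treats ratings as point estimates with no variability term; the K-factor value is illustrative:

```python
def elo_update(r_a, r_b, score_a, k=32):
    # score_a is 1 for an A win, 0.5 for a draw, 0 for a loss;
    # k (the K-factor) controls the step size of the update
    e_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # expected score for A
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))
```

Because the two deltas are equal and opposite, total rating is conserved; the paper's point is that this rule ignores how uncertain each rating estimate is.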

6.
We propose a new approach for outlier detection, based on a ranking measure that focuses on the question of whether a point is ‘central’ for its nearest neighbours. In our notation, a low cumulative rank implies that the point is central. For instance, a point centrally located in a cluster has a relatively low cumulative sum of ranks because it is among the nearest neighbours of its own nearest neighbours, but a point at the periphery of a cluster has a high cumulative sum of ranks because its nearest neighbours are closer to each other than to the point itself. Use of ranks eliminates the problem of density calculation in the neighbourhood of the point, and this improves performance. Our method performs better than several density-based methods on some synthetic data sets as well as on some real data sets.
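The cumulative-rank idea can be sketched in a few lines. This toy version (my notation, not the authors' code) scores each point by the sum of its ranks in the neighbour orderings of its own k nearest neighbours, so central points score low and peripheral points or outliers score high:

```python
def rank_of(p, q, dist):
    # rank of point p among all points other than q, ordered by distance from q
    n = len(dist)
    others = sorted((i for i in range(n) if i != q), key=lambda i: dist[q][i])
    return others.index(p) + 1

def cumulative_rank_score(points, k=2):
    # 1-D example: points is a list of scalars
    n = len(points)
    dist = [[abs(a - b) for b in points] for a in points]
    scores = []
    for p in range(n):
        neighbours = sorted(
            (i for i in range(n) if i != p), key=lambda i: dist[p][i]
        )[:k]
        scores.append(sum(rank_of(p, q, dist) for q in neighbours))
    return scores
```

In the 1-D sample [0.0, 0.1, 0.2, 10.0], the isolated point 10.0 receives the highest cumulative rank, as the abstract's intuition predicts.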

7.
Blest (2000) proposed a new nonparametric measure of correlation between two random variables. His coefficient, which is asymmetric in its arguments, emphasizes discrepancies observed among the first ranks in the orderings induced by the variables. The authors derive the limiting distribution of Blest's index and suggest symmetric variants whose merits as statistics for testing independence are explored using asymptotic relative efficiency calculations and Monte Carlo simulations.

8.
The problem of interest is to estimate the home run ability of 12 great major league players. The usual career home run statistics are the total number of home runs hit and the overall rate at which the players hit them. The observed rate provides a point estimate for a player's “true” rate of hitting a home run. However, this point estimate is incomplete in that it ignores sampling errors, it includes seasons where the player has unusually good or poor performances, and it ignores the general pattern of performance of a player over his career. The observed rate statistic also does not distinguish between the peak and career performance of a given player. Given the random effects model of West (1985), one can detect aberrant seasons and estimate parameters of interest by the inspection of various posterior distributions. Posterior moments of interest are easily computed by the application of the Gibbs sampling algorithm (Gelfand and Smith 1990). A player's career performance is modeled using a log-linear model, and peak and career home run measures for the 12 players are estimated.

9.
In many case-control studies, it is common to utilize paired data when treatments are being evaluated. In this article, we propose and examine an efficient distribution-free test to compare two independent samples, where each is based on paired observations. We extend and modify the density-based empirical likelihood ratio test presented by Gurevich and Vexler [7] to formulate an appropriate parametric likelihood ratio test statistic corresponding to the hypothesis of our interest and then to approximate the test statistic nonparametrically. We conduct an extensive Monte Carlo study to evaluate the proposed test. The results of the performed simulation study demonstrate the robustness of the proposed test with respect to values of test parameters. Furthermore, an extensive power analysis via Monte Carlo simulations confirms that the proposed method outperforms the classical and general procedures in most cases related to a wide class of alternatives. An application to a real paired data study illustrates that the proposed test can be efficiently implemented in practice.

10.

Rank aggregation aims at combining rankings of a set of items assigned by a sample of rankers to generate a consensus ranking. A typical solution is to adopt a distance-based approach and minimize the sum of the distances to the observed rankings. However, this simple sum may not be appropriate when the quality of rankers varies, as rankers with different backgrounds may have different cognitive levels of examining the items. In this paper, we develop a new distance-based model that allows different weights for different rankers. Under this model, the weight associated with a ranker measures his or her cognitive level in ranking the items; these weights are unobserved and exponentially distributed. The maximum likelihood method is used for model estimation. Extensions to the cases of incomplete rankings and mixture modelling are also discussed. Empirical applications demonstrate that the proposed model produces better rank aggregations than those generated by the Borda count and unweighted distance-based models.
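For reference, the Borda count that serves as a baseline here assigns each item points by position in every ranking and orders items by total score; a minimal unweighted sketch (not the authors' weighted model):

```python
from collections import defaultdict

def borda(rankings):
    # rankings: list of orderings, each a list of items from best to worst
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += m - 1 - pos  # one point per item ranked below
    return sorted(scores, key=scores.get, reverse=True)
```

The weighted distance-based model replaces this flat aggregation with one in which each ranker's contribution is scaled by an estimated quality weight.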

11.
The efficacy and the asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformations of the Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method of selecting the weighting constants that maximize the efficacy of these four correlation coefficients is proposed. Estimates, test statistics and confidence intervals for the four weighted correlation coefficients are also developed. To compare the small-sample properties of the four tests, a simulation study is performed. The theoretical and simulated results both favour the weighted sum of Pearson correlation coefficients with optimal weights, as well as the weighted sum of z-transformations of the Fisher–Yates correlation coefficients with optimal weights.

12.
In this paper, we consider the setting where the observed data are incomplete. For the general situation where the number of gaps, as well as the number of unobserved values in some gaps, goes to infinity, the asymptotic behavior of the maximum likelihood estimator is not clear. We derive and investigate the asymptotic properties of the maximum likelihood estimator under censorship, and derive a statistic for testing the null hypothesis that the proposed non-nested models are equally close to the true model against the alternative that one model is closer, in the lifetime-data setting. Furthermore, we derive a normalization of the difference of Akaike criteria for estimating the difference in expected Kullback–Leibler risk between the distributions in the two models.

13.
In this article, the asymmetric n-player gambler's ruin problem is considered, where the players use equal initial fortunes of d dollars and d euros, 1 ≤ d ≤ n + 1. In each round an unfair coin is tossed to decide the currency. The expected ruin time and the individual ruin probabilities are computed. It is proved that the ruin time and which player is ruined are independent. Finally, some special games are simulated; the simulation results verify the validity of the proposed formulas. As an innovation, the present study combines the n-player and multidimensional games, which can be viewed as a starting point for future studies.
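The simulation-versus-formula check the authors perform can be illustrated with the classical two-player gambler's ruin, a much simpler game than the paper's n-player two-currency version; all parameter values below are illustrative:

```python
import random

def ruin_probability_sim(a, b, p, trials=20000, seed=0):
    # estimate P(player A is ruined), starting from fortunes a and b,
    # where A wins each round (and one unit) with probability p
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        fortune = a
        while 0 < fortune < a + b:
            fortune += 1 if rng.random() < p else -1
        ruined += fortune == 0
    return ruined / trials

def ruin_probability_exact(a, b, p):
    # classical gambler's ruin formula for a biased game (p != 1/2)
    r = (1 - p) / p
    return (r ** a - r ** (a + b)) / (1 - r ** (a + b))
```

Agreement between the two estimates (up to Monte Carlo error) is the kind of validity check the abstract describes for its more elaborate games.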

14.
The performance of maximum likelihood estimators (MLEs) of the change-point in normal series is evaluated under three scenarios in which the process parameters are assumed unknown. Different shifts, sample sizes, and change-point locations were tested. A comparison is made with estimators based on cumulative sums and Bartlett's test. Performance analysis based on extensive simulations for normally distributed series showed that the MLEs perform better than (or equal to) the alternatives in almost every scenario, with smaller bias and standard error. In addition, the robustness of the MLEs to non-normality is also studied.
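In the simplest case of a single mean shift with common variance, the MLE of the change-point reduces to choosing the split that minimizes the total within-segment sum of squares; a minimal sketch (illustrative only, not the paper's three scenarios):

```python
from math import inf

def changepoint_mle(xs):
    # MLE of a single mean change-point in a normal series:
    # pick the split minimizing the within-segment sum of squares
    n = len(xs)
    best_t, best_cost = None, inf
    for t in range(1, n):
        left, right = xs[:t], xs[t:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        cost = sum((x - m1) ** 2 for x in left) + sum((x - m2) ** 2 for x in right)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t  # index of the first post-change observation
```

The cusum and Bartlett-based competitors in the paper locate the change differently, which is what the simulation comparison measures.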

15.
Fisher's exact test, difference in proportions, log odds ratio, Pearson's chi-squared, and likelihood ratio are compared as test statistics for testing independence of two dichotomous factors when the associated p values are computed by using the conditional distribution given the marginals. The statistics listed above that can be used for a one-sided alternative give identical p values. For a two-sided alternative, many of the above statistics lead to different p values. The p values are shown to differ only by which tables in the opposite tail from the observed table are considered more extreme than the observed table.
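The conditional computation is easy to make concrete for a single 2x2 table. This sketch uses one common two-sided rule (sum the hypergeometric probabilities of all tables no more probable than the observed one); swapping in a different "more extreme" criterion is exactly what changes the p value in the comparison above:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    # two-sided Fisher exact p-value for the table [[a, b], [c, d]],
    # conditioning on both margins
    r1, r2 = a + b, c + d       # row totals
    c1 = a + c                  # first column total
    n = r1 + r2
    def prob(x):
        # hypergeometric probability of a table with x in the (1,1) cell
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # sum over tables at most as probable as the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

For the table [[3, 1], [1, 3]] this gives 34/70 ≈ 0.486.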

16.
Wald's approximations to the ARL (average run length) of cusum (cumulative sum) procedures are given for an exponential family of densities. From these approximations it is shown that Page's (1954) cusum procedure is (in a sense) identical to a cusum procedure defined in terms of likelihood ratios. Moreover, these approximations are improved by estimating the excess over the boundary, and their closeness is examined by numerical comparison with some exact results. Some examples are also given.
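Page's one-sided cusum recursion itself is compact; the sketch below (the reference value k and threshold h are illustrative choices) returns the first alarm time, whose expectation under control or after a shift is the ARL being approximated:

```python
def page_cusum(xs, k, h):
    # one-sided Page cusum: S_t = max(0, S_{t-1} + x_t - k);
    # signal at the first t with S_t > h, else return None
    s = 0.0
    for t, x in enumerate(xs, 1):
        s = max(0.0, s + x - k)
        if s > h:
            return t
    return None
```

The likelihood-ratio form of the procedure replaces x_t - k with a log likelihood ratio increment; the abstract's point is that the two formulations coincide.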

17.
Ranked set sampling is a sampling approach that leads to improved statistical inference in situations where the units to be sampled can be ranked relative to each other prior to formal measurement. This ranking may be done either by subjective judgment or according to an auxiliary variable, and it need not be completely accurate. In fact, results in the literature have shown that no matter how poor the quality of the ranking, procedures based on ranked set sampling tend to be at least as efficient as procedures based on simple random sampling. However, efforts to quantify the gains in efficiency for ranked set sampling procedures have been hampered by a shortage of available models for imperfect rankings. In this paper, we introduce a new class of models for imperfect rankings, and we provide a rigorous proof that essentially any reasonable model for imperfect rankings is a limit of models in this class. We then describe a specific, easily applied method for selecting an appropriate imperfect rankings model from the class.

18.

The problem of comparing several samples to decide whether the means and/or variances are significantly different is considered. It is shown that with very non-normal distributions even a very robust test to compare the means has poor properties when the distributions have different variances, and therefore a new testing scheme is proposed. This starts by using an exact randomization test for any significant difference (in means or variances) between the samples. If a non-significant result is obtained then testing stops. Otherwise, an approximate randomization test for mean differences (but allowing for variance differences) is carried out, together with a bootstrap procedure to assess whether this test is reliable. A randomization version of Levene's test is also carried out for differences in variation between samples. The five possible conclusions are then that (i) there is no evidence of any differences, (ii) evidence for mean differences only, (iii) evidence for variance differences only, (iv) evidence for mean and variance differences, or (v) evidence for some indeterminate differences. A simulation experiment to assess the properties of the proposed scheme is described. From this it is concluded that the scheme is useful as a robust, conservative method for comparing samples in cases where they may be from very non-normal distributions.
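The randomization tests in the scheme share one mechanism, sketched here generically for two samples (a toy version, not the authors' code): permute the pooled observations and count how often the permuted statistic is at least as large as the observed one.

```python
import random

def randomization_test_means(x, y, n_perm=2000, seed=0):
    # Monte Carlo randomization p-value for |mean(x) - mean(y)|
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction for validity
```

The scheme's exact first-stage test enumerates all reassignments rather than sampling them, and its later stages restrict the statistic to means or to variances.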

19.
Tong (1978) proposed an adaptive approach as an alternative to the classical indifference-zone formulation of ranking and selection problems. With a fixed pre-selected γ* (1/k < γ* < 1), his procedure calls for the termination of vector-at-a-time sampling when the estimated probability of a correct selection exceeds γ* for the first time. The purpose of this note is to show that, for the case of two normal populations with common known variance, the expected number of vector-observations required by Tong's procedure to terminate sampling approaches infinity as the two population means approach equality, for γ* ≥ 0.8413. It is conjectured that this phenomenon also persists if the two largest of k ≥ 3 population means approach equality. Since in the typical ranking and selection setting it is usually assumed that the experimenter has no knowledge concerning the differences between the population means, the experimenter who uses Tong's procedure clearly does so at his own risk.

20.
We consider the piecewise proportional hazards (PWPH) model with interval-censored (IC) relapse times under the distribution-free set-up. The partial likelihood approach is not applicable to IC data, and the generalized likelihood approach has not been studied in the literature. It turns out that under the PWPH model with IC data, the semi-parametric MLE (SMLE) of the covariate effect under the standard generalized likelihood may not be unique and may not be consistent. In fact, the parameter under the PWPH model with IC data is not identifiable unless an identifiability assumption is imposed. We propose a modification to the likelihood function so that its SMLE is unique. Under the identifiability assumption, our simulation study suggests that the SMLE is consistent. We apply the method to our cancer relapse time data and conclude that bone marrow micrometastasis is not a significant prognostic factor.
