Similar articles
20 similar articles found (search time: 46 ms)
1.
In this paper, we study the efficacy of the official ranking for international football teams compiled by FIFA, the body governing football competition around the globe. We present strategies for improving a team's position in the ranking. By combining several statistical techniques, we derive an objective function for a decision problem of optimally scheduling future matches. The results show how a team's position can be improved. Along the way, we compare the official procedure with the famous Elo rating system, which, although it originates in chess, has been successfully tailored to ranking football teams as well.
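A minimal sketch of the Elo update used in such comparisons (the K-factor of 30 is an illustrative choice, not FIFA's tuned value):

```python
def expected_score(r_a, r_b):
    """Expected score of team A against team B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=30):
    """Update team A's rating after one match; score_a is 1 for a win,
    0.5 for a draw, 0 for a loss. The K-factor of 30 is illustrative,
    not necessarily the value used for football."""
    return r_a + k * (score_a - expected_score(r_a, r_b))
```

An upset win against a higher-rated opponent gains more than k/2 points, which is what makes the schedule of future opponents a decision variable.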

2.
In a previous paper, it was demonstrated that distinctly different prediction methods, when applied to 2435 American college and professional football games, resulted in essentially the same fraction of correct selections of the winning team and essentially the same average absolute error for predicting the margin of victory. These results are now extended to 1446 Australian rules football games. Two distinctly different prediction methods are applied. A least-squares method provides a set of ratings; the predicted margin of victory in the next contest is less than the rating difference, corrected for home-ground advantage. A 0.75 power method shrinks the ratings compared with those found by the least-squares technique and then performs predictions based on the rating difference and home-ground advantage. Both methods operate on past margins of victory corrected for home advantage to obtain the ratings. It is shown that both methods perform similarly, based on the fraction of correct selections of the winning team and the average absolute error for predicting the margin of victory. That is, differing predictors using the same information tend to converge to a limiting level of accuracy. The least-squares approach also provides estimates of the accuracy of each prediction. The home advantage is evaluated for all teams collectively and also for individual teams. The data permit comparisons with other sports in other countries. The home team appears to have an advantage (the visiting team has a disadvantage) due to three factors: travel fatigue suffered by the visiting team; crowd intimidation by the home team's fans; and the visitors' lack of familiarity with the playing conditions.
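A minimal sketch of the least-squares rating step described above, fitting team ratings and a common home-ground advantage jointly from past margins (the two-team test data below are invented):

```python
import numpy as np

def ls_ratings(games, n_teams):
    """Least-squares ratings from past margins of victory.

    games: list of (home, away, margin) with margin = home score - away score.
    Fits margin ~ r_home - r_away + h, where h is a common home advantage.
    Ratings are identified by constraining them to sum to zero.
    """
    rows, y = [], []
    for home, away, margin in games:
        x = np.zeros(n_teams + 1)
        x[home], x[away], x[-1] = 1.0, -1.0, 1.0  # last column: home advantage
        rows.append(x)
        y.append(margin)
    # sum-to-zero constraint on the team ratings
    c = np.zeros(n_teams + 1)
    c[:n_teams] = 1.0
    rows.append(c)
    y.append(0.0)
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return beta[:n_teams], beta[-1]   # ratings, home advantage
```

The 0.75 power method would then shrink these ratings before prediction; that step is omitted here.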

3.
This paper presents an analysis of the effect of various baseball play-off configurations on the probability of advancing to the World Series. Play-off games are assumed to be independent. Several paired comparisons models are considered for modeling the probability of a home team winning a single game as a function of the winning percentages of the contestants over the course of the season. The uniform and logistic regression models are both adequate, whereas the Bradley-Terry model (modified for within-pair order effects, i.e. the home field advantage) is not. The single-game probabilities are then used to compute the probability of winning the play-offs under various structures. The extra round of play-offs, instituted in 1994, significantly lowers the probability of the team with the best record advancing to the World Series, whereas home field advantage and the different possible play-off draws have a minimal effect.
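For reference, a sketch of the Bradley-Terry single-game probability with a multiplicative order (home-field) effect, the model form considered in the paper (the value theta = 1.2 is illustrative):

```python
def bt_home_win_prob(p_home, p_away, theta=1.2):
    """P(home team wins) under a Bradley-Terry model with an order
    effect: the home team's strength is inflated by theta > 1.
    p_home and p_away are positive strength parameters; theta = 1.2
    is an illustrative value, not a fitted one."""
    return theta * p_home / (theta * p_home + p_away)
```

Multiplying such single-game probabilities along each path through a play-off bracket gives the advancement probabilities compared in the study.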

4.
This paper examines team performance in the NBA over the last five decades. It was motivated by two previous observational studies, one of which studied the winning percentages of professional baseball teams over time, while the other examined individual player performance in the NBA. These studies considered professional sports as evolving systems, a view proposed by evolutionary biologist Stephen Jay Gould, who wrote extensively on the disappearance of .400 hitters in baseball. Gould argued that the disappearance is actually a sign of improvement in the quality of play, reflected in the reduction of variability in hitting performance. The previous studies reached similar conclusions in terms of winning percentages of baseball teams and performance of individual players in basketball. This paper uses multivariate measures of team performance in the NBA to see if similar characteristics of evolution can be observed. Unlike in the previous studies, the conclusion here does not appear to be clearly affirmative, and possible reasons for this are discussed.

5.
A new class of distributions, including the MacGillivray adaptation of the g-and-h distributions and a new family called the g-and-k distributions, may be used to approximate a wide class of distributions, with the advantage of effectively controlling skewness and kurtosis through independent parameters. This separation can be used to advantage in the assessment of robustness to non-normality in frequentist ranking and selection rules. We consider the rule of selecting the largest of several means with some specified confidence. In general, we find that the frequentist selection rule is only robust to small changes in the distributional shape parameters g and k and depends on the amount of flexibility we allow in the specified confidence. This flexibility is exemplified through a quality control example in which a subset of batches of electrical transformers are selected as the most efficient with a specified confidence, based on the sample mean performance level for each batch.
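A sketch of the g-and-k quantile function, which is how this family is usually defined (c = 0.8 is the conventional choice; all default parameter values here are illustrative):

```python
import math
from statistics import NormalDist

def gk_quantile(u, a=0.0, b=1.0, g=0.0, k=0.0, c=0.8):
    """Quantile function of the g-and-k distribution:
    Q(u) = a + b * (1 + c*tanh(g*z/2)) * (1 + z**2)**k * z,
    with z the standard normal quantile of u. Here g controls skewness
    and k controls kurtosis through separate parameters, and
    g = k = 0 recovers the normal distribution."""
    z = NormalDist().inv_cdf(u)
    return a + b * (1.0 + c * math.tanh(g * z / 2.0)) * (1.0 + z * z) ** k * z
```

Perturbing g and k away from zero while re-running a selection rule is the kind of robustness assessment the abstract describes.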

6.
Applications of maximum likelihood techniques to rank competitors in sports are commonly based on the assumption that each competitor's performance is a function of a deterministic component that represents inherent ability and a stochastic component that the competitor has limited control over. Perhaps based on an appeal to the central limit theorem, the stochastic component of performance has often been assumed to be a normal random variable. However, in the context of a racing sport, this assumption is problematic because the resulting model is the computationally difficult rank-ordered probit. Although a rank-ordered logit is a viable alternative, a Thurstonian paired-comparison model could also be applied. The purpose of this analysis was to compare the performance of the rank-ordered logit and Thurstonian paired-comparison models given the objective of ranking competitors based on ability. Monte Carlo simulations were used to generate race results based on a known ranking of competitors, assign rankings from the results of the two models, and judge performance based on Spearman's rank correlation coefficient. Results suggest that in many applications, a Thurstonian model can outperform a rank-ordered logit if each competitor's performance is normally distributed.
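The simulation design above can be sketched as follows: generate race results from normally distributed performances, then score a recovered ranking against the truth with Spearman's coefficient (the no-ties formula; the model-fitting step itself is omitted):

```python
import random

def simulate_race(abilities, sigma=1.0, rng=random):
    """Finishing order for one race: each competitor's latent time is
    minus their ability plus Gaussian noise, and lower times finish
    first. Normally distributed performance is the assumption the
    Thurstonian model matches."""
    times = [(-a + rng.gauss(0.0, sigma), i) for i, a in enumerate(abilities)]
    return [i for _, i in sorted(times)]

def spearman(rank_a, rank_b):
    """Spearman's rank correlation between two rankings, each given as
    a list mapping position -> competitor (no ties)."""
    n = len(rank_a)
    pos_b = {item: p for p, item in enumerate(rank_b)}
    d2 = sum((p - pos_b[item]) ** 2 for p, item in enumerate(rank_a))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Averaging finishing positions over many simulated races and ranking by the average gives an estimated ranking whose agreement with the known truth can be scored with `spearman`, mirroring the Monte Carlo design.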

7.
The application of data mining techniques and statistical analysis to the sports field has received increasing attention in the last decade. Soccer is one of the most popular sports in the world, and the present work deals with it, using data from the 2009/2010 to 2015/2016 seasons of nine European leagues, extracted from the Kaggle European Soccer database. Overall performance indicators of the four roles in a soccer team (forward, midfielder, defender and goalkeeper) for home and away teams are used to investigate the relationships between them and the results of matches, and to predict wins by the home team. The model used to address both of these questions is the Bayesian network. This study shows that this model can be very useful for mining the relations between players' performance indicators and for improving knowledge of the game strategies applied by coaches in different leagues. Moreover, it is shown that the ability of the proposed Bayesian network to predict match results is roughly the same as that of the naive Bayes model.

8.
This paper uses a new bivariate negative binomial distribution to model scores in the 1996 Australian Rugby League competition. First, scores are modelled using the home ground advantage but ignoring the actual teams playing. Then a bivariate negative binomial regression model is introduced that takes into account the offensive and defensive capacities of each team. Finally, the 1996 season is simulated using the latter model to determine whether or not Manly did indeed deserve to win the competition.
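One standard way to build correlated, overdispersed scores of the kind described above is a shared gamma frailty over Poisson rates; the sketch below uses that construction with invented parameter names and values, not the paper's fitted model:

```python
import math
import random

def poisson(mu, rng=random):
    """Knuth's inversion sampler for Poisson(mu); adequate for the
    small means typical of match scores."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sim_match(att_home, def_away, att_away, def_home,
              home_adv=1.2, shape=4.0, rng=random):
    """One simulated match score. A shared mean-1 gamma frailty
    multiplies both teams' scoring rates, inducing the positive
    correlation and overdispersion that a bivariate negative binomial
    captures. All parameter values are illustrative."""
    frailty = rng.gammavariate(shape, 1.0 / shape)
    home_goals = poisson(home_adv * att_home * def_away * frailty, rng)
    away_goals = poisson(att_away * def_home * frailty, rng)
    return home_goals, away_goals
```

Repeating `sim_match` over a full fixture list many times gives the kind of season simulation used to assess whether the eventual winner deserved the title.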

9.
Summary.  Multiple linear regression techniques are applied to determine the relative batting and bowling strengths and a common home advantage for teams playing both innings of international one-day cricket and the first innings of a test-match. It is established that in both forms of the game Australia and South Africa were rated substantially above the other teams. It is also shown that home teams generally enjoyed a significant advantage. Using the relative batting and bowling strengths of teams, together with parameters that are associated with common home advantage, winning the toss and the establishment of a first-innings lead, multinomial logistic regression techniques are applied to explore further how these factors critically affect outcomes of test-matches. It is established that in test cricket a team's first-innings batting and bowling strength, first-innings lead, batting order and home advantage are strong predictors of a winning match outcome. Contrary to popular opinion, it is found that the team batting second in a test enjoys a significant advantage. Notably, the relative superiority of teams during the fourth innings of a test-match, but not the third innings, is a strong predictor of a winning outcome. There is no evidence to suggest that teams generally gained a winning advantage as a result of winning the toss.

10.
Robust parameter design, known as Taguchi's design of experiments, is a statistical optimization procedure designed to improve the functionality or quality characteristics of products or processes. In this article, we introduce a new performance measure based on asymmetric power loss functions for positive variables and discuss its applications to robust parameter design.

11.
The problem of interest is to estimate the home run ability of 12 great major league players. The usual career home run statistics are the total number of home runs hit and the overall rate at which the players hit them. The observed rate provides a point estimate for a player's “true” rate of hitting a home run. However, this point estimate is incomplete in that it ignores sampling errors, it includes seasons where the player has unusually good or poor performances, and it ignores the general pattern of performance of a player over his career. The observed rate statistic also does not distinguish between the peak and career performance of a given player. Given the random effects model of West (1985), one can detect aberrant seasons and estimate parameters of interest by the inspection of various posterior distributions. Posterior moments of interest are easily computed by the application of the Gibbs sampling algorithm (Gelfand and Smith 1990). A player's career performance is modeled using a log-linear model, and peak and career home run measures for the 12 players are estimated.

12.
We suggest a procedure to improve the overall performance of several existing methods for determining the number of factors in factor analysis by using alternative measures of correlation: Pearson's, Spearman's, Gini's, and a robust estimator of the covariance matrix (MCD). We examine the effect of the choice of covariance measure on the number of factors chosen by the KG rule of one, the 80% rule, the minimum average partial (MAP) rule, and the parallel analysis methodology (PAM). Extensive simulations show that when all (or part) of the data come from heavy-tailed (lognormal) distributions, ranking the variables that come from non-symmetric distributions improves the performance of the methods. In this case, Gini is slightly better than Spearman. The PAM and MAP procedures are qualitatively superior to the KG and 80% rules in determining the true number of factors. A real example involving data on document authorship is analyzed.

13.
Summary.  The paper presents a statistical analysis of patterns in the incidence of disciplinary sanctions (yellow and red cards) taken against players in the English Premier League over the period 1996–2003. Several questions concerning sources of inconsistency and bias in refereeing standards are examined. Evidence is found to support a time consistency hypothesis, that the average incidence of disciplinary sanction is predominantly stable over time. However, a refereeing consistency hypothesis, that the incidence of disciplinary sanction does not vary between referees, is rejected. The tendency for away teams to incur more disciplinary points than home teams cannot be attributed to the home advantage effect on match results and appears to be due to a refereeing bias favouring the home team.

14.
In this article, we propose a general framework for the performance evaluation of organizations and individuals over time using routinely collected performance variables or indicators. Such variables or indicators are often correlated over time, frequently have missing observations, and often come from heavy-tailed distributions shaped by outliers. Two new doubly robust, model-free strategies are used for the evaluation (ranking) of sampling units. Strategy 1 handles missing data using residual maximum likelihood (RML) at stage two, while Strategy 2 handles missing data at stage one. Strategy 2 has the advantage of overcoming the problem of multicollinearity; Strategy 1 requires independent indicators for the construction of the distances, whereas Strategy 2 does not. Two examples from different domains illustrate the application of the two strategies: the first considers performance monitoring of gynecologists, and the second the performance of industrial firms.

15.
Measurement error and autocorrelation often exist in quality control applications, and both have an adverse effect on the chart's performance. To counteract the undesired effect of autocorrelation, we build up the samples with non-neighbouring items, according to the time they were produced. To counteract the undesired effect of measurement error, we measure the quality characteristic of each item in the sample several times. The chart's performance is assessed when multiple measurements are applied and the samples are built by taking one item from the production line and skipping one, two or more before selecting the next.
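The rationale for the multiple-measurement remedy can be stated in one line: averaging m repeated measurements on the same item leaves the process variance untouched but divides the measurement-error variance by m. A sketch, with symbol names of our own choosing:

```python
def observed_variance(process_var, meas_var, m):
    """Variance of the mean of m repeated measurements on one item:
    process variance passes through unchanged, while the
    measurement-error component shrinks by the factor m. This is the
    standard rationale for taking multiple measurements; the variable
    names are illustrative."""
    return process_var + meas_var / m
```

Increasing m therefore recovers some of the chart sensitivity lost to measurement error, which is the trade-off the abstract studies.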

16.
During the summer of 2013, a joint team involving Duke University Libraries and IBM spent three months configuring IBM Business Process Manager to improve electronic resources workflows in the Libraries’ Technical Services department. The resulting workflow showcases the application's ability to transform the management of online databases. This article will provide an overview of the “before” and “after” database workflow, with a demo of the new system and its integration points with other tools.

17.
Acceptance sampling plans based on process yield indices provide a proven resource for the lot-sentencing problem when the required fraction defective is very low. In this study, a new sampling plan based on the exponentially weighted moving average (EWMA) model with a yield index is proposed for sentencing lots exhibiting autocorrelation between polynomial profiles. The advantage of the EWMA statistic is that it accumulates quality history from previous lots. In addition, the number of profiles required for lot sentencing is more economical than in the traditional single sampling plan. Considering the acceptable quality level (AQL) at the producer's risk and the lot tolerance percent defective (LTPD) at the consumer's risk, we propose a new search algorithm to determine the optimal plan parameters. The plan parameters are tabulated for various combinations of the smoothing constant of the EWMA statistic, the AQL, the LTPD, and the two risks. A comparison study and two numerical examples are provided to show the applicability of the proposed sampling plan.
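The EWMA accumulation of quality history mentioned above can be sketched as the standard recursion over per-lot yield-index estimates (lam = 0.2 and the idea of comparing the statistic with a critical value are illustrative; the paper tabulates its own optimal parameters):

```python
def ewma_series(values, lam=0.2, start=None):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}, applied to
    a sequence of per-lot yield-index estimates. Each lot would be
    sentenced by comparing z_t with a critical value; lam = 0.2 is a
    conventional smoothing constant, not the paper's optimum."""
    z = values[0] if start is None else start
    out = []
    for x in values:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out
```

Because z_t blends the current lot with its predecessors, a marginal lot following a string of good lots can still be accepted, which is why fewer profiles per lot are needed than in a single sampling plan.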

18.
Acceptance sampling, widely used in various production industries, is a vital tool of quality control. In this paper, a new attribute acceptance-sampling plan is developed based on the exponentially weighted moving average statistic under a time-truncated life test when the product lifetime follows the Weibull distribution or the Burr type X distribution. Performance measures such as the probability of acceptance and the average sample number are derived. Tables are constructed for the selection of optimal parameters of the proposed sampling plan so as to minimize the average sample number while satisfying the producer's and the consumer's risks. An illustrative example is also given for the application of the proposed plan. It is also shown that the proposed plan requires a smaller sample size than the single sampling plan.
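For the Weibull case, the probability of acceptance in a time-truncated attribute plan reduces to a binomial tail; a sketch under that standard setup (the plan parameters n, c, t0 below are illustrative, not the paper's tabulated optima, and the EWMA layer is omitted):

```python
import math

def weibull_cdf(t, shape, scale):
    """P(lifetime <= t) for a Weibull(shape, scale) lifetime."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def accept_prob(n, c, t0, shape, scale):
    """Lot acceptance probability for a time-truncated attribute plan:
    put n items on test for time t0 and accept if at most c fail.
    Each item fails by t0 with probability equal to the Weibull CDF,
    so the failure count is binomial. Parameters are illustrative."""
    p = weibull_cdf(t0, shape, scale)
    return sum(math.comb(n, d) * p ** d * (1 - p) ** (n - d)
               for d in range(c + 1))
```

Evaluating `accept_prob` at the AQL and LTPD quality levels is how a plan is checked against the producer's and consumer's risks.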

19.
The paper examines to what extent a player's market value depends on his skills. To this end, a data set covering 28 performance measures and the market values of about 493 players from the 1. and 2. German Bundesliga is analysed. Applying robust analysis techniques, we are able to robustly estimate the market values of soccer players. The results show (1) that there are significantly underrated and overrated players and (2) that a player's affiliation with a certain team may contribute to his market value. We conclude that a club's reputation affects the market values of its players and that star players tend to be overrated.

20.
A right-censored ranking is what results when a judge ranks only the “top K” of M objects. Complete uncensored rankings constitute a special case. We present two measures of concordance among the rankings of N ≥ 2 such judges, both based on Spearman's footrule. One measure is unweighted, while the other gives greatest weight to the first rank, less to the second, and so on. We consider methods for calculating or estimating the P-values of the corresponding tests of the hypothesis of random ranking.
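A sketch of the two footrule-based ingredients for the complete-ranking special case (the harmonic weights in the weighted variant are an illustrative choice, not necessarily the paper's scheme, and the censoring extension is omitted):

```python
def footrule(rank_a, rank_b):
    """Spearman's footrule distance between two complete rankings,
    each given as a list mapping position -> object (no ties)."""
    pos_b = {obj: p for p, obj in enumerate(rank_b)}
    return sum(abs(p - pos_b[obj]) for p, obj in enumerate(rank_a))

def weighted_footrule(rank_a, rank_b, weights=None):
    """Variant giving greatest weight to the first rank, less to the
    second, and so on. The harmonic weights 1/(p+1) used by default
    are illustrative, not the paper's weighting scheme."""
    pos_b = {obj: p for p, obj in enumerate(rank_b)}
    n = len(rank_a)
    w = weights or [1.0 / (p + 1) for p in range(n)]
    return sum(w[p] * abs(p - pos_b[obj]) for p, obj in enumerate(rank_a))
```

Summing such distances over all pairs of the N judges' rankings, and comparing with the distribution under random ranking, yields the concordance tests described.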


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号