Similar Documents
20 similar documents found (search time: 458 ms).
1.
The elimination or knockout format is one of the most common designs for pairing competitors in tournaments and leagues. In each round of a knockout tournament, the losers are eliminated while the winners advance to the next round. Typically, the goal of such a design is to identify the overall best player. Using a common probability model for expressing relative player strengths, we develop an adaptive approach to pairing players each round in which the probability that the best player advances to the next round is maximized. We evaluate our method using simulated game outcomes under several data-generating mechanisms, and compare it to random pairings, to the standard knockout format, and to two variants of the standard format.
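Below is a minimal simulation sketch of the setting described above, not the authors' adaptive pairing algorithm: relative strengths follow a Bradley-Terry-type model, and we estimate how often the strongest of eight hypothetical players wins a single-elimination bracket under random pairings. All strength values and the simulation size are made up for illustration.

```python
import random

def bt_win_prob(s_i, s_j):
    """Bradley-Terry probability that player i beats player j."""
    return s_i / (s_i + s_j)

def knockout(strengths, rng):
    players = list(range(len(strengths)))
    rng.shuffle(players)                       # random initial pairing
    while len(players) > 1:
        winners = []
        for a, b in zip(players[::2], players[1::2]):
            p = bt_win_prob(strengths[a], strengths[b])
            winners.append(a if rng.random() < p else b)
        players = winners
    return players[0]

rng = random.Random(1)
strengths = [8, 4, 2, 2, 1, 1, 1, 1]           # player 0 is the strongest
n_sim = 20000
wins = sum(knockout(strengths, rng) == 0 for _ in range(n_sim))
print("P(best player wins):", wins / n_sim)
```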

2.
In a seeded knockout tournament, where teams have some preassigned strength, do we have any assurances that the best team in fact has won? Is there some insight to be gained by considering which teams beat which other teams, examining only the seeds? We pose an answer to these questions by using the difference in the seeds of the two players as the basis for a test statistic. We offer several models for the underlying probability structure to examine the null distribution and power functions, and determine these for small tournaments (fewer than five teams). One structure each for 8 teams and 16 teams is examined, and we conjecture an asymptotic normal distribution for the test statistic.
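As a purely illustrative example of a seed-based statistic (the exact statistic in the paper may differ), one natural choice is the sum of the differences between the winner's and loser's seeds over all games; negative values indicate that low-seeded (strong) teams mostly won.

```python
def seed_statistic(results):
    """results: list of (winner_seed, loser_seed) pairs, one per game."""
    return sum(w - l for w, l in results)

# A 4-team bracket: seed 1 beats 4, seed 3 upsets 2, then 1 beats 3 in the final.
games = [(1, 4), (3, 2), (1, 3)]
print(seed_statistic(games))   # -4: mostly favourites won, with one upset
```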

3.
A system for calculating relative playing strengths of tiddlywinks players is described. The method can also be used for other sports. It is specifically designed to handle cases where the number of games played in a season varies greatly between players, and thus the confidence that one can have in an assigned rating also varies greatly between players. In addition, the method is designed to handle situations in which some games in the tournament are played as individuals ('singles'), while others are played with a partner ('pairs'). These factors make some statistical treatments, such as the Elo rating system used in chess, difficult to apply. The new method characterizes each player's ability by a numerical rating together with an associated uncertainty in that player's rating. After each tournament, a 'tournament rating' is calculated for each player based on how many points the player achieved and the relative strength of partner(s) and opponent(s). Statistical analysis is then used to estimate the likely error in the calculated tournament rating. Both the tournament rating and its estimated error are used in the calculation of new ratings. The method has been applied to calculate tiddlywinks world ratings based on over 13,000 national tournament games in Britain and the USA going back to 1985.
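A minimal sketch of the precision-weighted idea described above, assuming a simple inverse-variance combination of the old rating and the tournament rating; the formulas are illustrative and are not the actual tiddlywinks rating rules.

```python
def update_rating(old, old_sd, tourn, tourn_sd):
    """Precision-weighted combination of the old rating and a tournament rating."""
    w_old, w_t = 1.0 / old_sd**2, 1.0 / tourn_sd**2
    new = (w_old * old + w_t * tourn) / (w_old + w_t)
    new_sd = (w_old + w_t) ** -0.5
    return new, new_sd

print(update_rating(1500.0, 200.0, 1650.0, 100.0))   # -> (1620.0, ~89.4)
```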

4.
We deal with a random graph model where at each step, a vertex is chosen uniformly at random, and it is either duplicated or its edges are deleted. Duplication has a given probability. We analyze the limit distribution of the degree of a fixed vertex and derive a.s. asymptotic bounds for the maximal degree. The model shows a phase transition phenomenon with respect to the probabilities of duplication and deletion.
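An illustrative simulation of such a process under one concrete reading of the model (whether the duplicate is also joined to its original vertex varies between duplication models; here it is not). The step count and duplication probability are arbitrary.

```python
import random

def duplication_deletion(steps, p_dup, rng):
    adj = [set()]                              # start from a single isolated vertex
    for _ in range(steps):
        v = rng.randrange(len(adj))            # vertex chosen uniformly at random
        if rng.random() < p_dup:               # duplicate v: the copy inherits v's edges
            new = len(adj)
            adj.append(set(adj[v]))
            for u in adj[v]:
                adj[u].add(new)
        else:                                  # otherwise delete all edges of v
            for u in adj[v]:
                adj[u].discard(v)
            adj[v].clear()
    return adj

rng = random.Random(0)
g = duplication_deletion(2000, 0.6, rng)
print("vertices:", len(g), " max degree:", max(len(nbrs) for nbrs in g))
```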

5.
An important aspect of paired comparison experiments is the decision of how to form pairs in advance of collecting data. A weakness of typical paired comparison experimental designs is the difficulty in incorporating prior information, which can be particularly relevant for the design of tournament schedules for players of games and sports. Pairing methods that make use of prior information are often ad hoc algorithms with little or no formal basis. The problem of pairing objects can be formalized as a Bayesian optimal design. Assuming a linear paired comparison model for outcomes, we develop a pairing method that maximizes the expected gain in Kullback–Leibler information from the prior to the posterior distribution. The optimal pairing is determined using a combinatorial optimization method commonly used in graph-theoretic contexts. We discuss the properties of our optimal pairing criterion, and demonstrate our method as an adaptive procedure for pairing objects multiple times. We compare the performance of our method on simulated data against random pairings, and against a system that is currently in use in tournament chess.
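A sketch of the combinatorial step only, assuming that the graph-theoretic method in question is maximum-weight perfect matching: given a hypothetical table of expected Kullback-Leibler gains for each candidate pair, the pairing is the matching with the largest total gain. The gain values are placeholders rather than quantities computed from a prior.

```python
import networkx as nx

# Hypothetical expected KL gains for each candidate pair of four objects.
gain = {(0, 1): 0.8, (0, 2): 1.3, (0, 3): 0.5,
        (1, 2): 0.4, (1, 3): 1.1, (2, 3): 0.9}

G = nx.Graph()
for (i, j), w in gain.items():
    G.add_edge(i, j, weight=w)

pairing = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in pairing))   # -> [(0, 2), (1, 3)]
```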

6.
For the problem of percentile estimation of a quantal response curve, the authors determine multiobjective designs which are robust with respect to misspecifications of the model assumptions. They propose a maximin approach based on efficiencies which leads to designs that are simultaneously efficient with respect to various choices of link functions and parameter regions. Furthermore, the authors deal with the problems of designing model-robust and percentile-robust experiments. They give various examples of such designs, which are calculated numerically.

7.
This paper studies the optimal experimental design problem to discriminate two regression models. Recently, López-Fidalgo et al. [2007. An optimal experimental design criterion for discriminating between non-normal models. J. Roy. Statist. Soc. B 69, 231–242] extended the conventional T-optimality criterion by Atkinson and Fedorov [1975a. The designs of experiments for discriminating between two rival models. Biometrika 62, 57–70; 1975b. Optimal design: experiments for discriminating between several models. Biometrika 62, 289–303] to deal with non-normal parametric regression models, and proposed a new optimal experimental design criterion based on the Kullback–Leibler information divergence. In this paper, we extend their parametric optimality criterion to a semiparametric setup, where we only need to specify some moment conditions for the null or alternative regression model. Our criteria, called the semiparametric Kullback–Leibler optimality criteria, can be implemented by applying a convex duality result of partially finite convex programming. The proposed method is illustrated by a simple numerical example.
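A toy illustration that is far simpler than the criteria above: for normal errors with known variance, the Kullback-Leibler divergence between two regression models at a design point is proportional to the squared difference of their mean functions, so a candidate design can be scored by the summed squared difference. The real T-/KL-criteria additionally minimise over the rival model's parameters; the models and designs below are made up.

```python
import numpy as np

def model1(x):                 # null model: straight line
    return 1.0 + 2.0 * x

def model2(x):                 # rival model: quadratic with fixed parameters
    return 1.0 + 2.0 * x + 1.5 * x**2

def discrepancy(design):       # summed squared mean difference over the design points
    x = np.asarray(design, dtype=float)
    return float(np.sum((model1(x) - model2(x)) ** 2))

for d in [(-1.0, 0.0, 1.0), (-1.0, -0.5, 1.0), (0.0, 0.5, 1.0)]:
    print(d, round(discrepancy(d), 3))
```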

8.
We propose a new bivariate negative binomial model with constant correlation structure, derived from a contagious bivariate distribution of two independent Poisson mass functions by mixing them with the proposed bivariate gamma-type density with constantly correlated covariance structure (Iwasaki & Tsubaki, 2005), which satisfies the integrability condition of McCullagh & Nelder (1989, p. 334). The proposed bivariate gamma-type density comes from a natural exponential family. Joe (1997) points out the necessity of a multivariate gamma distribution for deriving a multivariate distribution with negative binomial margins, and the lack of a convenient form of multivariate gamma distribution for obtaining a model with greater flexibility in a dependent structure with indices of dispersion. In this paper we first derive a new bivariate negative binomial distribution as well as its first two cumulants, and, secondly, formulate bivariate generalized linear models with a constantly correlated negative binomial covariance structure, in addition to the moment estimator of the components of the matrix. We finally fit the bivariate negative binomial models to two correlated environmental data sets.
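A simulation sketch of the general mechanism (a simpler shared-gamma construction, not the authors' constant-correlation model): mixing two conditionally independent Poisson counts over a common gamma variable yields positively correlated, overdispersed counts with negative binomial margins. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # shared gamma mixing variable
y1 = rng.poisson(3.0 * lam)                           # negative binomial margin
y2 = rng.poisson(5.0 * lam)                           # negative binomial margin
print("correlation:", np.corrcoef(y1, y2)[0, 1])
print("variance/mean ratios:", y1.var() / y1.mean(), y2.var() / y2.mean())
```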

9.
In a two‐level factorial experiment, the authors consider designs with partial duplication which permit estimation of the constant term, all main effects and some specified two‐factor interactions, assuming that the other effects are negligible. They construct parallel‐flats designs with two identical parallel flats that meet prior specifications; they also consider classes of 3‐flat and 4‐flat designs. They show that the designs obtained can have a very simple covariance structure and high D‐efficiency. They give an algorithm from which they generate a series of practical designs with run sizes 12, 16, 24, and 32.

10.
In this article, we deal with the problem of testing a point null hypothesis for the mean of a multivariate power exponential distribution. We study the conditions under which Bayesian and frequentist approaches can match. In this comparison it is observed that the tails of the model are the key to explaining the reconcilability or irreconcilability between the two approaches.

11.
To deal with high placebo response in clinical trials for psychiatric and other diseases, different enrichment designs, such as the sequential parallel design, two‐way enriched design, and sequential enriched design, have been proposed and implemented recently. Depending on the historical trial information and the trial sponsors' resources, detailed design elements are needed for determining which design to adopt. To assist in making more suitable decisions, we perform evaluations for selecting required design elements in terms of power optimization and sample size planning. We also discuss the implementation of the interim analysis in relation to its applicability.

12.
Classical regression analysis is usually performed in two steps. In the first step, an appropriate model is identified to describe the data-generating process, and in the second step, statistical inference is performed in the identified model. An intuitively appealing approach to the design of experiments for these different purposes is the use of sequential strategies, which use parts of the sample for model identification and adapt the design according to the outcome of the identification steps. In this article, we investigate the finite-sample properties of two sequential design strategies, which were recently proposed in the literature. A detailed comparison of sequential designs for model discrimination in several regression models is given by means of a simulation study. Some non-sequential designs are also included in the study.

13.
A model for media exposure probabilities is developed which has the joint probability of exposure proportional to the product of the marginal probabilities. The model is a generalization of Goodhardt & Ehrenberg's ‘duplication of viewing law’, with the duplication constant computed from a truncated canonical expansion of the joint exposure probability. The proposed model is compared on the basis of estimation accuracy and computation speed with an accurate and quick ‘approximate’ log-linear model (as noted previously) and the popular Metheringham beta-binomial model. Our model is shown to be more accurate than the approximate log-linear model and four times faster. In addition, it is much more accurate than Metheringham's model.
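A worked example of the 'duplication of viewing law' structure that the model generalizes: the probability of exposure to both vehicles is taken proportional to the product of the marginals, P(i and j) = D * P(i) * P(j), with a common duplication constant D. The numbers are illustrative.

```python
p = {"A": 0.30, "B": 0.20, "C": 0.10}   # marginal exposure probabilities (made up)
D = 1.4                                  # duplication constant (made up)

for i, j in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(f"P({i} and {j}) = {D * p[i] * p[j]:.3f}")
```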

14.
Social network data represent the interactions between a group of social actors. Interactions between colleagues and friendship networks are typical examples of such data. The latent space model for social network data locates each actor in a network in a latent (social) space and models the probability of an interaction between two actors as a function of their locations. The latent position cluster model extends the latent space model to deal with network data in which clusters of actors exist — actor locations are drawn from a finite mixture model, each component of which represents a cluster of actors. A mixture of experts model builds on the structure of a mixture model by taking account of both observations and associated covariates when modeling a heterogeneous population. Herein, a mixture of experts extension of the latent position cluster model is developed. The mixture of experts framework allows covariates to enter the latent position cluster model in a number of ways, yielding different model interpretations. Estimates of the model parameters are derived in a Bayesian framework using a Markov Chain Monte Carlo algorithm. The algorithm is generally computationally expensive — surrogate proposal distributions which shadow the target distributions are derived, reducing the computational burden. The methodology is demonstrated through an illustrative example detailing relationships between a group of lawyers in the USA.
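A minimal sketch of the latent space idea: the log-odds of a tie between two actors decreases with the distance between their latent positions, logit P(y_ij = 1) = alpha - ||z_i - z_j||. The positions and intercept below are illustrative; the latent position cluster model additionally draws each position from a finite mixture.

```python
import numpy as np

def tie_probability(z_i, z_j, alpha=1.0):
    """logit P(tie) = alpha - Euclidean distance between latent positions."""
    d = np.linalg.norm(np.asarray(z_i) - np.asarray(z_j))
    return 1.0 / (1.0 + np.exp(-(alpha - d)))

print(tie_probability([0.0, 0.0], [0.5, 0.5]))   # nearby actors: high probability
print(tie_probability([0.0, 0.0], [3.0, 0.0]))   # distant actors: low probability
```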

15.
After Smith (1918) initiated the theory of optimal design, many optimality criteria were introduced. Atkinson et al. (2007) used the definition of compound design criteria to combine two optimality criteria and introduced the DT- and CD-optimality criteria. This paper introduces the CDT-optimum design, which provides a specified balance between model discrimination, parameter estimation, and estimation of a parametric function such as the area under the curve in models for drug absorbance. An equivalence theorem is presented for the case of two models.
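A small sketch of the compound-criterion idea that such designs build on, namely a weighted geometric mean of individual design efficiencies, here for three hypothetical components (discrimination, parameter estimation, and estimation of the area under the curve). The weights and efficiency values are illustrative.

```python
def compound_efficiency(efficiencies, weights):
    """Weighted geometric mean of individual design efficiencies."""
    assert abs(sum(weights) - 1.0) < 1e-9
    value = 1.0
    for e, w in zip(efficiencies, weights):
        value *= e ** w
    return value

# Hypothetical efficiencies for discrimination, estimation, and AUC estimation.
print(compound_efficiency([0.90, 0.75, 0.60], [0.4, 0.3, 0.3]))
```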

16.
An easily programmed recursive formula for the evaluation of the distribution function of ratios of linear combinations of independent exponential random variables is developed. This formula is shown to yield the probability that one team beats another in a contest we call the special gladiator game. This game generates tournaments which exhibit nontransitive dominance and have some surprising consequences. Similar results are obtained for a recursive formula based on the geometric distribution.
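A Monte Carlo stand-in for the recursive formula (the exact recursion is not reproduced here): estimate the probability that one linear combination of independent standard exponentials is smaller than another, which is the distribution function of their ratio evaluated at 1. The coefficient vectors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_a_beats_b(a, b, n=200_000):
    """Estimate P(sum_i a_i*E_i < sum_j b_j*F_j) for independent standard exponentials."""
    x = rng.standard_exponential((n, len(a))) @ np.asarray(a, dtype=float)
    y = rng.standard_exponential((n, len(b))) @ np.asarray(b, dtype=float)
    return float(np.mean(x < y))

print(prob_a_beats_b([1.0, 2.0], [1.5, 1.5]))
```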

17.
Stochastic Models, 2013, 29(2-3): 449-464
We compare four strategies for ensuring a reliable just-in-time supply from a seat production line, which is prone to machine failure, to a car assembly line, which is assumed to operate at a constant speed over single shifts. The strategies are as follows: holding buffer stock; duplication of the least reliable machine; duplication of the production line as a stand-by; and running two production lines concurrently. Times between machine failures are assumed to have independent exponential distributions. A general distribution of repair times is allowed for by using phase-type representations. We show the stationary distribution for these models, and compare stationary distributions with average times within levels over shifts conditional on all machines working at the start of a shift. We compute moments of sojourn times within an arbitrary subset of states, which are relevant when cost is a non-linear function of downtime. We use first passage time results to obtain probabilities of line failure within a shift, and use these results to compare the four strategies.
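A minimal sketch of one modelling ingredient, assuming the simplest possible case: the stationary distribution of a two-state continuous-time Markov chain (machine up/down with exponential failure and repair rates), obtained by solving pi Q = 0 with the normalisation constraint. The rates are illustrative; the paper's models use phase-type repair times and larger state spaces.

```python
import numpy as np

failure_rate, repair_rate = 0.1, 1.0
Q = np.array([[-failure_rate, failure_rate],     # state 0: machine up
              [repair_rate, -repair_rate]])      # state 1: machine down

A = np.vstack([Q.T, np.ones(2)])                 # solve pi Q = 0 with sum(pi) = 1
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("P(up), P(down):", pi)                     # roughly [0.909, 0.091]
```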

18.
Quantile regression methods have been widely used in many research areas in recent years. However, conventional estimation methods for quantile regression models do not guarantee that the estimated quantile curves will be non‐crossing. While there are various methods in the literature to deal with this problem, many of these methods force the model parameters to lie within a subset of the parameter space in order for the required monotonicity to be satisfied. Note that different methods may use different subspaces of the space of model parameters. This paper establishes a relationship between the monotonicity of the estimated conditional quantiles and the comonotonicity of the model parameters. We develop a novel quasi‐Bayesian method for parameter estimation which can be used to deal with both time series and independent statistical data. Simulation studies and an application to real financial returns show that the proposed method has the potential to be very useful in practice.
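A small illustration of the crossing problem itself: two fitted linear conditional quantile functions (with made-up coefficients) can cross inside the observed range of x, violating the monotonicity of quantiles; the check below flags where that happens.

```python
import numpy as np

q25 = lambda x: 1.0 + 0.8 * x     # hypothetical fitted 25% quantile line
q75 = lambda x: 2.0 + 0.3 * x     # hypothetical fitted 75% quantile line

x = np.linspace(0.0, 5.0, 101)
crossed = x[q25(x) > q75(x)]
print("quantile curves cross for x >=", crossed.min() if crossed.size else "never")
```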

19.
In this paper, a Bayesian two-stage D–D optimal design for mixture experimental models under model uncertainty is developed. A Bayesian D-optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second-stage design is then generated according to an optimality procedure that incorporates the improved model from the first-stage data. The results show that a Bayesian two-stage D–D optimal design for mixture experiments under model uncertainty is more efficient than both the Bayesian one-stage D-optimal design and the non-Bayesian one-stage D-optimal design in most situations. Furthermore, simulations are used to obtain a reasonable ratio of the sample sizes between the two stages.

20.
The problem of sample allocation between strata in the case of multiparameter surveys is considered in this article. There are several multivariate sample allocation methods and, moreover, several criteria to deal with in such a case. The maximum coefficient of variation of the estimators of the population means of the characters under study is taken as the optimality criterion. This article studies a group of methods that are easy to implement and do not require complex numerical computation; they are all, however, approximate. Five such methods are presented and compared using a simulation study. Finally, it is shown which methods should be considered when designing a survey in which multivariate sample allocation is to be involved.
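A sketch of one simple approximate rule of the kind such comparisons consider (illustrative, and not necessarily one of the five methods in the paper): compute a Neyman-type allocation separately for each study variable, take the largest resulting share in every stratum, and rescale to the total sample size.

```python
import numpy as np

N = np.array([5000.0, 3000.0, 2000.0])           # stratum sizes
S = np.array([[10.0, 4.0, 6.0],                  # stratum std devs, study variable 1
              [2.0, 8.0, 3.0]])                  # stratum std devs, study variable 2
n_total = 500

shares = (N * S) / (N * S).sum(axis=1, keepdims=True)   # Neyman shares, per variable
alloc = shares.max(axis=0)                               # largest share per stratum
alloc = n_total * alloc / alloc.sum()                    # rescale to total sample size
print(np.round(alloc))                                   # e.g. [235. 209.  56.]
```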
