11.
The aim of this paper is to develop a general, unified approach to change point problems in mathematical statistics, based on partial estimation functions which we call the “Z-process”. The proposed method can be applied not only to ergodic models but also to models in which the Fisher information matrix is random. Applications to concrete models, including a parametric model for the volatilities of diffusion processes, are presented. Computationally intensive simulations are performed for the randomly time-transformed Brownian bridge process that appears as the limit of the proposed test statistics.
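The limit process above is a randomly time-transformed Brownian bridge; as a minimal, illustrative sketch (assuming the identity time transform, i.e. a standard bridge, with arbitrary grid and simulation sizes), the sup-statistic of such a limit can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_sup(n_grid=1000, n_sim=2000):
    # Simulate sup_t |B(t)| for a standard Brownian bridge B on [0, 1],
    # the classical limit of CUSUM-type change point statistics.
    dt = 1.0 / n_grid
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_sim, n_grid))
    w = np.cumsum(increments, axis=1)          # Brownian motion paths W(t)
    t = np.arange(1, n_grid + 1) * dt
    bridge = w - t * w[:, -1:]                 # B(t) = W(t) - t * W(1)
    return np.abs(bridge).max(axis=1)

sups = brownian_bridge_sup()
# The simulated 95% point should fall near 1.36 (the exact
# Kolmogorov-distribution quantile is about 1.358).
print(np.quantile(sups, 0.95))
```

Simulated critical values of this kind are what a computer-intensive calibration of the test would tabulate.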
12.
Ilia Tsetlin 《Theory and Decision》2006,61(1):51-62
Designing a mechanism that provides a direct incentive for an individual to report her utility function over several alternatives is a difficult task. A framework for such mechanism design is the following: an individual (a decision maker) faces an optimization problem (e.g., maximization of expected utility), and a mechanism designer observes the decision maker's action. The mechanism reveals the individual's utility truthfully if the mechanism designer, having observed the decision maker's action, can infer the decision maker's utilities over several alternatives. This paper studies an example of such a mechanism and discusses its application to the problem of optimal social choice. Under certain simplifying assumptions about individuals' utility functions and about how voters choose their voting strategies, this mechanism selects the alternative that maximizes Harsanyi's social utility function and is Pareto-efficient.
13.
We consider the production process of a manufacturing workcell. Production items obtained from an outside supplier are not adequately processed with respect to quality. Whether produced items meet the required quality depends on the workcell state, which degrades with the number of items produced. The workcell is completely restored by restoring operations that return it to an as-new condition. We introduce a method for deriving the restoration period that maximizes the probability that produced items meet the required quality. The derivation rests on a nontraditional approach, namely the simplest-strategies method, used to formulate the problem presented here. The implementation of this optimization approach is illustrated with an example.
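The abstract does not specify the degradation model, so as a purely hypothetical sketch of the trade-off involved, assume the i-th item after restoration meets quality with probability q0·ρ^i and each restoration consumes r item-slots of production time; the optimal period then maximizes the long-run good-item rate:

```python
# Hypothetical model (not the paper's): quality decays geometrically with
# workcell age, and restoration costs r production slots.
def good_rate(k, q0=0.99, rho=0.97, r=5):
    # Expected number of good items in one cycle of k items,
    # sum_{i=0}^{k-1} q0 * rho**i, divided by the cycle length k + r.
    expected_good = q0 * (1.0 - rho**k) / (1.0 - rho)
    return expected_good / (k + r)

# A short restoration period wastes slots on restorations; a long one
# produces many low-quality items. The optimum lies in between.
best_k = max(range(1, 200), key=good_rate)
print(best_k, good_rate(best_k))
```

All parameter values here are illustrative; the paper derives the period from its simplest-strategies formulation instead.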
14.
While majority cycles may pose a threat to democratic decision making, actual decisions based inadvertently upon an incorrect majority preference relation may be far more expensive to society. We study majority rule in both a statistical sampling and a Bayesian inference framework. Based on any given paired comparison probabilities or ranking probabilities in a population (i.e., culture) of reference, we derive upper and lower bounds on the probability of a correct or incorrect majority social welfare relation in a random sample (with replacement). We also present upper and lower bounds on the probabilities of majority preference relations in the population given a sample, using Bayesian updating. These bounds make it possible to map quite precisely the entire picture of possible majority preference relations as well as their probabilities. We illustrate our results using survey data.
Received: 13 November 2000/Accepted: 19 March 2002
This collaborative work was carried out while Regenwetter was a faculty member at the Fuqua School of Business, Duke University.
We thank Fuqua for sponsoring our collaboration and the National Science Foundation for grant SBR-97-30076 to Michel Regenwetter.
We are indebted to the editor and the referees, as well as to Jim Adams, Bob Clemen, Bernie Grofman, Bob Nau, Saša Pekeč,
Jim Smith and Bob Winkler for helpful comments and suggestions.
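The sampling question in this item, namely how likely a random sample (with replacement) is to reproduce the population's majority preference relation, can be illustrated by Monte Carlo under a hypothetical culture (the probabilities below are purely illustrative, and simulation replaces the paper's analytic bounds):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Hypothetical culture over the six linear orders of candidates {a, b, c}.
orders = list(permutations("abc"))
culture = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])

def majority_relation(counts):
    # Sign of the majority margin for each of the three candidate pairs.
    rel = []
    for x, y in [("a", "b"), ("a", "c"), ("b", "c")]:
        margin = sum(c for o, c in zip(orders, counts) if o.index(x) < o.index(y)) \
               - sum(c for o, c in zip(orders, counts) if o.index(y) < o.index(x))
        rel.append(np.sign(margin))
    return tuple(rel)

true_rel = majority_relation(culture)    # the population majority relation

def p_correct(n_voters, n_sim=2000):
    # Monte Carlo estimate of the probability that a random sample
    # (with replacement) reproduces the population majority relation.
    hits = sum(majority_relation(rng.multinomial(n_voters, culture)) == true_rel
               for _ in range(n_sim))
    return hits / n_sim

print(p_correct(11), p_correct(101))     # accuracy grows with the sample size
```

Odd sample sizes are used so that pairwise majority ties cannot occur.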
15.
Ilia Vonta 《Journal of applied statistics》2010,37(8):1419-1420
16.
Ilia Tsetlin, Michel Regenwetter, Bernard Grofman 《Social Choice and Welfare》2003,21(3):387-398
Many papers have studied the probability of majority cycles, also called the Condorcet paradox, using the impartial culture or related distributional assumptions. While it is widely acknowledged that the impartial culture is unrealistic, conclusions drawn from it are nevertheless still widely advertised and reproduced in textbooks. We demonstrate that the impartial culture is the worst-case scenario among a very broad range of possible voter preference distributions. More specifically, any deviation from the impartial culture over linear orders reduces the probability of majority cycles in infinite samples, unless the culture from which we sample is itself inherently intransitive. We prove this statement for the case of three candidates and provide arguments for the conjecture that it extends to any number of candidates.

All three authors thank the Fuqua School of Business for supporting their research collaboration. Regenwetter and Grofman gratefully acknowledge the support of the National Science Foundation through grant #SBR-9730076 on Probabilistic Models of Social Choice (Methodology, Measurement and Statistics program). We are grateful to the referees and we thank Saša Pekeč for critical comments on an earlier draft. Grofman thanks Scott L. Feld for numerous reminders about the implausibility of the impartial culture assumption, which helped lead to this paper.
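The worst-case claim can be checked numerically for three candidates: sampling voters from the impartial culture (uniform over the six linear orders) yields a markedly higher cycle rate than sampling from any transitive non-uniform culture. A minimal Monte Carlo sketch (simulation sizes and the biased culture are illustrative choices, not from the paper):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
orders = list(permutations("abc"))      # the six linear orders on {a, b, c}

def has_majority_cycle(counts):
    def beats(x, y):
        return sum(c for o, c in zip(orders, counts) if o.index(x) < o.index(y)) > \
               sum(c for o, c in zip(orders, counts) if o.index(y) < o.index(x))
    # With three candidates, a majority cycle is exactly one of the two
    # intransitive patterns a>b>c>a or a<b<c<a.
    return (beats("a", "b") and beats("b", "c") and beats("c", "a")) or \
           (beats("b", "a") and beats("c", "b") and beats("a", "c"))

def cycle_probability(n_voters, n_sim=20000, p=None):
    if p is None:                       # impartial culture: uniform over orders
        p = np.full(6, 1.0 / 6.0)
    hits = sum(has_majority_cycle(rng.multinomial(n_voters, p))
               for _ in range(n_sim))
    return hits / n_sim

print(cycle_probability(25))            # impartial culture, 25 voters (~8-9%)
# A transitive deviation from the impartial culture lowers the cycle rate:
print(cycle_probability(25, p=np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])))
```

An odd number of voters is used so that no pairwise contest can tie.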
17.
A subthreshold signal is transmitted through a channel and may be detected when noise with a known structure, proportional to some level, is added to the data. There is an optimal noise level, called the stochastic resonance level, that corresponds to the minimum variance of the estimators in the problem of recovering unobservable signals. The stochastic resonance effect has been demonstrated for several noise structures. Here we study the case where the noise is a Markovian process. We propose consistent estimators of the subthreshold signal and further solve a hypothesis-testing problem. We also discuss evidence of stochastic resonance for both the estimation and the hypothesis-testing problems via examples.
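The stochastic resonance effect described above can be illustrated in the simplest setting of i.i.d. Gaussian noise (the paper studies Markovian noise; the threshold, signal value, and sample sizes below are all illustrative): the signal is observed only through threshold crossings, and the estimation error is minimized at an intermediate noise level.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
PHI = NormalDist()                       # standard normal CDF / quantile

def estimate_signal(s, sigma, tau=1.0, n=2000):
    # We observe only whether signal-plus-noise crosses the threshold tau.
    crossings = (s + rng.normal(0.0, sigma, n) > tau)
    p_hat = float(np.clip(crossings.mean(), 1e-4, 1 - 1e-4))
    # Invert P(cross) = 1 - Phi((tau - s) / sigma) for the signal s.
    return tau - sigma * PHI.inv_cdf(1.0 - p_hat)

def rmse(sigma, s=0.5, n_rep=200):
    errors = np.array([estimate_signal(s, sigma) - s for _ in range(n_rep)])
    return float(np.sqrt(np.mean(errors ** 2)))

for sigma in (0.1, 0.5, 2.0):
    print(sigma, rmse(sigma))
# The error is smallest at a moderate noise level: with too little noise the
# subthreshold signal almost never crosses tau (no information); with too
# much, the crossing rate is nearly flat in s.
```

The intermediate level minimizing the error plays the role of the stochastic resonance level in this toy model.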
18.
Supersaturated designs are factorial designs in which the number of potential effects is greater than the run size. They are commonly used in screening experiments with the aim of identifying the dominant active factors at low cost. However, an important but poorly developed research field is the analysis of such designs with non-normal responses. In this article, we develop a variable selection strategy through a modification of the PageRank algorithm, which is commonly used in the Google search engine for ranking web pages. The proposed method incorporates an appropriate information-theoretic measure into this algorithm, and as a result it can be used efficiently for factor screening. A noteworthy advantage of this procedure is that it allows the use of supersaturated designs for analyzing discrete data, so a generalized linear model is assumed. As shown by a thorough simulation study, in which the Type I and Type II error rates are computed for a wide range of underlying models and designs, the presented approach is quite advantageous and effective.
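The building block being modified here is the PageRank power iteration itself; a plain, self-contained sketch over an arbitrary nonnegative link-weight matrix is shown below (the information-theoretic weighting of the paper is not reproduced, and the toy weights are invented for illustration):

```python
import numpy as np

def pagerank(W, d=0.85, tol=1e-10, max_iter=1000):
    # Power iteration for PageRank on a nonnegative weight matrix W,
    # where W[i, j] is the weight of the link from node i to node j.
    n = W.shape[0]
    row_sums = W.sum(axis=1, keepdims=True)
    # Dangling nodes (all-zero rows) are treated as linking to every node.
    P = np.where(row_sums > 0, W / np.where(row_sums == 0, 1, row_sums), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = (1 - d) / n + d * (r @ P)
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# Toy graph: node 0 receives the heaviest links, so it ranks first.
W = np.array([[0, 1, 1],
              [3, 0, 1],
              [3, 1, 0]], dtype=float)
scores = pagerank(W)
print(scores.argsort()[::-1])           # nodes ranked best to worst
```

In a factor-screening variant, the nodes would be factors and the link weights some measure of association between them; ranking the stationary scores then orders the factors.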
19.
Abstract. This paper studies the representation and large-sample consistency of non-parametric maximum likelihood estimators (NPMLEs) of an unknown baseline continuous cumulative-hazard-type function and of a group survival-difference parameter, based on right-censored two-sample survival data whose marginal survival functions are assumed to follow a transformation model, a slight generalization of the class of frailty survival regression models. The paper's main theoretical results are the existence and unique a.s. limit, characterized variationally, of the NPMLE of the baseline nuisance function in an appropriately defined neighbourhood of the true function when the group difference parameter is fixed; this leads to consistency of the NPMLE when the difference parameter is fixed at a consistent estimator of its true value. The joint NPMLE is also shown to be consistent. An algorithm for computing it numerically, based directly on the likelihood equations instead of the expectation-maximization (EM) algorithm, is illustrated with real data.
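The baseline estimand above is a cumulative-hazard-type function; as a hedged point of reference (the classical one-sample analogue, not the paper's joint NPMLE under a transformation model), the Nelson–Aalen estimator from right-censored data can be written in a few lines:

```python
import numpy as np

def nelson_aalen(times, events):
    # Classical one-sample cumulative-hazard estimator from right-censored
    # data: at each observed event time, add (event indicator) / (number at risk).
    order = np.argsort(times, kind="stable")
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    at_risk = len(times) - np.arange(len(times))   # risk set just before each time
    cum_hazard = np.cumsum(events / at_risk)
    mask = events == 1
    return times[mask], cum_hazard[mask]

# Tiny example: events at 2, 3 and 5; censoring at 3 and 8.
t, H = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(list(zip(t, H)))   # cumulative hazard evaluated at each event time
```

The paper's estimator generalizes this idea to two samples linked by a transformation model, solved directly from the likelihood equations.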
20.
The penalized likelihood approach of Fan and Li (2001, 2002) differs from traditional variable selection procedures in that it deletes non-significant variables by estimating their coefficients as zero. Nevertheless, the desirable performance of this shrinkage methodology relies heavily on an appropriate choice of the tuning parameter involved in the penalty functions. In this work, new estimates of the norm of the error are first proposed through the use of Kantorovich inequalities and subsequently applied in the frailty-models framework. These estimates are used to derive a tuning parameter selection procedure for penalized frailty models with clustered data. In contrast with standard methods, the proposed approach does not depend on resampling and therefore yields a considerable gain in computational time; it also produces improved results. Simulation studies are presented to support the theoretical findings, and two real medical data sets are analyzed.
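The paper's selector rests on Kantorovich-inequality error estimates, which are not reproduced here. As a generic illustration of the idea of a non-resampling tuning-parameter criterion, the sketch below runs a simple BIC over a λ grid for a soft-thresholded (lasso-type) estimate on an orthonormal-design toy problem; all settings are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def soft_threshold(z, lam):
    # Lasso-type penalized estimate under an orthonormal design:
    # shrink towards zero and set small coefficients exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Toy data: 10 candidate coefficients, only the first three nonzero.
beta = np.array([3.0, 2.5, 2.0] + [0.0] * 7)
n = 50
X = np.linalg.qr(rng.normal(size=(n, 10)))[0]   # orthonormal columns
y = X @ beta + rng.normal(0.0, 0.3, n)
z = X.T @ y                                      # the OLS estimate here

def bic(lam):
    # A non-resampling tuning criterion: penalize fit by model size.
    b = soft_threshold(z, lam)
    rss = np.sum((y - X @ b) ** 2)
    return n * np.log(rss / n) + np.count_nonzero(b) * np.log(n)

grid = np.linspace(0.01, 1.5, 100)
lam_star = grid[np.argmin([bic(g) for g in grid])]
selected = np.nonzero(soft_threshold(z, lam_star))[0]
print(lam_star, selected)
```

Like the paper's procedure, the whole selection costs one pass over a grid rather than repeated resampled fits; the criterion itself, however, is a stand-in, not the Kantorovich-based one.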