Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The Owen value is one of the more important solution concepts in cooperative game theory, and assignment games are a natural real-world application of it. Motivated by this, a class of interval assignment games with fuzzy (interval-valued) payoffs is proposed. Using the theory of interval numbers, the Owen value of interval assignment games is studied through an axiomatic characterization, and a numerical example is given to verify its correctness.

2.
A dynamic treatment regime is a sequence of decision rules for assigning treatment based on a patient’s current need for treatment. Dynamic regimes are viewed, by many, as a natural way of treating patients with chronic diseases; that is, treating patients with adaptive, complex, longitudinal treatment regimens. In developing dynamic treatment strategies, treatment-competing events may play an important role in the overall treatment strategy, and their effects on subsequent treatment decisions and eventual outcome should be considered. Treatment-competing events may be defined generally as patient-specific, random events which interrupt the ongoing treatment decision process in a dynamic regime. Treatment-competing events censor later treatment decisions that would otherwise be made on a particular dynamic treatment regime had the competing events not occurred. For example, in therapeutic studies of HIV, physicians may assign treatment based on a patient’s current level of HIV-1 RNA; this defines a treatment assignment rule. However, the presence of opportunistic infections or severe adverse events may preclude strict adherence to the treatment assignment rule. In other contexts, the “censoring”-by-death phenomenon may be viewed as an example of a treatment-competing event for a particular dynamic treatment regime. Treatment-competing events can be built into the dynamic treatment regime framework, and counting processes are a natural mechanism to facilitate this development. In this paper, we develop treatment-competing events in a dynamic infusion policy, a random dynamic treatment regime where multiple infusion treatments are initiated simultaneously and given continuously over time subject to the presence/absence of a treatment-competing event. We illustrate how our methodology may be used to suggest an estimator for a particular causal estimand of recent interest. Finally, we exemplify our methods in a recent study of patients undergoing coronary stent implantation.
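As a toy sketch (not the paper's estimator), the HIV example above can be read as a decision rule that is censored by a competing event; the threshold value and treatment labels below are entirely hypothetical:

```python
def next_treatment(rna_copies, competing_event, threshold=1000, current="A"):
    # Hypothetical dynamic-regime rule: intensify therapy when the
    # current HIV-1 RNA level exceeds a threshold, unless a
    # treatment-competing event (e.g. an opportunistic infection or a
    # severe adverse event) interrupts the decision process, in which
    # case the scheduled decision is censored.
    if competing_event:
        return "interrupted"   # later decisions on the regime are censored
    return "B" if rna_copies > threshold else current

assert next_treatment(5000, competing_event=False) == "B"
assert next_treatment(200, competing_event=False) == "A"
assert next_treatment(5000, competing_event=True) == "interrupted"
```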

3.
A complex experiment with qualitative factors influencing the outcome can be seen as a general ANOVA setup. A design for such an experiment is the assignment of the levels of the factors at which the actual experiment should be performed. In this paper, optimal designs for such experiments are characterized with respect to three different optimality criteria, including the so-called uniform optimality of a design. The main optimization result providing these characterizations can also be applied to more general experiments; particular results on these generalizations are indicated at the end of the paper.

4.
Multi-objective flexible job shop scheduling with fuzzy processing times and fuzzy due dates is a complicated combinatorial optimization problem. In this paper, genetic global search is combined with a local search method to construct an effective memetic algorithm (MA) for simultaneously optimizing fuzzy makespan, average agreement index and minimal agreement index. First, a hybridization of different machine assignment methods with different operation sequencing rules is proposed to generate a high-performance initial population. Second, an algorithm framework similar to the non-dominated sorting genetic algorithm II (NSGA-II) is adopted, in which a well-designed chromosome decoding method and two effective genetic operators are used. Then, a novel fuzzy Pareto dominance relationship based on the possibility degree and a modified crowding distance measure are defined and employed to modify the fast non-dominated sorting. Next, a novel local search is incorporated into NSGA-II, where candidate individuals are selected from the offspring population, via a selection mechanism, to undergo variable neighbourhood local search. In the experiments, the influence of four key parameters is investigated using the Taguchi design-of-experiments method. Finally, comparisons with other existing algorithms on benchmark instances demonstrate the effectiveness of the proposed MA.
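As a small illustration of one NSGA-II ingredient mentioned above, here is the standard crowding distance (before any fuzzy modification of the kind the paper proposes); the objective vectors are made up:

```python
import math

def crowding_distance(front):
    # Standard NSGA-II crowding distance for one non-dominated front:
    # boundary solutions in each objective get infinite distance;
    # interior solutions accumulate, over objectives, the normalized
    # gap between their two nearest neighbours in that objective.
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = math.inf
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist

# a toy non-dominated front with two objectives
d = crowding_distance([(0, 4), (1, 2), (2, 1), (4, 0)])
```

Solutions with larger crowding distance lie in less crowded regions and are preferred when truncating the population.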

5.
李静萍 《统计研究》2008,25(3):65-70
As a non-produced asset, land is an important item in national accounts, yet in China land transactions have never been incorporated into the system of national accounts. In recent years, with the reform of the urban land-use system, the accounting of land transactions has attracted growing attention. This paper discusses the difficulties faced in accounting for urban land-leasing fees in China. On the basis of the accounting principles and methods of the 1993 SNA, it compares two accounting treatments (recording the land-leasing fee as rent, or as the value of a lease) and analyzes the advantages and disadvantages of each within China's system of national accounts, arguing that the former is of greater practical relevance. Methods for allocating the rent over time are also discussed.

6.
Proper scoring rules are devices for encouraging honest assessment of probability distributions. Just like log‐likelihood, which is a special case, a proper scoring rule can be applied to supply an unbiased estimating equation for any statistical model, and the theory of such equations can be applied to understand the properties of the associated estimator. In this paper, we discuss some novel applications of scoring rules to parametric inference. In particular, we focus on scoring rule test statistics, and we propose suitable adjustments to allow reference to the usual asymptotic chi‐squared distribution. We further explore robustness and interval estimation properties, by both theory and simulations.
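As a minimal, hypothetical illustration of the estimating-equation idea (not taken from the paper): the Brier score is a proper scoring rule, and minimizing the empirical mean score gives a scoring-rule analogue of maximum likelihood. For a Bernoulli probability the minimum-score estimator tracks the sample mean:

```python
import numpy as np

def brier(p, y):
    # Brier score: a proper scoring rule for a Bernoulli forecast p
    # against the observed outcome y in {0, 1} (lower is better).
    return (p - y) ** 2

def min_score_estimate(y, grid):
    # Minimum-score estimation: choose the parameter value that
    # minimizes the empirical mean score over a grid.
    scores = [np.mean(brier(p, y)) for p in grid]
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=10_000)
p_hat = min_score_estimate(y, np.linspace(0.001, 0.999, 999))
# propriety: the population Brier score is minimized at the true
# probability, so p_hat lands at the grid point nearest the sample mean
```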

7.
A method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous. Composite hypotheses about the difference in success probabilities are tested, and covariate information is utilized in the analysis. The method is based upon a generalization of Bartlett’s (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a Sequential Probability Ratio Test (SPRT). Treatment assignment rules studied include pure randomization, randomized blocks, and an adaptive rule which tends to assign the superior treatment to the majority of subjects. It is shown that the use of covariate information can result in important reductions in the expected sample size for specified error probabilities, and that the use of covariate information is essential for the elimination of bias when adaptive assignment rules are employed. Designs of the type presented are easily generated, as the termination criterion is the same as for a Wald SPRT of simple hypotheses.
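The SPRT backbone of the method can be sketched for the simple-hypothesis case (Wald's test for a single success probability; the hypothesized values and data stream below are illustrative, not from the paper):

```python
import math

def sprt_bernoulli(outcomes, p0, p1, alpha=0.05, beta=0.05):
    # Wald SPRT of H0: p = p0 vs H1: p = p1 for a Bernoulli stream;
    # stop the first time the log likelihood ratio leaves (logB, logA).
    logA = math.log((1 - beta) / alpha)    # cross upward   -> accept H1
    logB = math.log(beta / (1 - alpha))    # cross downward -> accept H0
    llr = 0.0
    for n, y in enumerate(outcomes, start=1):
        llr += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
        if llr >= logA:
            return "accept H1", n
        if llr <= logB:
            return "accept H0", n
    return "continue", len(outcomes)

# a mostly-successful stream crosses the upper boundary
decision, n_used = sprt_bernoulli([1, 1, 1, 0, 1, 1, 1, 1], p0=0.4, p1=0.7)
```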

8.
In this paper two general rules for checking Yates's algorithm are given. With these rules one can check any of the intermediate columns as well as the last column, using any of the preceding columns including the observation column. Rules of Good and Quenouille are special cases of these rules.
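For reference, Yates's algorithm itself on a 2^2 example in standard order (1), a, b, ab, together with the simplest kind of check the rules above generalize (the first entry of the final column must equal the total of the observations):

```python
def yates(obs):
    # Yates's algorithm for a 2^k factorial in standard order:
    # each pass replaces the column by pairwise sums followed by
    # pairwise differences; after k passes the column holds the
    # grand total followed by the effect contrast totals.
    col = list(obs)
    n = len(col)
    k = n.bit_length() - 1
    assert n == 2 ** k, "length must be a power of two"
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    return col

obs = [10, 14, 12, 20]           # responses for (1), a, b, ab
totals = yates(obs)              # [grand total, A, B, AB contrasts]
assert totals[0] == sum(obs)     # elementary check on the last column
```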

9.
We introduce the concept of inferential distributions corresponding to inference rules. Fiducial and posterior distributions are special cases. Inferential distributions are essentially unique. They correspond to or represent inference rules and are defined on the parameter space. Not all inference rules can be represented by an inferential distribution. A constructive method is given to investigate its existence for any given inference rule.

10.
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles. We provide a novel perspective on the algorithm, which is based on the idea of reparameterizing the distribution in terms of so-called pseudo-max-marginals on nodes and edges of the graph. This viewpoint provides conceptual insight into the max-product algorithm in application to graphs with cycles. First, we prove the existence of max-product fixed points for positive distributions on arbitrary graphs. Next, we show that the approximate max-marginals computed by max-product are guaranteed to be consistent, in a suitable sense to be defined, over every tree of the graph. We then turn to characterizing the nature of the approximation to the MAP assignment computed by max-product. We generalize previous work by showing that for any graph, the max-product assignment satisfies a particular optimality condition with respect to any subgraph containing at most one cycle per connected component. We use this optimality condition to derive upper bounds on the difference between the log probability of the true MAP assignment, and the log probability of a max-product assignment. Finally, we consider extensions of the max-product algorithm that operate over higher-order cliques, and show how our reparameterization analysis extends in a natural manner.
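A minimal sketch of max-product on a cycle-free graph, where the algorithm reduces to the Viterbi recursion and the max-marginals yield the exact MAP (a 3-node chain with binary states; the potentials are made up):

```python
import numpy as np

def map_chain_max_product(node_pot, edge_pot):
    # Max-product on a chain (a tree, so the fixed point is exact):
    # pass max-messages from the last node to the first, keeping
    # backpointers, then read off the MAP assignment forwards.
    n = len(node_pot)
    msg = np.ones_like(node_pot[0])
    back = []
    for i in range(n - 1, 0, -1):
        # scores[a, b] = edge(x_{i-1}=a, x_i=b) * node(x_i=b) * msg(b)
        scores = edge_pot[i - 1] * (node_pot[i] * msg)[None, :]
        back.append(np.argmax(scores, axis=1))
        msg = np.max(scores, axis=1)
    back.reverse()
    x = [int(np.argmax(node_pot[0] * msg))]
    for ptr in back:                 # follow backpointers forwards
        x.append(int(ptr[x[-1]]))
    return x

# toy potentials: pairwise terms favour agreement, last node favours state 1
node_pot = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
edge_pot = [np.array([[0.9, 0.1], [0.1, 0.9]]),
            np.array([[0.8, 0.2], [0.2, 0.8]])]
x_map = map_chain_max_product(node_pot, edge_pot)
```

On graphs with cycles the same message updates are iterated to a fixed point, which is where the paper's reparameterization analysis applies.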

11.
Noting that several rule discovery algorithms in data mining can produce a large number of irrelevant or obvious rules from data, there has been substantial research in data mining that addresses the issue of what makes rules truly 'interesting'. This has resulted in the development of a number of interestingness measures and algorithms that find all interesting rules from data. However, these approaches have the drawback that many of the discovered rules, while supposed to be interesting by definition, may actually (1) be obvious in that they logically follow from other discovered rules or (2) be expected given some of the other discovered rules and some simple distributional assumptions. In this paper we argue that this is a paradox, since rules that are supposed to be interesting are, in reality, uninteresting for the above reason. We show that this paradox exists for various popular interestingness measures and present an abstract characterization of an approach to alleviate the paradox. We finally discuss existing work in data mining that addresses this issue and show how these approaches can be viewed with respect to the characterization presented here.

12.
Consider a planner choosing treatments for observationally identical persons who vary in their response to treatment. There are two treatments with binary outcomes. One is a status quo with known population success rate. The other is an innovation for which the data are the outcomes of an experiment. Karlin and Rubin [1956. The theory of decision procedures for distributions with monotone likelihood ratio. Ann. Math. Statist. 27, 272–299] assumed that the objective is to maximize the population success rate and showed that the admissible rules are the KR-monotone rules. These assign everyone to the status quo if the number of experimental successes is below a specified threshold and everyone to the innovation if the number of experimental successes exceeds the threshold. We assume that the objective is to maximize a concave-monotone function f(·) of the success rate and show that admissibility depends on the curvature of f(·). Let a fractional monotone rule be one where the fraction of persons assigned to the innovation weakly increases with the number of experimental successes. We show that the class of fractional monotone rules is complete if f(·) is concave and strictly monotone. Define an M-step monotone rule to be a fractional monotone rule with an interior fractional treatment assignment for no more than M consecutive values of the number of experimental successes. The M-step monotone rules form a complete class if f(·) is differentiable and has sufficiently weak curvature. Bayes rules and the minimax-regret rule depend on the curvature of the welfare function.
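Schematically (our notation, with a made-up threshold), a KR-monotone rule is the 0/1 special case of a fractional monotone rule, in which the fraction assigned to the innovation weakly increases in the number of successes:

```python
def kr_monotone_rule(threshold, n):
    # KR-monotone rule: everyone to the status quo (fraction 0) below
    # the threshold, everyone to the innovation (fraction 1) at or
    # above it, as a function of the number of successes s = 0..n.
    return [0.0 if s < threshold else 1.0 for s in range(n + 1)]

def is_fractional_monotone(rule):
    # fraction assigned to the innovation weakly increases in s
    return all(a <= b for a, b in zip(rule, rule[1:]))

kr = kr_monotone_rule(threshold=6, n=10)
# a 2-step monotone rule: interior fractions at two consecutive values of s
frac = [0.0, 0.0, 0.0, 0.3, 0.7, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
assert is_fractional_monotone(kr) and is_fractional_monotone(frac)
assert not is_fractional_monotone([0.0, 1.0, 0.5])   # not monotone
```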

13.
We display pseudo-likelihood as a special case of a general estimation technique based on proper scoring rules. Such a rule supplies an unbiased estimating equation for any statistical model, and this can be extended to allow for missing data. When the scoring rule has a simple local structure, as in many spatial models, the need to compute problematic normalising constants is avoided. We illustrate the approach through an analysis of data on disease in bell pepper plants.

14.
An up-and-down (UD) experiment for estimating a given quantile of a binary response curve is a sequential procedure whereby at each step a given treatment level is used and, according to the outcome of the observations, a decision is made (deterministic or randomized) as to whether to maintain the same treatment or increase it by one level or else to decrease it by one level. The design points of such UD rules generate a Markov chain and the mode of its invariant distribution is an approximation to the quantile of interest. The main area of application of UD algorithms is in Phase I clinical trials, where it is of greatest importance to be able to attain reliable results in small-size experiments. In this paper we address the issues of the speed of convergence and the precision of quantile estimates of such procedures, both in theory and by simulation. We prove that the version of UD designs introduced in 1994 by Durham and Flournoy can in a large number of cases be regarded as optimal among all UD rules. Furthermore, in order to improve on the convergence properties of this algorithm, we propose a second-order UD experiment which, instead of making use of just the most recent observation, bases the next step on the outcomes of the last two. This procedure shares a number of desirable properties with the corresponding first-order designs, and also allows greater flexibility. With a suitable choice of the parameters, the new scheme is at least as good as the first-order one and leads to an improvement of the quantile estimates when the starting point of the algorithm is low relative to the target quantile.
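A simulation sketch of a first-order biased-coin UD rule of the Durham-Flournoy type described above (the linear dose-toxicity curve and all tuning numbers are made up): after a toxic response step down one level; after a non-toxic response step up one level with probability b = Γ/(1-Γ), so the design points concentrate around the Γ-quantile.

```python
import random

def biased_coin_ud(tox_prob, gamma, n_steps, start=0, seed=0):
    # First-order biased-coin up-and-down rule targeting the
    # gamma-quantile (gamma < 1/2):
    #   toxic response    -> step down one level
    #   non-toxic response -> step up one level with prob gamma/(1-gamma)
    # Returns the visit counts per level; the mode of the visits
    # approximates the mode of the chain's invariant distribution.
    rng = random.Random(seed)
    b = gamma / (1.0 - gamma)
    top = len(tox_prob) - 1
    level, visits = start, [0] * len(tox_prob)
    for _ in range(n_steps):
        visits[level] += 1
        if rng.random() < tox_prob[level]:      # toxicity observed
            level = max(level - 1, 0)
        elif rng.random() < b:                  # biased coin: move up
            level = min(level + 1, top)
    return visits

tox_prob = [k / 10 for k in range(11)]          # linear dose-toxicity curve
visits = biased_coin_ud(tox_prob, gamma=0.3, n_steps=20_000)
mode_level = visits.index(max(visits))          # should sit near F(d) = 0.3
```

With this curve, F(d) = 0.3 at level 3, so the visit counts pile up around level 3 in a long run.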

15.
Some new censoring schemes for comparing lifetimes of machines were introduced in Srivastava (1987), wherein it was shown that, in general, these improve accuracy and also reduce the total expected time under experiment as well as the expected length of the experiment. Although the above studies were initiated under the generalized Weibull distribution, the detailed development of the theory for the expected time or length was made under the Weibull only. In this paper, for a certain important special case, this development is extended to the generalized Weibull. Also, an error in the above paper is pointed out and corrected, showing that the new schemes are actually superior to what they appeared to be there. Furthermore, under a realistic cost function, the problem of planning such experiments is discussed. An example for machines with a series connection is provided.

16.
The technique of semifolding is used to develop 2^(n-p) designs. Based on the initial analysis, some factors may be more important than others. In other words, the results from analyzing the original experiment may suggest a specific set of effects to be de-aliased. On the other hand, some previously acquired information might be available for specific factors. In these cases, one may desire to isolate the main effects of these factors and each of their two-factor interactions in the experiments. Four rules presented in this article can systematically isolate effects of potential interest, which should serve well for researchers in all disciplines. The combined design, obtained by semifolding, provides estimates of the interactions that involve specific factors so that the alias chains of the two-factor interactions can be broken.

17.
The randomized complete block design is one of the most widely used experimental designs to systematically control the variability arising from known nuisance sources. The balanced mixed effects model is usually appropriate for such an experiment when the blocks used in the experiment are randomly chosen. In applications with k increasing or decreasing treatment levels, there is sometimes prior knowledge about the ordering of the treatment effects. The most commonly seen orderings include simple ordering, simple tree ordering and umbrella orderings with known or unknown peaks. A natural question is how to incorporate the prior ordering information in estimating the parameters in a balanced mixed effects model so that the estimated treatment effects are consistent with the prior information and the estimated variances of the block effects and experiment errors are nonnegative. In this paper we derive the maximum likelihood estimators of the parameters in a balanced mixed model subject to any partial ordering of the treatment effects, which includes the usual maximum likelihood estimators as a special case. An example is provided to illustrate the results.
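The computational core of order-restricted estimation under a simple ordering is the pool-adjacent-violators algorithm; a generic sketch follows (not the paper's full mixed-model procedure):

```python
def pava(y, w=None):
    # Pool-adjacent-violators: weighted least-squares fit subject to a
    # nondecreasing (simple) ordering. Adjacent blocks that violate the
    # ordering are merged and replaced by their weighted mean.
    if w is None:
        w = [1.0] * len(y)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v, ww, s = vals.pop(), wts.pop(), sizes.pop()
            merged_w = wts[-1] + ww
            vals[-1] = (wts[-1] * vals[-1] + ww * v) / merged_w
            wts[-1] = merged_w
            sizes[-1] += s
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)
    return out

assert pava([1, 3, 2, 4]) == [1, 2.5, 2.5, 4]   # one violating pair pooled
assert pava([3, 2, 1]) == [2, 2, 2]             # fully reversed -> one block
```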

18.
The analysis of a general k-factor factorial experiment having unequal numbers of observations per cell is complex. For the special case of a 2^k experiment with unequal numbers of observations per cell, the method of unweighted means provides a simple vehicle for analysis that requires no matrix inversion and can be used with existing software programs for the analysis of balanced data. All numerator sums of squares for testing main effects and interactions are chi-squared with one degree of freedom. In addition, for tests having one degree of freedom in any factorial experiment, the method of unweighted means may be modified to yield exact tests.
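A sketch of the unweighted-means computation for one 1-df effect in a 2^2 layout, using the textbook version of the method (apply the ±1 contrast to the cell means, then scale by the harmonic mean of the cell sizes as if the design were balanced); all numbers are invented:

```python
def harmonic_mean(ns):
    return len(ns) / sum(1.0 / n for n in ns)

def unweighted_means_ss(cell_means, cell_ns, contrast):
    # Unweighted-means sum of squares for a 1-df effect in a 2^k
    # factorial: the +/-1 contrast is applied to the cell MEANS, and
    # the harmonic mean of the cell sizes plays the role of the common
    # cell size of a balanced design. No matrix inversion is needed.
    c = sum(ci * mi for ci, mi in zip(contrast, cell_means))
    n_h = harmonic_mean(cell_ns)
    return n_h * c * c / len(cell_means)

# hypothetical 2^2 data, cells in standard order (1), a, b, ab
means = [2.0, 6.0, 4.0, 10.0]
ns = [2, 3, 4, 6]                       # unequal cell counts
ss_a = unweighted_means_ss(means, ns, contrast=[-1, +1, -1, +1])
```

For equal cell sizes the harmonic mean reduces to the common size and the formula agrees with the balanced-data sum of squares.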

19.
A new statistical approach is developed for estimating the carcinogenic potential of drugs and other chemical substances used by humans. Improved statistical methods are developed for rodent tumorigenicity assays that have interval sacrifices but not cause-of-death data. For such experiments, this paper proposes a nonparametric maximum likelihood estimation method for estimating the distributions of the time to onset of and the time to death from the tumour. The log-likelihood function is optimized using a constrained direct search procedure. Using the maximum likelihood estimators, the number of fatal tumours in an experiment can be imputed. By applying the procedure proposed to a real data set, the effect of calorie restriction is investigated. In this study, we found that calorie restriction delays the tumour onset time significantly for pituitary tumours. The present method can result in substantial economic savings by relieving the need for a case-by-case assignment of the cause of death or context of observation by pathologists. The ultimate goal of the method proposed is to use the imputed number of fatal tumours to modify Peto's International Agency for Research on Cancer test for application to tumorigenicity assays that lack cause-of-death data.

20.
In this paper, we obtain minimax and near-minimax nonrandomized decision rules under zero–one loss for a restricted location parameter of an absolutely continuous distribution. Two types of rules are addressed: monotone and nonmonotone. A complete-class theorem is proved for the monotone case. This theorem extends the previous work of Zeytinoglu and Mintz (1984) to the case of 2e-MLR sampling distributions. A class of continuous monotone nondecreasing rules is defined. This class contains the monotone minimax rules developed in this paper. It is shown that each rule in this class is Bayes with respect to nondenumerably many priors. A procedure for generating these priors is presented. Nonmonotone near-minimax almost-equalizer rules are derived for problems characterized by non-2e-MLR distributions. The derivation is based on the evaluation of a distribution-dependent function Q_c. The methodological importance of this function is that it is used to unify the discrete- and continuous-parameter problems, and to obtain a lower bound on the minimax risk for the non-2e-MLR case.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号