Similar Documents (20 results)
1.
Although efficiency robust tests are preferred for genetic association studies when the genetic model is unknown, their statistical properties have been studied for different study designs separately and only under special situations. We study some statistical properties of the maximin efficiency robust test and a maximum-type robust test (MAX3) under a general setting and obtain unified results. The results can also be applied to testing hypotheses with a constrained two-dimensional parameter space. The results are applied to genetic association studies using case–parents trio data.
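A minimal sketch of the MAX3 idea for a single-marker case-control table (the paper itself treats case-parents trio data, which calls for TDT-type statistics; the genotype counts below are hypothetical): the Cochran-Armitage trend statistic is computed under recessive, additive and dominant scores and the largest absolute value is taken.

```python
import numpy as np

def trend_z(r, s, x):
    """Cochran-Armitage trend statistic for case counts r, control counts s, scores x."""
    r, s, x = np.asarray(r, float), np.asarray(s, float), np.asarray(x, float)
    n = r + s                                   # genotype totals
    R, S, N = r.sum(), s.sum(), (r + s).sum()
    num = np.sum(x * (S * r - R * s))
    var = R * S * (N * np.sum(x**2 * n) - np.sum(x * n)**2) / N
    return num / np.sqrt(var)

def max3(r, s):
    """MAX3: maximum absolute trend statistic over recessive, additive, dominant scores."""
    scores = {"recessive": (0, 0, 1), "additive": (0, 0.5, 1), "dominant": (0, 1, 1)}
    return max(abs(trend_z(r, s, x)) for x in scores.values())

# Hypothetical genotype counts (AA, Aa, aa) for cases and controls.
print(max3(r=(100, 240, 160), s=(130, 250, 120)))
```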

2.
The poor performance of the Wald method for constructing confidence intervals (CIs) for a binomial proportion has been demonstrated in a vast literature. The related problem of sample size determination needs to be updated, and comparative studies are essential to understanding the performance of alternative methods. In this paper, the sample size is obtained for the Clopper–Pearson, Bayesian (Uniform and Jeffreys priors), Wilson, Agresti–Coull, Anscombe, and Wald methods. Two two-step procedures are used: one based on the expected length (EL) of the CI and another on its first-order approximation. In the first step, all possible solutions that satisfy the optimal criterion are obtained. In the second step, a single solution is proposed according to a new criterion (e.g. highest coverage probability (CP)). In practice, a sample size reduction is often desired; therefore, we explore the behavior of the methods when losses of 30% and 50% are admitted. For all the methods, the ELs are inflated, as expected, but the coverage probabilities remain close to the original target (with few exceptions). It is not easy to suggest a method that is optimal throughout the range (0, 1) for p. Depending on whether the goal is to achieve a CP approximately equal to or above the nominal level, different recommendations are made.
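As a hedged illustration of why the choice of CI method matters for sample size work, the sketch below computes the exact coverage probability of the Wald and Wilson intervals for an assumed n and p; it is not the paper's two-step procedure.

```python
import numpy as np
from scipy import stats

def coverage(n, p, method="wald", conf=0.95):
    """Exact coverage probability of a binomial CI method at sample size n and true p."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    k = np.arange(n + 1)
    phat = k / n
    if method == "wald":
        half = z * np.sqrt(phat * (1 - phat) / n)
        lo, hi = phat - half, phat + half
    else:  # Wilson score interval
        centre = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
        half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        lo, hi = centre - half, centre + half
    covered = (lo <= p) & (p <= hi)
    return np.sum(stats.binom.pmf(k[covered], n, p))

for m in ("wald", "wilson"):
    print(m, round(coverage(n=50, p=0.1, method=m), 3))
```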

3.
According to the law of likelihood, statistical evidence for one (simple) hypothesis against another is measured by their likelihood ratio. When the experimenter can choose between two or more experiments (of approximately the same cost) to obtain data, he would want to know which experiment provides (on average) stronger true evidence for one hypothesis against another. In this article, after defining a pre-experimental criterion for the potential strength of evidence provided by an experiment, based on entropy distance, we compare the potential statistical evidence in lower record values with that in the same number of iid observations from the same parent distribution. We also establish a relation between Fisher information and Kullback–Leibler distance.
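A toy sketch of the "entropy distance" notion: the expected log-likelihood ratio under the true hypothesis is the Kullback–Leibler divergence, estimated here by Monte Carlo for two normal hypotheses and checked against its closed form (the record-value comparison of the paper is not reproduced).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu1, mu0, sigma = 1.0, 0.0, 1.0

x = rng.normal(mu1, sigma, size=200_000)        # data generated under H1
log_lr = stats.norm.logpdf(x, mu1, sigma) - stats.norm.logpdf(x, mu0, sigma)

print("Monte Carlo KL(H1||H0):", log_lr.mean())
print("Closed form (mu1-mu0)^2/(2*sigma^2):", (mu1 - mu0)**2 / (2 * sigma**2))
```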

4.
Super-saturated designs, in which the number of factors under investigation exceeds the number of experimental runs, have been suggested for screening experiments initiated to identify important factors for future study. Most of the designs suggested in the literature are based on natural but ad hoc criteria. The "average s2" criterion introduced by Booth and Cox (Technometrics 4 (1962) 489) is a popular choice. Here, a decision theoretic approach is pursued, leading to an optimality criterion based on misclassification probabilities in a Bayesian model. In certain cases, designs optimal under the average s2 criterion are also optimal for the new criterion. Necessary conditions for this to occur are presented. In addition, the new criterion often provides a strict preference between designs tied under the average s2 criterion, which is advantageous in numerical search as it reduces the number of local minima.
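A small sketch of the Booth–Cox average s2 criterion: the mean of the squared inner products between pairs of factor columns of a two-level design matrix. The 6-run, 10-factor design below is randomly generated purely for illustration.

```python
import numpy as np
from itertools import combinations

def average_s2(X):
    """Mean of s_ij^2 over all pairs of factor columns, s_ij = x_i'x_j."""
    cols = range(X.shape[1])
    s2 = [float(X[:, i] @ X[:, j]) ** 2 for i, j in combinations(cols, 2)]
    return np.mean(s2)

# Toy supersaturated design: 6 runs, 10 two-level factors (random, for illustration only).
rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=(6, 10))
print(average_s2(X))
```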

5.
Properties of the localized regression tree splitting criterion, described in Bremner & Taplin (2002) and referred to as the BT method, are explored in this paper and compared to those of Clark & Pregibon's (1992) criterion (the CP method). These properties indicate why the BT method can result in superior trees. This paper shows that the BT method exhibits a weak bias towards edge splits, and the CP method exhibits a strong bias towards central splits in the presence of main effects. A third criterion, called the SM method, that exhibits no bias towards a particular split position is introduced. The SM method is a modification of the BT method that uses more symmetric local means. The BT and SM methods are more likely to split at a discontinuity than the CP method because of their relatively low bias towards particular split positions. The paper shows that the BT and SM methods can be used to discover discontinuities in the data, and that they offer a way of producing a variety of different trees for examination or for tree averaging methods.
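For orientation, the sketch below performs a generic least-squares split search on a single predictor, which is essentially the CART-style (CP-like) criterion; it does not implement the BT or SM local-mean modifications, but shows the kind of computation those criteria adjust.

```python
import numpy as np

def best_split(x, y):
    """Return (split point, pooled within-node SSE) minimising the least-squares criterion."""
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y)[order]
    best = (None, np.inf)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((x[i - 1] + x[i]) / 2, sse)
    return best

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.4, 1.0, 3.0) + rng.normal(0, 0.3, 200)   # discontinuity at x = 0.4
print(best_split(x, y))
```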

6.
In this article, we introduce genetic algorithms (GAs) as a viable tool for estimating parameters in a wide array of statistical models. We performed simulation studies that compared the bias and variance of GAs with those of classical tools, namely the steepest descent, Gauss–Newton, Levenberg–Marquardt, and derivative-free methods. In our simulation studies, we used the least squares criterion as the optimizing function. The performance of the GAs and the classical methods was compared under the logistic regression model, a non-linear Gaussian model, and a non-linear non-Gaussian model. We report that the GAs' performance is competitive with that of the classical methods under these three models.
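A bare-bones GA minimising a least-squares criterion for a simple non-linear model is sketched below; the model, population size, crossover and mutation settings are arbitrary choices for illustration and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 5, 60)
y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.05, x.size)   # simulated data, y = a*exp(-b*x) + noise

def sse(theta):                                   # least-squares objective
    a, b = theta
    return np.sum((y - a * np.exp(-b * x)) ** 2)

pop = rng.uniform(0, 3, size=(50, 2))             # initial population of (a, b) candidates
for _ in range(200):
    fitness = np.array([sse(t) for t in pop])
    parents = pop[np.argsort(fitness)[:25]]       # selection: keep the best half
    mates = parents[rng.integers(0, 25, size=25)]
    children = (parents + mates) / 2              # crossover: blend two parents
    children += rng.normal(0, 0.05, children.shape)   # mutation: small random perturbation
    pop = np.vstack([parents, children])

print("GA estimate (a, b):", pop[np.argmin([sse(t) for t in pop])])
```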

7.
8.
In this paper, the statistical inference of the unknown parameters of a two-parameter inverse Weibull (IW) distribution based on a progressive type-II censored sample is considered. The maximum likelihood estimators (MLEs) cannot be obtained in explicit form, hence approximate MLEs, which are in explicit form, are proposed. The Bayes and generalized Bayes estimators for the IW parameters and the reliability function based on the squared error and Linex loss functions are provided. These estimators cannot be obtained explicitly, hence Lindley's approximation is used to obtain them. Furthermore, the highest posterior density credible intervals of the unknown parameters are computed using a Gibbs sampling technique, and an optimal censoring scheme is suggested using an optimality criterion. Simulation experiments are performed to assess the effectiveness of the different estimators. Finally, two data sets are analysed for illustrative purposes.
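A hedged sketch of maximum likelihood estimation for the two-parameter inverse Weibull density f(x; a, b) = a*b*x^(-b-1)*exp(-a*x^(-b)) from a complete (uncensored) simulated sample; the paper's progressive type-II censoring, approximate MLEs and Bayesian machinery are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
a_true, b_true = 1.5, 2.0
u = rng.uniform(size=300)
x = (-np.log(u) / a_true) ** (-1.0 / b_true)      # inverse-CDF simulation from F(x) = exp(-a*x^(-b))

def neg_loglik(theta):
    a, b = np.exp(theta)                           # optimise on the log scale to keep a, b > 0
    return -np.sum(np.log(a) + np.log(b) - (b + 1) * np.log(x) - a * x ** (-b))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (a, b):", np.exp(fit.x))
```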

9.
This paper considers the problem of the design and analysis of experiments for comparing several treatments with a control when heterogeneity is to be eliminated in two directions. A class of row-column designs which are balanced for treatment vs. control comparisons (referred to as balanced treatment vs. control row-column or BTCRC designs) is proposed. These designs are analogs of the so-called BTIB designs proposed by Bechhofer and Tamhane (Technometrics 23 (1981) 45–57) for eliminating heterogeneity in one direction. Some methods of analysis and construction of these designs are given. A measure of efficiency of BTCRC designs in terms of the A-optimality criterion is derived and illustrated by several examples.

10.
In clinical studies, pairwise comparisons are frequently performed to examine differences in efficacy between treatments. Statistical methods for pairwise comparisons are available when treatment responses are measured on an ordinal scale; the Wilcoxon–Mann–Whitney test and the latent normal model are popular examples. However, these procedures cannot be used to compare treatments in parallel groups (a two-way design) when the overall type I error must be controlled. In this paper, we explore statistical approaches to the pairwise testing of treatments that satisfy the requirements of a two-way layout. The results of our simulation indicate that the latent normal approach is superior to the Wilcoxon–Mann–Whitney test. Clinical examples are used to illustrate our suggested testing methods.
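A minimal example of a single pairwise Wilcoxon–Mann–Whitney comparison on made-up ordinal scores; it does not implement the two-way latent normal procedure or the multiplicity control discussed in the paper.

```python
from scipy.stats import mannwhitneyu

# Hypothetical ordinal efficacy scores for two treatments.
treatment_a = [3, 4, 4, 5, 2, 4, 5, 3]
treatment_b = [2, 3, 2, 4, 1, 3, 2, 3]

stat, p = mannwhitneyu(treatment_a, treatment_b, alternative="two-sided")
print("U statistic:", stat, "p-value:", p)
```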

11.
In Computer Experiments (CE), a careful selection of the design points is essential for predicting the system response at untried points, based on the values observed at tried points. In physical experiments, the protocol is based on Design of Experiments, a methodology whose basic principles are questioned in CE. When the responses of a CE are modeled as jointly Gaussian random variables with their covariance depending on the distance between points, the use of so-called space-filling designs (random designs, stratified designs and Latin Hypercube designs) is a common choice, because it is expected that the nearer the untried point is to the design points, the better the prediction. In this paper we focus on the class of Latin Hypercube (LH) designs. The behavior of various LH designs is examined under the Gaussian assumption with exponential correlation, in order to minimize the total prediction error at the points of a regular lattice. In such a special case, the problem reduces to an algebraic statistical model, which is solved using both symbolic algebraic software and statistical software. We provide closed-form computation of the variance of the Gaussian linear predictor as a function of the design, in order to make a comparison between LH designs. In principle, the method applies to any number of factors and any number of levels, and also to classes of designs other than LHs. In our current implementation, the applicability is limited by the high computational complexity of the algorithms involved.
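The sketch below assembles the ingredients mentioned above under stated assumptions: a Latin Hypercube design in [0,1]^2, an exponential correlation function, and the variance of the simple-kriging predictor at an untried point; the design size and correlation rate are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(5)
design = qmc.LatinHypercube(d=2, seed=rng).random(n=10)      # 10 design points in [0,1]^2

def corr(A, B, theta=3.0):
    """Exponential correlation R_ij = exp(-theta * ||a_i - b_j||_1)."""
    d = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)
    return np.exp(-theta * d)

R = corr(design, design)
x0 = np.array([[0.5, 0.5]])                                   # untried point
r = corr(design, x0)

# Simple-kriging prediction variance with process variance sigma^2 = 1.
var = 1.0 - float(r.T @ np.linalg.solve(R, r))
print("Prediction variance at x0:", var)
```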

12.
Ever since R. A. Fisher published his 1936 article, "Has Mendel's Work Been Rediscovered?", historians of both biology and statistics have been fascinated by the surprisingly high conformity between Gregor (Johann) Mendel's observed and expected ratios in his famous experiments with peas. Fisher's calculated χ2 statistic for the experiments, taken as a whole, suggested that results on a par with or better than those Mendel reported could only be expected to occur about three times in every 100,000 attempts. The ensuing controversy as to whether or not the good Father "sophisticated" his data has continued to this very day. In recent years the controversy has focused upon the more technical question of what underlying genetic arrangement Mendel actually studied.

The statistical issues of the controversy are examined in a historical and comparative perspective. The changes the controversy has gone through are evaluated, and the nature of its current, more biological, status is briefly discussed.

13.
A Bayesian design criterion for selection experiments in plant breeding is derived using a utility function that minimizes the risk of an incorrect selection. A prior distribution on the heritability parameter is used to complete the definition of the design optimality criterion. An example is given with evaluations of the criterion for different prior distributions on the heritability. Though motivated by genetics, this criterion should prove useful for other types of experiments with random treatment effects.

14.
Density estimates that are expressible as the product of a base density function and a linear combination of orthogonal polynomials are considered in this paper. More specifically, two criteria are proposed for determining the number of terms to be included in the polynomial adjustment component and guidelines are suggested for the selection of a suitable base density function. A simulation study reveals that these stopping rules produce density estimates that are generally more accurate than kernel density estimates or those resulting from the application of the Kronmal–Tarter criterion. Additionally, it is explained that the same approach can be utilized to obtain multivariate density estimates. The proposed orthogonal polynomial density estimation methodology is applied to several univariate and bivariate data sets, some of which have served as benchmarks in the statistical literature on density estimation.
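One concrete instance of such an estimator, given as a hedged sketch, is a Gram–Charlier-type expansion with a standard normal base density and probabilists' Hermite polynomials; the paper's stopping rules are not implemented here and the order K is simply fixed.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm
from math import factorial

def gram_charlier_pdf(data, K=6):
    """Density estimate: standard normal base times a Hermite polynomial adjustment."""
    mu, sd = data.mean(), data.std(ddof=1)
    z = (data - mu) / sd                                  # standardise to match the base density

    def fhat(x):
        u = (x - mu) / sd
        adj = 1.0
        for k in range(3, K + 1):
            e_k = np.zeros(k + 1); e_k[k] = 1.0           # coefficient vector selecting He_k
            c_k = hermeval(z, e_k).mean() / factorial(k)  # sample expansion coefficient
            adj = adj + c_k * hermeval(u, e_k)
        return norm.pdf(u) * adj / sd                     # back-transform to the data scale

    return fhat

rng = np.random.default_rng(6)
sample = rng.gamma(shape=4.0, scale=1.0, size=500)        # skewed toy data
f = gram_charlier_pdf(sample)
print(f(np.array([2.0, 4.0, 6.0])))
```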

15.
In studies about sensitive characteristics, randomized response (RR) methods are useful for generating reliable data while protecting respondents' privacy. It is shown that all RR surveys for estimating a proportion can be encompassed in a common model, and some general results for statistical inference can be used for any given survey. The concepts of design and scheme are introduced for characterizing RR surveys. Some consequences of comparing RR designs based on statistical measures of efficiency and respondents' protection are discussed. In particular, such comparisons can lead to designs that are not suitable in practice. It is suggested that one should consider other criteria and the scheme parameters when planning an RR survey.
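As a concrete member of the class of RR designs discussed above, the sketch below gives Warner's classic estimator of a sensitive proportion and its variance; the survey counts and design probability are hypothetical.

```python
def warner_estimate(yes, n, p):
    """Warner's RR design (p != 0.5): estimate pi and its variance from the 'yes' count.

    With probability p the respondent answers the sensitive question, otherwise its
    complement, so the 'yes' probability is lambda = p*pi + (1 - p)*(1 - pi).
    """
    lam_hat = yes / n
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)
    var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p - 1) ** 2)
    return pi_hat, var_hat

# Hypothetical survey: 1000 respondents, design probability p = 0.7, 460 'yes' answers.
print(warner_estimate(yes=460, n=1000, p=0.7))
```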

16.
It is often necessary to make sampling-based statistical inference about many probability distributions in parallel. Given a finite computational resource, this article addresses how to optimally divide sampling effort between the samplers of the different distributions. Formally approaching this decision problem requires both the specification of an error criterion to assess how well each group of samples represents its underlying distribution, and a loss function to combine the errors into an overall performance score. For the first part, a new Monte Carlo divergence error criterion based on Jensen–Shannon divergence is proposed. Using results from information theory, approximations are derived for estimating this criterion for each target based on a single run, enabling adaptive sample size choices to be made during sampling.
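A short sketch of the Jensen–Shannon divergence between two discrete distributions, the quantity underlying the proposed Monte Carlo divergence error criterion; the single-run approximations derived in the paper are not reproduced.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p||q) for discrete distributions on a common support."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def js(p, q):
    """Jensen-Shannon divergence: average KL to the mixture m = (p + q) / 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Empirical distributions from two samplers over the same discrete support.
p = np.array([0.45, 0.35, 0.15, 0.05])
q = np.array([0.40, 0.30, 0.20, 0.10])
print(js(p, q))
```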

17.
Efforts have been made in the literature to find optimal single arrays that work best for robust parameter experiments. However, examples show that in many cases the optimal designs obtained by existing criteria could not attain the maximum number of clear effects of interest for robust parameter experiments. In this paper, following an approach similar to that of Zhang et al. (2008) (ZLZA for short), an aliasing pattern measuring the confounding between the effects of interest and other effects in robust parameter designs is introduced. A new criterion for selecting optimal two-level regular single arrays is proposed. In applying the criterion, two rank-orders of effects are suggested: one based on the interests of the experimenters and the other following the usual effect hierarchy principle. The optimal designs are tabulated in the appendix.

18.
A general rank test procedure based on an underlying multinomial distribution is suggested for randomized block experiments with multifactor treatment combinations within each block. The Wald statistic for the multinomial is used to test hypotheses about the within-block rankings. This statistic is shown to be related to the one-sample Hotelling's T2 statistic, suggesting a method for computing the test statistic using standard statistical computer packages.
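A hedged sketch of the kind of computation suggested above: rank treatments within each block and apply a one-sample Hotelling's T2 test of whether the mean rank vector equals its null value of (k+1)/2, dropping one coordinate because within-block ranks sum to a constant. This illustrates the idea only; it is not the paper's exact multinomial Wald statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_blocks, k = 12, 4
# Simulated responses for k treatments in each block, with a mild treatment effect.
responses = rng.normal(size=(n_blocks, k)) + np.array([0.0, 0.2, 0.4, 0.6])
ranks = stats.rankdata(responses, axis=1)            # within-block ranks 1..k

X = ranks[:, : k - 1]                                # drop one coordinate (ranks sum to k(k+1)/2)
mu0 = np.full(k - 1, (k + 1) / 2)                    # null mean rank for every treatment
d = X.mean(axis=0) - mu0
S = np.cov(X, rowvar=False)
T2 = n_blocks * d @ np.linalg.solve(S, d)            # one-sample Hotelling T^2

# Convert to an F statistic via the usual T^2 -> F transformation.
p_dim, n = k - 1, n_blocks
F = (n - p_dim) / (p_dim * (n - 1)) * T2
print("T2 =", T2, "p-value =", 1 - stats.f.cdf(F, p_dim, n - p_dim))
```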

19.
The anonymous mixing of Fisherian (p-values) and Neyman–Pearsonian (α levels) ideas about testing, distilled in the customary but misleading p < α criterion of statistical significance, has led researchers in the social and management sciences (and elsewhere) to commonly misinterpret the p-value as a 'data-adjusted' Type I error rate. Evidence substantiating this claim is provided from a number of fronts, including comments by statisticians, articles judging the value of significance testing, textbooks, surveys of scholars, and the statistical reporting behaviours of applied researchers. That many investigators do not know the difference between p's and α's indicates much bewilderment over what those most ardently sought research outcomes—statistically significant results—mean. Statisticians can play a leading role in clearing this confusion. A good starting point would be to abolish the p < α criterion of statistical significance.

20.
Linear mixed-effects models are a powerful tool for modelling longitudinal data and are widely used in practice. For a given set of covariates in a linear mixed-effects model, selecting the covariance structure of random effects is an important problem. In this paper, we develop a joint likelihood-based selection criterion. Our criterion is the approximately unbiased estimator of the expected Kullback–Leibler information. This criterion is also asymptotically optimal in the sense that for large samples, estimates based on the covariance matrix selected by the criterion minimize the approximate Kullback–Leibler information. Finite sample performance of the proposed method is assessed by simulation experiments. As an illustration, the criterion is applied to a data set from an AIDS clinical trial.
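As a rough illustration of covariance-structure selection in a linear mixed-effects model, the sketch below compares a random-intercept fit with a random-intercept-and-slope fit on simulated longitudinal data using an ordinary ML-based AIC, which stands in for (and differs from) the Kullback–Leibler criterion developed in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_time = 50, 5
subj = np.repeat(np.arange(n_subj), n_time)
time = np.tile(np.arange(n_time, dtype=float), n_subj)
b0 = rng.normal(0, 1.0, n_subj)[subj]            # random intercepts
b1 = rng.normal(0, 0.5, n_subj)[subj]            # random slopes
y = 2.0 + 0.8 * time + b0 + b1 * time + rng.normal(0, 1.0, n_subj * n_time)
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

def ml_aic(re_formula, k_cov):
    """AIC = -2*llf + 2*k, with k = 2 fixed effects + k_cov covariance parameters
    + 1 residual variance (parameter counts are spelled out by hand for clarity)."""
    fit = smf.mixedlm("y ~ time", df, groups=df["subject"],
                      re_formula=re_formula).fit(reml=False)
    return -2 * fit.llf + 2 * (2 + k_cov + 1)

print("random intercept only :", ml_aic("~1", k_cov=1))
print("random intercept+slope:", ml_aic("~time", k_cov=3))
```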
