Similar Articles
20 similar articles found (search time: 46 ms).
1.
The problem of combining coordinates in Stein-type estimators when simultaneously estimating normal means is considered: should all coordinates be used in one combined shrinkage estimator, or should they be separated into groups, with a separate shrinkage estimator applied to each group? A Bayesian viewpoint is (of necessity) taken, and it is shown that the ‘combined’ estimator is, somewhat surprisingly, often superior.
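As a rough numerical illustration of the combined-versus-separate question above (not the paper's Bayesian analysis), the sketch below applies positive-part James–Stein shrinkage to all ten coordinates at once and to two groups of five separately; the true means and seed are hypothetical.

```python
import random

random.seed(1)

# Simulate p = 10 independent normal means, each observed once with unit variance.
p = 10
theta = [i * 0.5 for i in range(p)]          # hypothetical true means
x = [random.gauss(t, 1.0) for t in theta]    # one observation per coordinate

def james_stein(v):
    """Positive-part James-Stein estimator shrinking toward the origin (p >= 3)."""
    s = sum(u * u for u in v)
    factor = max(0.0, 1.0 - (len(v) - 2) / s)  # shrinkage factor in [0, 1]
    return [factor * u for u in v]

# 'Combined': shrink all 10 coordinates together.
combined = james_stein(x)

# 'Separate': split into two groups of 5 and shrink each group on its own.
separate = james_stein(x[:5]) + james_stein(x[5:])

def sse(est, truth):
    return sum((e - t) ** 2 for e, t in zip(est, truth))

err_combined = sse(combined, theta)
err_separate = sse(separate, theta)
```

Comparing `err_combined` and `err_separate` over many replications would mimic the risk comparison discussed in the abstract.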

2.
In the absence of placebo-controlled trials, the efficacy of a test treatment can be alternatively examined by showing its non-inferiority to an active control; that is, the test treatment is not worse than the active control by a pre-specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non-inferiority setup involves a network of direct and indirect comparisons between test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta-analysis that models the uncertainty and heterogeneity of the historical trials into the non-inferiority trial in a data-driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non-inferiority testing are discussed that are analogs of the synthesis and fixed-margin approach. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was only present in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non-inferiority trial. It can further answer other questions of interest, such as comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non-inferiority. Copyright © 2015 John Wiley & Sons, Ltd.
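The power-prior idea mentioned above can be sketched in its simplest conjugate form: a normal control mean with known variance, where the historical likelihood is raised to a discount power a0 in [0, 1]. All trial numbers below are hypothetical, and this is a minimal sketch rather than the paper's full network meta-analysis model.

```python
import math

sigma = 1.0              # known outcome standard deviation (assumption)
n0, xbar0 = 200, 0.40    # historical control trial (hypothetical numbers)
n, xbar = 50, 0.35       # current control arm (hypothetical numbers)
a0 = 0.5                 # power-prior discount: 0 ignores history, 1 pools fully

# With a flat initial prior and known sigma, the power-prior posterior for the
# control mean is normal; its precision and mean weight the historical data by a0.
post_prec = (a0 * n0 + n) / sigma**2
post_mean = (a0 * n0 * xbar0 + n * xbar) / (a0 * n0 + n)
post_sd = math.sqrt(1.0 / post_prec)
```

The posterior mean lands between the current and historical estimates, closer to the historical one as `a0` grows, which is the "borrowing of information" the abstract describes.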

3.
A survey of research by Emanuel Parzen on how quantile functions provide elegant and applicable formulas that unify many statistical methods, especially frequentist and Bayesian confidence intervals and prediction distributions. Section 0: in honor of Ted Anderson's 90th birthday; Section 1: quantile functions and endpoints of prediction intervals; Section 2: extreme value limit distributions; Sections 3–4: confidence and prediction endpoint functions for the Uniform(0, θ) and exponential distributions; Sections 5–6: confidence quantiles and Bayesian inference for the normal parameters μ and σ; Section 7: confidence quantiles for two independent samples; Section 8: confidence quantiles for proportions, Wilson's formula. We propose ways that Bayesians and frequentists can be friends!
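Wilson's formula for a binomial proportion (Section 8) is simple enough to state in code. The counts below are hypothetical, and z defaults to the approximate 97.5% standard normal quantile for a 95% interval.

```python
import math

def wilson_interval(k, n, z=1.959964):
    """Wilson score interval for a binomial proportion with k successes in n trials."""
    phat = k / n
    denom = 1.0 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical data: 19 successes out of 80 trials.
lo, hi = wilson_interval(19, 80)
```

Unlike the naive Wald interval, the Wilson interval never escapes [0, 1] and behaves well for proportions near the boundary.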

4.
This paper introduces a median estimator of the logistic regression parameters. It is defined as the classical L1-estimator applied to continuous data Z1, …, Zn obtained by a statistical smoothing of the original binary logistic regression observations Y1, …, Yn. Consistency and asymptotic normality of this estimator are proved. A method called enhancement is introduced which in some cases increases the efficiency of this estimator. Sensitivity to contaminations and leverage points is studied by simulations and compared in this manner with the sensitivity of some robust estimators previously introduced for logistic regression. The new estimator appears to be more robust for larger sample sizes and higher levels of contamination.

5.
For normal populations with unequal variances, we develop matching priors and reference priors for a linear combination of the means. Here, we find three second-order matching priors: a highest posterior density (HPD) matching prior, a cumulative distribution function (CDF) matching prior, and a likelihood ratio (LR) matching prior. Furthermore, we show that the reference priors are all first-order matching priors, but that they do not satisfy the second-order matching criterion that establishes the symmetry and the unimodality of the posterior under the developed priors. The results of a simulation indicate that the second-order matching prior outperforms the reference priors in terms of matching the target coverage probabilities, in a frequentist sense. Finally, we compare the Bayesian credible intervals based on the developed priors with the confidence intervals derived from real data.
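What "matching the target coverage probabilities" means can be checked by simulation in the simplest case: for a normal mean with known variance, the flat-prior 95% credible interval coincides with the classical confidence interval, so its frequentist coverage hits the nominal level. This toy check uses hypothetical parameters, not the unequal-variance setting of the paper.

```python
import math
import random

random.seed(7)
mu, sigma, n, reps = 2.0, 3.0, 15, 4000   # hypothetical simulation settings
z = 1.959964                               # approximate 97.5% normal quantile

covered = 0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    half = z * sigma / math.sqrt(n)
    # Under a flat prior with known sigma, the 95% credible interval for mu
    # coincides with the classical confidence interval xbar +/- half.
    if xbar - half <= mu <= xbar + half:
        covered += 1

coverage = covered / reps   # should be close to 0.95
```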

6.
7.
8.
The Pareto distribution is found in a large number of real world situations and is also a well-known model for extreme events. In the spirit of Neyman [1937. Smooth tests for goodness of fit. Skand. Aktuarietidskr. 20, 149–199] and Thomas and Pierce [1979. Neyman's smooth goodness-of-fit test when the hypothesis is composite. J. Amer. Statist. Assoc. 74, 441–445], we propose a smooth goodness of fit test for the Pareto distribution family which is motivated by LeCam's theory of local asymptotic normality (LAN). We establish the behavior of the associated test statistic firstly under the null hypothesis that the sample follows a Pareto distribution and secondly under local alternatives using the LAN framework. Finally, simulations are provided in order to study the finite sample behavior of the test statistic.
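Any goodness-of-fit test for a composite Pareto hypothesis first requires fitting the family; with a known scale the shape MLE has a closed form, which the sketch below applies to simulated data. The parameters are hypothetical and this is only the fitting step, not the paper's smooth test.

```python
import math
import random

random.seed(3)
alpha_true, xm = 3.0, 1.0   # hypothetical shape and (known) scale
n = 5000

# Pareto(alpha, xm) via inverse-CDF sampling: X = xm * U^(-1/alpha).
data = [xm * random.random() ** (-1.0 / alpha_true) for _ in range(n)]

# With known scale xm, the MLE of the shape parameter is n / sum(log(x / xm)).
alpha_hat = n / sum(math.log(v / xm) for v in data)
```

The estimate `alpha_hat` would then be plugged into the smooth-test statistic to handle the composite null.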

9.
A unified approach of parameter-estimation and goodness-of-fit testing is proposed. The new procedures may be applied to arbitrary laws with continuous distribution function. Specifically, both the method of estimation and the goodness-of-fit test are based on the idea of optimally transforming the original data to the uniform distribution, the criterion of optimality being an L2-type distance between the empirical characteristic function of the transformed data, and the characteristic function of the uniform (0, 1) distribution. Theoretical properties of the new estimators and tests are studied and some connections with classical statistics, moment-based procedures and non-parametric methods are investigated. Comparison with standard procedures via Monte Carlo is also included, along with a real-data application.
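A minimal version of the criterion can be sketched: transform the data to (0, 1) with the fitted CDF, then measure a discretised L2-type distance between the empirical characteristic function of the transformed sample and the characteristic function of Uniform(0, 1). The exponential model, grid, and sample size below are hypothetical choices, and a real test would calibrate the statistic's null distribution.

```python
import cmath
import math
import random

random.seed(5)

# Hypothetical example: sample from Exp(rate = 2), fit the rate, and map to
# (0, 1) via the fitted CDF (probability integral transform).
n, rate = 400, 2.0
x = [random.expovariate(rate) for _ in range(n)]
rate_hat = n / sum(x)
u = [1.0 - math.exp(-rate_hat * v) for v in x]

def ecf(sample, t):
    """Empirical characteristic function of the transformed sample at t."""
    return sum(cmath.exp(1j * t * v) for v in sample) / len(sample)

def cf_uniform(t):
    """Characteristic function of the Uniform(0, 1) distribution at t."""
    return (cmath.exp(1j * t) - 1.0) / (1j * t)

# Discretised L2-type distance on a grid of t values.
grid = [0.5 * k for k in range(1, 21)]
dist = sum(abs(ecf(u, t) - cf_uniform(t)) ** 2 for t in grid) / len(grid)
```

Under a correctly specified model `dist` fluctuates near zero at the 1/n rate; gross misspecification inflates it.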

10.
Consider two independent normal populations. Let R denote the ratio of the variances. The usual procedure for testing H0: R = 1 vs. H1: R = r, where r≠1, is the F-test. Let θ denote the proportion of observations to be allocated to the first population. Here we find the value of θ that maximizes the rate at which the observed significance level of the F-test converges to zero under H1, as measured by the half slope.
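The F statistic in question is just the ratio of the two sample variances. The sketch below computes it for a hypothetical allocation θ = 0.6 of 100 observations; it illustrates the statistic, not the paper's half-slope optimisation.

```python
import random
import statistics

random.seed(11)

# Two independent normal samples; under H0 the variance ratio R is 1.
n1, n2 = 60, 40   # allocation theta = n1 / (n1 + n2) = 0.6 (hypothetical)
x = [random.gauss(0.0, 1.0) for _ in range(n1)]
y = [random.gauss(0.0, 1.0) for _ in range(n2)]

# The F statistic is the ratio of the two sample variances,
# with (n1 - 1, n2 - 1) degrees of freedom.
F = statistics.variance(x) / statistics.variance(y)
```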

11.
12.
Lu Lin, Statistical Papers (2004) 45(4): 529–544
The quasi-score function, as defined by Wedderburn (1974), McCullagh (1983), and others, is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of some unbiased basis functions, where the unbiased basis functions need not be linear in the observations and can be constructed directly from the meaning of the parameters of interest, such as the mean or the median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normal. As a result, the optimum generalized quasi-score is obtained, and a method to construct the optimum unbiased basis function is introduced. In order to construct the potential function, a conservative generalized estimating function is defined; when the estimating function is conservative, the resulting potential function for the projected score has many properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This work was supported by NNSF of China (project 10371059) and the Youth Teacher Foundation of Nankai University.
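An unbiased basis function built "from the meaning of the parameter" can be illustrated with the median: E[sign(X − θ)] = 0 exactly at the median, so solving the corresponding estimating equation recovers it. The sketch below (hypothetical data) solves the equation by bisection; it illustrates the general idea of unbiased estimating functions, not the paper's optimal construction.

```python
import random

random.seed(2)
data = sorted(random.gauss(5.0, 2.0) for _ in range(201))  # hypothetical sample

def sign(v):
    return (v > 0) - (v < 0)

# Unbiased basis function for the median: E[sign(X - theta)] = 0 at the median.
def score(theta):
    return sum(sign(v - theta) for v in data)

# score() is non-increasing in theta, so solve score(theta) = 0 by bisection
# over the range of the data.
lo, hi = data[0], data[-1]
for _ in range(60):
    mid = (lo + hi) / 2.0
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_hat = (lo + hi) / 2.0   # converges to the sample median
```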

13.
We discuss the general form of a first-order correction to the maximum likelihood estimator which is expressed in terms of the gradient of a function, which could for example be the logarithm of a prior density function. In terms of Kullback–Leibler divergence, the correction gives an asymptotic improvement over maximum likelihood under rather general conditions. The theory is illustrated for Bayes estimators with conjugate priors. The optimal choice of hyper-parameter to improve the maximum likelihood estimator is discussed. The results based on Kullback–Leibler risk are extended to a wide class of risk functions.
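A concrete special case: for a N(θ, 1) sample with a conjugate N(0, τ²) prior, the per-observation Fisher information is 1, so a first-order correction of the kind described adds the gradient of the log-prior divided by n to the MLE. The sketch below (hypothetical numbers, not the paper's general theory) compares the corrected estimator with the exact posterior mean.

```python
import random

random.seed(4)

# N(theta, 1) sample; the MLE of theta is the sample mean.
n, theta_true = 40, 1.5            # hypothetical settings
x = [random.gauss(theta_true, 1.0) for _ in range(n)]
mle = sum(x) / n

# Conjugate prior N(0, tau^2): d/dtheta log pi(theta) = -theta / tau^2.
# With unit Fisher information per observation, the first-order correction
# is grad_log_prior / n evaluated at the MLE.
tau2 = 2.0
corrected = mle + (-mle / tau2) / n

# For comparison: the exact posterior mean under the same prior.
post_mean = mle * (n * tau2) / (n * tau2 + 1.0)
```

The corrected estimator reproduces the posterior mean to first order, which is the sense in which the correction "improves" maximum likelihood.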

14.
In this paper, tests of hypotheses are constructed for the family of skew normal distributions. The proposed tests utilize the fact that the moment generating function of the skew normal variable satisfies a simple differential equation. The empirical counterpart of this equation, involving the empirical moment generating function, yields simple consistent test statistics. Finite-sample results as well as results from real data are provided for the proposed procedures.
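In the boundary case of zero skewness, the skew normal reduces to the standard normal, whose MGF satisfies M′(t) = t·M(t); replacing M and M′ by their empirical counterparts gives the flavour of the test statistics described above. This sketch checks that special case only; the general skew-normal differential equation has an extra term not reproduced here.

```python
import math
import random

random.seed(9)

# Sample from the shape-0 skew normal, i.e. the standard normal.
n = 20000
y = [random.gauss(0.0, 1.0) for _ in range(n)]

def empirical_eq(t):
    """Empirical counterpart of M'(t) - t * M(t); near 0 under the null."""
    m = sum(math.exp(t * v) for v in y) / n          # empirical MGF
    mprime = sum(v * math.exp(t * v) for v in y) / n  # its derivative in t
    return mprime - t * m

stat = empirical_eq(0.5)   # evaluate at a hypothetical fixed t
```

A full test would aggregate this quantity over a weight function of t and calibrate its null distribution.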

15.
Parametric mixture models are commonly used in the analysis of clustered data. Parametric families are specified for the conditional distribution of the response variable given a cluster-specific effect, and for the marginal distribution of the cluster-specific effects. This latter distribution is referred to as the mixing distribution. If the form of the mixing distribution is misspecified, then Bayesian and maximum-likelihood estimators of parameters associated with either distribution may be inconsistent. The magnitude of the asymptotic bias is investigated, using an approximation based on infinitesimal contamination of the mixing distribution. The approximation is useful when there is a closed-form expression for the marginal distribution of the response under the assumed mixing distribution, but not under the true mixing distribution. Typically this occurs when the assumed mixing distribution is conjugate, meaning that the conditional distribution of the cluster-specific parameter given the response variable belongs to the same parametric family as the mixing distribution.
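The conjugacy remark can be made concrete with the beta-binomial: mixing Binomial(n, p) over p ∼ Beta(a, b) gives a closed-form marginal, exactly the situation the approximation exploits. The sketch below evaluates that marginal pmf via log-gamma functions; the hyper-parameters are hypothetical.

```python
import math

def log_beta(a, b):
    """log of the Beta function via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """Closed-form marginal of Binomial(n, p) with p ~ Beta(a, b)."""
    return math.exp(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + log_beta(k + a, n - k + b) - log_beta(a, b)
    )

n, a, b = 12, 2.0, 3.0   # hypothetical cluster size and hyper-parameters
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
total = sum(pmf)          # sanity check: a proper pmf sums to 1
```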

16.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets—as in stratified analyses and meta analyses—a weighted average, with weights ‘proportional’ to inverse variance matrices, is shown to have a minimal variance matrix (a standard fact when d = 1)—minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set and avoids the need to merge the data sets and estimate the parameter in question afresh. This is so whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.]: Optimal weights are ‘proportional’ to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis—proportional-hazards, logistic or linear—, combination of independent ROC curves, and meta analysis. A test for homogeneity of the parameter across the data sets is also given.
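In the scalar case d = 1 the matrix weights reduce to familiar inverse-variance weights, and the combined variance 1/Σwᵢ is no larger than any individual variance. A tiny sketch with hypothetical estimates:

```python
# Three independent estimates of a common scalar parameter (hypothetical values).
estimates = [1.2, 0.9, 1.1]
variances = [0.04, 0.10, 0.02]

# Inverse-variance weighting: the d = 1 special case of matrix weighting.
weights = [1.0 / v for v in variances]
wsum = sum(weights)
combined = sum(w * e for w, e in zip(weights, estimates)) / wsum
combined_var = 1.0 / wsum   # never exceeds the smallest individual variance
```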

17.
18.
In a 2-step monotone missing dataset drawn from a multivariate normal population, a T2-type test statistic (similar to Hotelling's T2 test statistic) and the likelihood ratio (LR) are often used to test a hypothesis about the mean vector. With complete data, Hotelling's T2 test and the LR test are equivalent; with 2-step monotone missing data, however, they are not, so it is of interest which statistic is preferable in terms of power. In this paper, we derive the asymptotic power functions of both statistics under a local alternative and obtain an explicit form for the difference between them. Furthermore, under several parameter settings, we compare the LR and T2-type tests numerically using the differences in empirical power and in asymptotic power. Summarizing the results obtained, we recommend the LR test for testing the mean vector.
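For reference, the complete-data Hotelling T² statistic (the case in which the two tests coincide) can be computed directly. The bivariate sample below is hypothetical, and the 2×2 sample covariance matrix is inverted explicitly.

```python
import random

random.seed(6)

# Hotelling's T^2 for a bivariate mean vector, complete-data case.
n = 100
mu0 = (0.0, 0.0)   # hypothesised mean vector
data = [(random.gauss(0.2, 1.0), random.gauss(0.0, 1.0)) for _ in range(n)]

# Sample means and (unbiased) sample covariance entries.
m1 = sum(p[0] for p in data) / n
m2 = sum(p[1] for p in data) / n
s11 = sum((p[0] - m1) ** 2 for p in data) / (n - 1)
s22 = sum((p[1] - m2) ** 2 for p in data) / (n - 1)
s12 = sum((p[0] - m1) * (p[1] - m2) for p in data) / (n - 1)

# T^2 = n * (xbar - mu0)' S^{-1} (xbar - mu0), with the 2x2 S inverted by hand.
det = s11 * s22 - s12 * s12
d1, d2 = m1 - mu0[0], m2 - mu0[1]
t2 = n * (d1 * (s22 * d1 - s12 * d2) + d2 * (-s12 * d1 + s11 * d2)) / det
```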

19.
Continuous non-Gaussian stationary processes of the OU-type are becoming increasingly popular given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes unfeasible for virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions of the characteristic function which can be applied to several models as well as to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.
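The Gaussian OU process is the one tractable special case: its exact discretisation is an AR(1), so the mean-reversion rate can be recovered from the lag-1 autocorrelation. The sketch below simulates and re-estimates it with hypothetical parameters; the non-Gaussian models of the paper require the characteristic-function machinery instead.

```python
import math
import random

random.seed(8)

# Exact discretisation of a Gaussian OU process dX = -lam*X dt + s dW:
# X_{t+dt} = exp(-lam*dt) * X_t + N(0, s2 * (1 - exp(-2*lam*dt)) / (2*lam)).
lam, s2, dt, n = 1.0, 1.0, 0.1, 20000   # hypothetical parameters
rho = math.exp(-lam * dt)
innov_sd = math.sqrt(s2 * (1.0 - rho * rho) / (2.0 * lam))

x = [0.0]
for _ in range(n):
    x.append(rho * x[-1] + random.gauss(0.0, innov_sd))

# Recover lam from the lag-1 autocorrelation of the simulated path.
mean = sum(x) / len(x)
c0 = sum((v - mean) ** 2 for v in x)
c1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(len(x) - 1))
lam_hat = -math.log(c1 / c0) / dt
```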

20.
Recently, Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the "middle-censoring" scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetime distribution of the items is exponential and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality properties. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval becomes quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small sample behavior of the techniques proposed. A real data set is analyzed to illustrate the practical application of the proposed methods.
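In the uncensored special case, both the exponential MLE and the gamma-prior Bayes estimate are closed-form (posterior Gamma(a + n, b + S)), which is the baseline the middle-censored analysis generalises. The sketch below uses hypothetical hyper-parameters and no censoring.

```python
import random

random.seed(10)

# Uncensored exponential lifetimes with (hypothetical) true rate 0.5.
n, rate = 300, 0.5
x = [random.expovariate(rate) for _ in range(n)]
S = sum(x)

# MLE of the rate, and the Bayes estimate under a Gamma(a, b) prior:
# the posterior is Gamma(a + n, b + S), so the posterior mean is (a + n) / (b + S).
mle = n / S
a, b = 2.0, 4.0              # hypothetical gamma prior hyper-parameters
bayes = (a + n) / (b + S)
```

With middle censoring, the likelihood no longer factorises this neatly, which is why the paper resorts to Gibbs sampling for credible intervals.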
