Similar documents
 20 similar documents found (search time: 15 ms)
1.
Tail estimates are developed for power law probability distributions with exponential tempering, using a conditional maximum likelihood approach based on the upper-order statistics. Tempered power law distributions are intermediate between heavy power-law tails and Laplace or exponential tails, and are sometimes called “semi-heavy” tailed distributions. The estimation method is demonstrated on simulated data from a tempered stable distribution, and for several data sets from geophysics and finance that show a power law probability tail with some tempering.
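The conditional-likelihood idea can be sketched numerically. The following is an illustration, not the authors' estimator: draws are simulated from a tempered Pareto density proportional to x^(−α−1)·e^(−λx) (a simple stand-in for the tempered stable model), and (α, λ) are recovered by maximizing the conditional likelihood of the exceedances over a high threshold. All function names, starting values, and the threshold choice are assumptions.

```python
import numpy as np
from scipy import integrate, optimize

rng = np.random.default_rng(0)

def r_tempered_pareto(n, alpha, lam, xmin=1.0):
    """Draw from density ∝ x^(-alpha-1) * exp(-lam*x) on [xmin, inf)
    by rejection from a plain Pareto(alpha) proposal."""
    out = []
    while len(out) < n:
        x = xmin * (1 - rng.random(4 * n)) ** (-1 / alpha)     # Pareto draws
        keep = rng.random(x.size) < np.exp(-lam * (x - xmin))  # tempering step
        out.extend(x[keep])
    return np.array(out[:n])

def fit_tail(exceed, u):
    """Conditional MLE of (alpha, lam) given only the exceedances over u."""
    def nll(theta):
        a, l = theta
        if a <= 0 or l < 0:
            return np.inf
        # normalizing constant of the conditional tail density on [u, inf)
        norm_const, _ = integrate.quad(
            lambda t: t ** (-a - 1) * np.exp(-l * t), u, np.inf)
        return -(np.sum((-a - 1) * np.log(exceed) - l * exceed)
                 - exceed.size * np.log(norm_const))
    return optimize.minimize(nll, x0=[1.0, 0.1], method="Nelder-Mead").x

x = r_tempered_pareto(20000, alpha=1.5, lam=0.05)
u = np.quantile(x, 0.90)
alpha_hat, lam_hat = fit_tail(x[x > u], u)
```

Because tempering barely acts below the threshold, α and λ are only weakly identified from tail data alone; in practice the threshold choice matters as much as the optimizer.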

2.
Continuing increases in computing power and availability mean that many maximum likelihood estimation (MLE) problems previously thought intractable or too computationally difficult can now be tackled numerically. However, ML parameter estimation for distributions whose only analytical expression is as quantile functions has received little attention. Numerical MLE procedures for parameters of new families of distributions, the g-and-k and the generalized g-and-h distributions, are presented and investigated here. Simulation studies are included, and the appropriateness of using asymptotic methods examined. Because of the generality of these distributions, the investigations are not only into numerical MLE for these distributions, but are also an initial investigation into the performance and problems for numerical MLE applied to quantile-defined distributions in general. Datasets are also fitted using the procedures here. Results indicate that sample sizes significantly larger than 100 should be used to obtain reliable estimates through maximum likelihood.
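The core numerical difficulty is that a quantile-defined family has no closed-form density: the log-likelihood must be evaluated by inverting Q(u) = x and using f(x) = 1/Q′(u). A minimal sketch for the g-and-k distribution (standard parameterization with c = 0.8; the brute-force root-finding and the comparison parameter values are illustrative, not the paper's algorithm):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gk_quantile(u, A, B, g, k, c=0.8):
    """g-and-k quantile function, with z = Phi^{-1}(u)."""
    z = norm.ppf(u)
    return A + B * (1 + c * np.tanh(g * z / 2)) * (1 + z ** 2) ** k * z

def gk_loglik(x, A, B, g, k, eps=1e-6):
    """Numerical log-density: solve Q(u) = x, then f(x) = 1 / Q'(u)."""
    u = brentq(lambda v: gk_quantile(v, A, B, g, k) - x, 1e-6, 1 - 1e-6)
    dQ = (gk_quantile(u + eps, A, B, g, k)
          - gk_quantile(u - eps, A, B, g, k)) / (2 * eps)
    return -np.log(dQ)

# simulate by inverse transform, then compare log-likelihoods at two parameter points
rng = np.random.default_rng(1)
u = np.clip(rng.random(500), 1e-5, 1 - 1e-5)
x = gk_quantile(u, A=0.0, B=1.0, g=0.5, k=0.2)
ll_true = sum(gk_loglik(xi, 0.0, 1.0, 0.5, 0.2) for xi in x)
ll_wrong = sum(gk_loglik(xi, 0.0, 2.0, 0.5, 0.2) for xi in x)
```

A full MLE wraps this log-likelihood evaluation in an optimizer; the abstract's finding that samples well above 100 are needed is consistent with how flat this likelihood surface is in g and k.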

3.
A class of simultaneous tests based on the aligned rank transform (ART) statistics is proposed for linear functions of parameters in linear models. The asymptotic distributions are derived. The stability of the finite sample behaviour of the sampling distribution of the ART technique is studied by comparing the simulated upper quantiles of its sampling distribution with those of the multivariate t-distribution. Simulation also shows that the tests based on ART have excellent small sample properties and because of their robustness perform better than the methods based on the least-squares estimates.
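For intuition, the ART recipe for a single contrast (here the interaction in a balanced 2×2 layout) is: align the responses by removing the estimated nuisance effects, rank the aligned values, then apply the usual least-squares test to the ranks. This sketch shows only the basic ART step, not the paper's simultaneous testing procedure; the design and effect sizes are invented.

```python
import numpy as np
from scipy.stats import rankdata, f as f_dist

rng = np.random.default_rng(2)

# balanced 2x2 layout with main effects only (no true interaction)
n = 30
a = np.repeat([0, 0, 1, 1], n)
b = np.tile(np.repeat([0, 1], n), 2)
y = 1.0 * a + 0.5 * b + rng.standard_normal(4 * n)

# align for the interaction: strip the estimated main effects
grand = y.mean()
mu_a = np.array([y[a == i].mean() for i in (0, 1)])
mu_b = np.array([y[b == j].mean() for j in (0, 1)])
aligned = y - mu_a[a] - mu_b[b] + grand

# rank-transform the aligned data, then the ordinary ANOVA F test on the ranks
r = rankdata(aligned)
rbar = r.mean()
cell_r = np.array([[r[(a == i) & (b == j)].mean() for j in (0, 1)] for i in (0, 1)])
ra = np.array([r[a == i].mean() for i in (0, 1)])
rb = np.array([r[b == j].mean() for j in (0, 1)])
ss_int = n * sum((cell_r[i, j] - ra[i] - rb[j] + rbar) ** 2
                 for i in (0, 1) for j in (0, 1))
ss_err = sum(((r[(a == i) & (b == j)] - cell_r[i, j]) ** 2).sum()
             for i in (0, 1) for j in (0, 1))
F = ss_int / (ss_err / (4 * n - 4))
p_value = f_dist.sf(F, 1, 4 * n - 4)
```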

4.
The power function distribution is often used to study the electrical component reliability. In this paper, we model a heterogeneous population using the two-component mixture of the power function distribution. A comprehensive simulation scheme including a large number of parameter points is followed to highlight the properties and behavior of the estimates in terms of sample size, censoring rate, parameters size and the proportion of the components of the mixture. The parameters of the power function mixture are estimated and compared using the Bayes estimates. A simulated mixture data with censored observations is generated by probabilistic mixing for the computational purposes. Elegant closed form expressions for the Bayes estimators and their variances are derived for the censored sample as well as for the complete sample. Some interesting comparison and properties of the estimates are observed and presented. The system of three non-linear equations, required to be solved iteratively for the computations of maximum likelihood (ML) estimates, is derived. The complete sample expressions for the ML estimates and for their variances are also given. The components of the information matrix are constructed as well. Uninformative as well as informative priors are assumed for the derivation of the Bayes estimators. A real-life mixture data example has also been discussed. The posterior predictive distribution with the informative Gamma prior is derived, and the equations required to find the lower and upper limits of the predictive intervals are constructed. The Bayes estimates are evaluated under the squared error loss function.
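The data-generating step described above, probabilistic mixing of two power function components with censoring, is easy to make concrete. A sketch under invented parameter values, taking F(x) = x^a on (0, 1) for each component and fixed right-censoring; the closing lines check the textbook complete-sample ML estimate â = −n/Σ ln xᵢ for a single component:

```python
import numpy as np

rng = np.random.default_rng(3)

def mixture_sample(n, p, a1, a2, t0):
    """Probabilistic mixing of two power-function components F(x) = x^a on (0,1),
    with fixed right-censoring at t0 (censored values are recorded as t0)."""
    comp = rng.random(n) < p                          # latent component labels
    u = rng.random(n)
    x = np.where(comp, u ** (1 / a1), u ** (1 / a2))  # inverse-CDF draws
    delta = (x <= t0).astype(int)                     # 1 = observed, 0 = censored
    return np.where(delta == 1, x, t0), delta, comp

x, delta, comp = mixture_sample(2000, p=0.4, a1=2.0, a2=5.0, t0=0.9)
censoring_rate = 1 - delta.mean()

# complete-sample, known-component ML check: a_hat = -n / sum(log x)
full_x = rng.random(5000) ** (1 / 2.0)
a_hat = -full_x.size / np.log(full_x).sum()
```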

5.
We introduce fully non‐parametric two‐sample tests for testing the null hypothesis that the samples come from the same distribution if the values are only indirectly given via current status censoring. The tests are based on the likelihood ratio principle and allow the observation distributions to be different for the two samples, in contrast with earlier proposals for this situation. A bootstrap method is given for determining critical values and asymptotic theory is developed. A simulation study, using Weibull distributions, is presented to compare the power behaviour of the tests with the power of other non‐parametric tests in this situation.
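To fix ideas about the sampling scheme: under current status censoring the event time T is never observed, only an inspection time C and the indicator δ = 1{T ≤ C}. The sketch below (Weibull event times, exponential inspection times, all values invented) also shows why the observation distributions matter: with identical event-time laws, a naive comparison of δ-rates is still confounded by different inspection-time distributions, which is exactly what the paper's likelihood-ratio tests are built to accommodate.

```python
import numpy as np

rng = np.random.default_rng(4)

def current_status_sample(n, shape, scale, obs_scale):
    """Current status observation: the event time T is never seen directly;
    we observe only the inspection time C and delta = 1{T <= C}."""
    t = scale * rng.weibull(shape, n)        # latent Weibull event times
    c = obs_scale * rng.exponential(1.0, n)  # inspection times (sample-specific law)
    return c, (t <= c).astype(int)

# identical event-time distributions, different observation distributions
c1, d1 = current_status_sample(500, shape=1.5, scale=1.0, obs_scale=1.0)
c2, d2 = current_status_sample(500, shape=1.5, scale=1.0, obs_scale=2.0)
# later inspections in sample 2 raise its delta-rate despite equal lifetimes
```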

6.
In this paper we present methods for inference on data selected by a complex sampling design for a class of statistical models for the analysis of ordinal variables. Specifically, assuming that the sampling scheme is not ignorable, we derive for the class of cub models (Combination of discrete Uniform and shifted Binomial distributions) variance estimates for a complex two stage stratified sample. Both Taylor linearization and repeated replication variance estimators are presented. We also provide design‐based test diagnostics and goodness‐of‐fit measures. We illustrate by means of real data analysis the differences between survey‐weighted and unweighted point estimates and inferences for cub model parameters.
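For readers new to cub models: the ordinal response R ∈ {1, …, m} follows the two-component mixture P(R = r) = π·Bin(m−1, 1−ξ) shifted by one, plus (1−π)/m. The sketch below fits the plain unweighted MLE on simulated data; the paper's design-based version would replace the simple log-likelihood sum with a survey-weighted (pseudo-likelihood) sum. Parameter values are illustrative.

```python
import numpy as np
from scipy.special import comb
from scipy.optimize import minimize

def cub_pmf(r, m, pi, xi):
    """CUB: mixture of a shifted Binomial(m-1, 1-xi) and a discrete Uniform on 1..m."""
    shifted_binom = comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi * shifted_binom + (1 - pi) / m

rng = np.random.default_rng(5)
m, pi_true, xi_true = 7, 0.7, 0.3
probs = cub_pmf(np.arange(1, m + 1), m, pi_true, xi_true)
data = rng.choice(np.arange(1, m + 1), size=2000, p=probs)

def nll(theta):
    pi, xi = theta
    if not (0 < pi <= 1 and 0 < xi < 1):
        return np.inf
    return -np.sum(np.log(cub_pmf(data, m, pi, xi)))

fit = minimize(nll, x0=[0.5, 0.5], method="Nelder-Mead")
pi_hat, xi_hat = fit.x
```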

7.
Heterogeneity of variances of treatment groups influences the validity and power of significance tests of location in two distinct ways. First, if sample sizes are unequal, the Type I error rate and power are depressed if a larger variance is associated with a larger sample size, and elevated if a larger variance is associated with a smaller sample size. This well-established effect, which occurs in t and F tests, and to a lesser degree in nonparametric rank tests, results from unequal contributions of pooled estimates of error variance in the computation of test statistics. It is observed in samples from normal distributions, as well as non-normal distributions of various shapes. Second, transformation of scores from skewed distributions with unequal variances to ranks produces differences in the means of the ranks assigned to the respective groups, even if the means of the initial groups are equal, and a subsequent inflation of Type I error rates and power. This effect occurs for all sample sizes, equal and unequal. For the t test, the discrepancy diminishes, and for the Wilcoxon–Mann–Whitney test, it becomes larger, as sample size increases. The Welch separate-variance t test overcomes the first effect but not the second. Because of interaction of these separate effects, the validity and power of both parametric and nonparametric tests performed on samples of any size from unknown distributions with possibly unequal variances can be distorted in unpredictable ways.
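The first effect is easy to reproduce by simulation. A minimal Monte Carlo sketch (normal data, invented sample sizes and variances) showing the pooled-variance t test's Type I error rate inflated when the larger variance sits in the smaller group, and depressed in the reverse pairing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def type1_rate(n1, n2, sd1, sd2, reps=4000, alpha=0.05):
    """Rejection rate of the pooled-variance t test when H0 (equal means) is true."""
    rej = 0
    for _ in range(reps):
        x = rng.normal(0, sd1, n1)
        y = rng.normal(0, sd2, n2)
        rej += stats.ttest_ind(x, y).pvalue < alpha  # default: pooled variance
    return rej / reps

# larger variance paired with the SMALLER sample inflates the Type I error rate;
# paired with the LARGER sample it depresses it
inflated = type1_rate(n1=10, n2=40, sd1=3.0, sd2=1.0)
deflated = type1_rate(n1=40, n2=10, sd1=3.0, sd2=1.0)
```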

8.
DISTRIBUTIONAL CHARACTERIZATIONS THROUGH SCALING RELATIONS
Investigated here are aspects of the relation between the laws of X and Y where X is represented as a randomly scaled version of Y. In the case that the scaling has a beta law, the law of Y is expressed in terms of the law of X. Common continuous distributions are characterized using this beta scaling law, and choosing the distribution function of Y as a weighted version of the distribution function of X, where the weight is a power function. It is shown, without any restriction on the law of the scaling, but using a one‐parameter family of weights which includes the power weights, that characterizations can be expressed in terms of known results for the power weights. Characterizations in the case where the distribution function of Y is a positive power of the distribution function of X are examined in two special cases. Finally, conditions are given for existence of inverses of the length‐bias and stationary‐excess operators.

9.
The proportional hazards model is the most commonly used model in regression analysis of failure time data and has been discussed by many authors under various situations (Kalbfleisch & Prentice, 2002. The Statistical Analysis of Failure Time Data, Wiley, New York). This paper considers the fitting of the model to current status data when there exist competing risks, which often occurs in, for example, medical studies. The maximum likelihood estimates of the unknown parameters are derived and their consistency and convergence rate are established. Also we show that the estimates of regression coefficients are efficient and have asymptotically normal distributions. Simulation studies are conducted to assess the finite sample properties of the estimates and an illustrative example is provided. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

10.
Data analysts frequently calculate power and sample size for a planned study using mean and variance estimates from an initial trial. Hence power, or the sample size needed to achieve a fixed power, varies randomly. Such calculations can be very inaccurate in the General Linear Univariate Model (GLUM). Biased noncentrality estimators and censored power calculations create inaccuracy. Censoring occurs if only certain outcomes of an initial trial lead to a power calculation. For example, a confirmatory study may be planned (and a sample size estimated) only following a significant result in the initial trial.

Computing accurate point estimates or confidence bounds of GLUM noncentrality, power, or sample size in the presence of censoring involves truncated noncentral F distributions. We recommend confidence bounds, whether or not censoring occurs. A power analysis of data from humans exposed to carbon monoxide demonstrates the substantial impact on sample size that may occur. The results highlight potential biases and should aid study planning and interpretation.
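The censoring bias can be demonstrated in the simplest GLUM special case, the two-sample t test. In this sketch (all design values invented) plug-in power estimates are computed from pilot-study variance estimates; conditioning on a "significant" pilot selects pilots with smaller variance estimates and so inflates the estimated power, which is the bias the truncated noncentral F machinery is designed to correct.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power_t(n, delta, sd, alpha=0.05):
    """Two-sided two-sample t test power via the noncentral t distribution."""
    nc = delta / (sd * np.sqrt(2 / n))
    df = 2 * n - 2
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

true_power = power_t(n=25, delta=0.5, sd=1.0)

# plug-in powers from pilot variance estimates, censored on pilot significance
est_all, est_censored = [], []
for _ in range(2000):
    x = rng.normal(0.0, 1.0, 15)       # pilot arm 1
    y = rng.normal(0.5, 1.0, 15)       # pilot arm 2
    s = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    est_all.append(power_t(25, 0.5, s))
    if stats.ttest_ind(x, y).pvalue < 0.05:    # power computed only after "success"
        est_censored.append(power_t(25, 0.5, s))
```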

11.
Log‐normal linear regression models are popular in many fields of research. Bayesian estimation of the conditional mean of the dependent variable is problematic as many choices of the prior for the variance (on the log‐scale) lead to posterior distributions with no finite moments. We propose a generalized inverse Gaussian prior for this variance and derive the conditions on the prior parameters that yield posterior distributions of the conditional mean of the dependent variable with finite moments up to a pre‐specified order. The conditions depend on one of the three parameters of the suggested prior; the other two have an influence on inferences for small and medium sample sizes. A second goal of this paper is to discuss how to choose these parameters according to different criteria including the optimization of frequentist properties of posterior means.

12.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques using point estimates of auxiliary parameters for estimating expected blinded sample size: (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for purposes of sample size adjustments. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size are closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to ease of implementation.

13.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident to observe a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
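A toy version of the robustness calculation, using the posterior mean as the quantity of interest (the paper also treats credible-interval limits and works pre-posteriorly through predictive distributions): as the Beta(a, b) prior ranges over a finite class, the range of posterior means shrinks like O(1/n), so one can solve for the smallest n meeting a tolerance. The prior class, the anticipated outcome y ≈ n/2, and the tolerance are all invented for illustration.

```python
def posterior_mean_range(n, y, prior_class):
    """Range of the Bernoulli posterior mean as the conjugate Beta(a, b) prior
    varies over a finite class: posterior mean = (a + y) / (a + b + n)."""
    means = [(a + y) / (a + b + n) for a, b in prior_class]
    return min(means), max(means)

# a small illustrative class of conjugate priors
prior_class = [(1, 1), (2, 2), (1, 3), (3, 1)]

def smallest_n(tol, prior_class, n_grid=range(10, 2001, 10)):
    """Smallest grid n whose posterior-mean range, at a balanced outcome
    y = n // 2, falls below the robustness tolerance."""
    for n in n_grid:
        lo, hi = posterior_mean_range(n, n // 2, prior_class)
        if hi - lo < tol:
            return n
    return None

n_star = smallest_n(0.01, prior_class)
```

For this class the range at y = n/2 is exactly 2/(n + 4), so the grid search returns the first multiple of 10 beyond 196.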

14.
A computational problem in many fields is to estimate simultaneously multiple integrals and expectations, assuming that the data are generated by some Monte Carlo algorithm. Consider two scenarios in which draws are simulated from multiple distributions but the normalizing constants of those distributions may be known or unknown. For each scenario, existing estimators can be classified as using individual samples separately or using all the samples jointly. The latter pooled‐sample estimators are statistically more efficient but computationally more costly to evaluate than the separate‐sample estimators. We develop a cluster‐sample approach to obtain computationally effective estimators, after draws are generated for each scenario. We divide all the samples into mutually exclusive clusters and combine samples from each cluster separately. Furthermore, we exploit a relationship between estimators based on samples from different clusters to achieve variance reduction. The resulting estimators, compared with the pooled‐sample estimators, typically yield similar statistical efficiency but have reduced computational cost. We illustrate the value of the new approach by two examples for an Ising model and a censored Gaussian random field. The Canadian Journal of Statistics 41: 151–173; 2013 © 2012 Statistical Society of Canada

15.
In this paper, progressive-stress accelerated life tests are applied when the lifetime of a product under design stress follows the exponentiated distribution [G(x)]α. The baseline distribution, G(x), follows a general class of distributions which includes, among others, Weibull, compound Weibull, power function, Pareto, Gompertz, compound Gompertz, normal and logistic distributions. The scale parameter of G(x) satisfies the inverse power law and the cumulative exposure model holds for the effect of changing stress. A special case for an exponentiated exponential distribution has been discussed. Using type-II progressive hybrid censoring and MCMC algorithm, Bayes estimates of the unknown parameters based on symmetric and asymmetric loss functions are obtained and compared with the maximum likelihood estimates. Normal approximation and bootstrap confidence intervals for the unknown parameters are obtained and compared via a simulation study.

16.
When testing hypotheses in two-sample problems, the Wilcoxon rank-sum test is often used to test the location parameter, and this test has been discussed by many authors over the years. One modification of the Wilcoxon rank-sum test was proposed by Tamura [On a modification of certain rank tests. Ann Math Stat. 1963;34:1101–1103]. Deriving the exact critical value of the statistic is difficult when the sample sizes are increased. The normal approximation, the Edgeworth expansion, the saddlepoint approximation, and the permutation test were used to evaluate the upper tail probability for the modified Wilcoxon rank-sum test given finite sample sizes. The accuracy of various approximations to the probability of the modified Wilcoxon statistic was investigated. Simulations were used to investigate the power of the modified Wilcoxon rank-sum test for the one-sided alternative with various population distributions for small sample sizes. The method was illustrated by the analysis of real data.
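The approximation question is easiest to pose for the unmodified Wilcoxon–Mann–Whitney statistic (Tamura's modification has a different null distribution, so this is only an illustration of the exact-versus-approximate comparison): the exact upper tail from the classical counting recursion against the continuity-corrected normal approximation.

```python
import math
from functools import lru_cache
from scipy.stats import norm

@lru_cache(maxsize=None)
def count(u, m, n):
    """Number of rank configurations of m X's and n Y's with Mann-Whitney
    statistic U = u, via the classical recursion: condition on whether the
    largest observation is an X (beats all n Y's) or a Y."""
    if u < 0:
        return 0
    if m == 0 or n == 0:
        return 1 if u == 0 else 0
    return count(u - n, m - 1, n) + count(u, m, n - 1)

def exact_tail(u0, m, n):
    """Exact P(U >= u0) under H0."""
    total = math.comb(m + n, m)
    return sum(count(u, m, n) for u in range(u0, m * n + 1)) / total

def normal_tail(u0, m, n):
    """Continuity-corrected normal approximation to P(U >= u0)."""
    mu = m * n / 2
    sd = math.sqrt(m * n * (m + n + 1) / 12)
    return norm.sf((u0 - 0.5 - mu) / sd)

m, n, u0 = 8, 9, 55
p_exact = exact_tail(u0, m, n)
p_norm = normal_tail(u0, m, n)
```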

17.
Liu and Singh (1993, 2006) introduced a depth‐based d‐variate extension of the nonparametric two sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth‐based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of skewed distributions considered in this paper the power of the proposed tests can attain twice the power of the Liu‐Singh test for d ≥ 1. Finally, in the k‐sample case, it is shown that the asymptotic distribution of the proposed percentile modified Kruskal‐Wallis type test is χ² with k − 1 degrees of freedom. Power properties of this k‐sample test are similar to those of the proposed two-sample test. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

18.
A model is presented for data subject to sudden structural changes of unspecified nature. The structural shifts are generated by a random walk component whose innovations belong to the normal domain of attraction of a symmetric stable law. To test the model against the stationarity case, several non-parametric and regression-based statistics are studied. The non-parametric tests are a generalization of the variance ratio test to innovations with heavy-tailed distributions. The tests are consistent, shown to have good finite-sample size and power properties, and applied to a set of economic variables.
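The non-parametric statistics generalize the classical variance ratio, which is worth recalling: for a pure random walk the variance of q-step increments is q times that of 1-step increments, so the ratio is ≈ 1, while a stationary series pushes it below 1 at longer lags. A sketch with Gaussian innovations (the paper's heavy-tailed, stable-domain setting changes the limit theory, not this basic statistic):

```python
import numpy as np

rng = np.random.default_rng(8)

def variance_ratio(x, q):
    """Lo-MacKinlay-style variance ratio: variance of q-period increments over
    q times the variance of 1-period increments; ~1 for a pure random walk."""
    d1 = np.diff(x)
    dq = x[q:] - x[:-q]
    return dq.var(ddof=1) / (q * d1.var(ddof=1))

rw = np.cumsum(rng.standard_normal(20000))   # random walk: VR near 1
stat_rw = variance_ratio(rw, q=5)

ar = np.empty(20000)                         # stationary AR(1): VR well below 1
ar[0] = 0.0
for t in range(1, 20000):
    ar[t] = 0.5 * ar[t - 1] + rng.standard_normal()
stat_ar = variance_ratio(ar, q=5)
```

For the AR(1) case the population ratio is (1 − ρ^q)/(q(1 − ρ)), about 0.39 here, well separated from the random-walk value.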

19.
WILCOXON-TYPE RANK-SUM PRECEDENCE TESTS
This paper introduces Wilcoxon‐type rank‐sum precedence tests for testing the hypothesis that two life‐time distribution functions are equal. They extend the precedence life‐test first proposed by Nelson in 1963. The paper proposes three Wilcoxon‐type rank‐sum precedence test statistics—the minimal, maximal and expected rank‐sum statistics—and derives their null distributions. Critical values are presented for some combinations of sample sizes, and the exact power function is derived under the Lehmann alternative. The paper examines the power properties of the Wilcoxon‐type rank‐sum precedence tests under a location‐shift alternative through Monte Carlo simulations, and it compares the power of the precedence test, the maximal precedence test and the Wilcoxon rank‐sum test (based on complete samples). Two examples are presented for illustration.
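The backbone of any precedence test is its null counting distribution: with all m + n lifetimes exchangeable under H0, the number J of X-failures preceding the r-th Y-failure is negative hypergeometric. The sketch below tabulates that law and the resulting upper-tail levels for the plain precedence test (the paper's rank-sum variants weight the counts differently); the sample sizes are illustrative.

```python
import math

def precedence_null_pmf(j, r, n, m):
    """P(exactly j of the n X-failures precede the r-th of the m Y-failures)
    under H0: identical lifetime distributions (negative hypergeometric law)."""
    return (math.comb(r - 1 + j, j) * math.comb(m - r + n - j, n - j)
            / math.comb(m + n, n))

n, m, r = 10, 10, 5
pmf = [precedence_null_pmf(j, r, n, m) for j in range(n + 1)]

# attainable upper-tail levels: reject when at least j0 X's precede the r-th Y
pvals = {j0: sum(pmf[j0:]) for j0 in range(n + 1)}
```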

20.
In this paper, we consider a by-claim risk model with a constant rate of interest force, in which the main claims and the by-claims form a sequence of pTQAI nonnegative random variables and all their distributions belong to the dominatedly-varying heavy-tailed subclass. We obtain the asymptotically upper and lower bound formulas of the ultimate ruin probability for such a by-claim risk model. As its by-products, some interesting properties for pTQAI structure are also investigated. The results extend some existing ones in the literature.
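A Monte Carlo caricature of the model (independent claims, finite horizon, invented parameters — the paper's pTQAI dependence structure and asymptotic bound formulas are not reproduced here): surplus earns interest at force δ, premiums accrue continuously at rate 1, Pareto main claims arrive in a Poisson stream, and each main claim triggers a delayed by-claim of half its size.

```python
import numpy as np

rng = np.random.default_rng(9)

def ruin_prob(u, delta, lam, pareto_a, horizon, reps=3000):
    """Monte Carlo finite-horizon ruin probability with interest force delta,
    unit premium rate, Poisson(lam) claim arrivals, Pareto(pareto_a) main
    claims, and by-claims of half the main claim settled at the first claim
    epoch more than one time unit later (a crude delay mechanism)."""
    ruined = 0
    for _ in range(reps):
        t, surplus = 0.0, u
        pending = []                       # (due_time, by_claim_amount)
        while t < horizon:
            w = rng.exponential(1 / lam)   # waiting time to next claim
            # accrue interest on surplus and continuously-paid premiums
            surplus = surplus * np.exp(delta * w) + (np.exp(delta * w) - 1) / delta
            t += w
            if t >= horizon:
                break
            main = (1 - rng.random()) ** (-1 / pareto_a) - 1   # Pareto main claim
            due = sum(amt for d, amt in pending if d <= t)     # matured by-claims
            pending = [(d, amt) for d, amt in pending if d > t]
            pending.append((t + 1.0, 0.5 * main))              # schedule by-claim
            surplus -= main + due
            if surplus < 0:
                ruined += 1
                break
    return ruined / reps

psi = ruin_prob(u=10.0, delta=0.03, lam=1.0, pareto_a=1.5, horizon=20.0)
```

With these values the net drift is strongly negative (mean claim cost per unit time well above the premium rate), so the estimated finite-horizon ruin probability is large.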


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号