Similar Articles (20 results found)
1.
An interesting topic in mathematical statistics is the construction of confidence intervals. Two types of intervals, both based on the pivotal-quantity method, are available: the shortest confidence interval (SCI) and the equal-tails confidence interval (ETCI). The aims of this article are: (i) to clarify and comment on methods of finding such intervals; (ii) to investigate the relationship between the two types; (iii) to point out that confidence intervals of shortest length do not always exist, even when the distribution of the pivotal quantity is symmetric; and (iv) to give analogous results under the Bayesian approach.
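As a concrete illustration of the two constructions (not taken from the article), the Python sketch below computes both intervals for a normal variance from the chi-square pivot (n-1)S^2/sigma^2; the sample numbers are made up.

import numpy as np
from scipy import stats, optimize

n, s2, alpha = 20, 4.0, 0.05      # sample size, sample variance, level
df = n - 1
pivot = stats.chi2(df)            # pivotal quantity (n-1)S^2/sigma^2

# Equal-tails interval: cut alpha/2 from each tail of the pivot.
a_et, b_et = pivot.ppf(alpha / 2), pivot.ppf(1 - alpha / 2)
etci = (df * s2 / b_et, df * s2 / a_et)

# Shortest interval: over cutoffs a < b with F(b) - F(a) = 1 - alpha,
# minimise the interval length df*s2*(1/a - 1/b).
def length(a):
    b = pivot.ppf(pivot.cdf(a) + 1 - alpha)
    return df * s2 * (1.0 / a - 1.0 / b)

res = optimize.minimize_scalar(length, bounds=(1e-6, pivot.ppf(alpha)),
                               method="bounded")
a_sh = res.x
b_sh = pivot.ppf(pivot.cdf(a_sh) + 1 - alpha)
scci = (df * s2 / b_sh, df * s2 / a_sh)

print("equal tails:", etci, "length:", etci[1] - etci[0])
print("shortest  :", scci, "length:", scci[1] - scci[0])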

2.
The Weibull distribution is composited with the Pareto model to obtain a flexible, reliable long-tailed parametric distribution for modeling unimodal failure-rate data. The hazard function of the composite family accommodates decreasing and unimodal failure rates, which are separated by the boundary of the shape-parameter space at which the shape parameter gamma equals a known constant. Least squares and maximum likelihood parameter estimation techniques are discussed. The advantages of using the proposed family are demonstrated and compared on well-known examples: guinea pig survival time data, head and neck cancer data, and nasopharynx cancer survival data.

3.
The two-parameter Inverse Gaussian (IG) distribution is often appropriate for modeling nonnegative right-skewed data, owing to its striking similarities with the Gaussian distribution in basic properties and inference methods. About 40 such G-IG analogies have been developed in the literature; they were most recently tabulated by Mudholkar and Wang. Of these, the earliest and most commonly noted similarities are the significance tests, based on Student's t and F distributions, for the homogeneity of one, two, or several means of IG populations. However, unlike the corresponding tests in Gaussian theory, little is known about the power functions of these basic tests. In this article, we employ the IG-related root-reciprocal IG distribution and a notion of reciprocal symmetry to establish the monotonicity of the power function of the test of significance for the IG mean.

4.
A parameter-free Bernstein-type upper bound is derived for the probability that the sum S of n i.i.d. unimodal random variables with finite support, X1, X2, …, Xn, exceeds its mean E(S) by the positive value nt. The bound for P{S - nμ ≥ nt} depends on the range of the summands, the sample size n, the positive number t, and the type of unimodality assumed for the Xi. A two-sided Gauss-type probability inequality for sums of strongly unimodal random variables is also given. The new bounds are contrasted with Hoeffding's inequality for bounded random variables and with the Bienaymé-Chebyshev inequality. Finally, the new inequalities are applied to a classic probability-inequality example first published by Savage (1961).
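For orientation, the sketch below evaluates the two classical benchmarks named above, Hoeffding's bound and a one-sided Bienaymé-Chebyshev (Cantelli) bound, for an illustrative bounded i.i.d. sum; the article's Bernstein-type and Gauss-type bounds are not reproduced here.

import numpy as np

n, t = 100, 0.1        # sample size and deviation threshold n*t
a, b = 0.0, 1.0        # common bounded support of the summands
var = 1.0 / 12.0       # summand variance, e.g. Uniform(0, 1)

# Hoeffding: P(S - E[S] >= n*t) <= exp(-2*n*t^2/(b - a)^2)
hoeffding = np.exp(-2 * n * t**2 / (b - a) ** 2)

# One-sided Bienaymé-Chebyshev (Cantelli):
# P(S - E[S] >= n*t) <= Var(S)/(Var(S) + (n*t)^2)
cantelli = (n * var) / (n * var + (n * t) ** 2)

print(f"Hoeffding: {hoeffding:.4g}, Cantelli: {cantelli:.4g}")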

5.
In this paper we introduce a new class of multivariate unimodal distributions, motivated by Khintchine's representation for unimodal densities on the real line. We first construct a new class of univariate unimodal distributions, which extends naturally to higher dimensions via the multivariate Gaussian copula. In both the univariate and multivariate settings, we provide MCMC algorithms to perform inference about the model parameters and predictive densities. The methodology is illustrated with univariate and bivariate examples, and with variables taken from a real data set.
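Khintchine's representation says that X is unimodal about 0 exactly when X has the same distribution as U*Z, with U ~ Uniform(0, 1) independent of Z. The sketch below samples from such a product; the normal choice for Z is only illustrative and is not the model of the article.

import numpy as np

rng = np.random.default_rng(0)
m = 100_000
u = rng.uniform(0.0, 1.0, size=m)   # the uniform Khintchine factor
z = rng.normal(0.0, 2.0, size=m)    # any generator distribution would do
x = u * z                           # draws from a density unimodal at 0

# Quick check: the histogram of x peaks in the bin containing 0.
hist, edges = np.histogram(x, bins=101, range=(-5.0, 5.0), density=True)
k = np.argmax(hist)
print("mode bin centre:", (edges[k] + edges[k + 1]) / 2)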

6.
Reply     
This article presents a large class of probability densities f(x, θ) for which, with positive probability, the maximum likelihood estimator based on a sample of size 2 is non-unique and its possible values do not form an interval. Such a density f(x, θ) can even be chosen to be unimodal; one such example is the Cauchy density with a location parameter. A discrete version of the argument gives examples in which the non-uniqueness of the maximum likelihood estimator persists for samples of arbitrary size.
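The Cauchy location example is easy to reproduce numerically. In the sketch below (observations chosen purely for illustration), with x1 = -5 and x2 = 5 the log-likelihood has two global maximisers at ±sqrt(d^2 - 1), where d = (x2 - x1)/2, so the set of maximisers is not an interval.

import numpy as np

x1, x2 = -5.0, 5.0
theta = np.linspace(-8.0, 8.0, 100_001)

# Cauchy log-likelihood in the location parameter (constants dropped).
loglik = -np.log1p((x1 - theta) ** 2) - np.log1p((x2 - theta) ** 2)

d = (x2 - x1) / 2
print("predicted maximisers:", -np.sqrt(d**2 - 1), np.sqrt(d**2 - 1))
print("grid maximiser found:", theta[np.argmax(loglik)])
print("midpoint is a local minimum between them:",
      loglik[theta.size // 2] < loglik.max())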

7.
We obtain upper bounds on the variance of discrete unimodal distributions. Alternative proofs of the corresponding bounds for continuous unimodal distributions are also given.

8.
A method is proposed for shape-constrained density estimation under a variety of constraints, including but not limited to unimodality, monotonicity, symmetry, and constraints on the number of inflection points of the density or its derivative. The method computes an adjustment curve that brings a pre-existing pilot estimate into conformance with the specified shape restrictions. The pilot estimate may be obtained with any preferred estimator, and the optimal adjustment can be computed using fast, readily available quadratic-programming routines. This makes the proposed procedure generic and easy to implement.
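A loose sketch of the adjustment idea, under the simplifying assumption that the constraint is pure monotonicity: the least-squares projection of a pilot kernel estimate onto decreasing densities can be computed with the pool-adjacent-violators algorithm, standing in here for the article's quadratic-programming step.

import numpy as np
from scipy.stats import gaussian_kde

def pava_decreasing(y):
    # L2 projection of y onto non-increasing sequences: negate, run the
    # pool-adjacent-violators pass for a non-decreasing fit, negate back.
    blocks = [[-v, 1] for v in np.asarray(y, dtype=float)]
    merged = []
    for blk in blocks:
        merged.append(blk)
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, m1 = merged.pop(), merged.pop()
            size = m1[1] + m2[1]
            merged.append([(m1[0] * m1[1] + m2[0] * m2[1]) / size, size])
    return -np.concatenate([[m[0]] * m[1] for m in merged])

rng = np.random.default_rng(1)
data = rng.exponential(1.0, size=300)     # a genuinely decreasing density
grid = np.linspace(0.0, 6.0, 200)
pilot = gaussian_kde(data)(grid)          # unconstrained pilot estimate
adjusted = pava_decreasing(pilot)         # shape-restricted estimate
adjusted /= adjusted.sum() * (grid[1] - grid[0])   # renormalise to a density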

9.
The inverse Weibull distribution is one of the most widely applied distributions in reliability theory. In this article, we introduce a generalization, referred to as the Beta Inverse-Weibull distribution, generated from the logit of a beta random variable, and provide a comprehensive treatment of its mathematical properties. The shapes of the probability density function and the hazard rate function are obtained and illustrated graphically; the distribution is found to be unimodal. Results for the noncentral moments are obtained, and the relationships between the parameters and the mean, variance, skewness, and kurtosis are provided. Maximum likelihood estimation of the model parameters is proposed. We hope that this generalization will find wider applicability in reliability theory and mechanical engineering.
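The construction appears to be the usual beta-generated family, f(x) = g(x) G(x)^(a-1) (1 - G(x))^(b-1) / B(a, b) with G the inverse Weibull CDF; the sketch below implements that density under this assumption (the article's exact parametrisation may differ).

import numpy as np
from scipy.special import beta as beta_fn

def inv_weibull_cdf(x, scale, shape):
    return np.exp(-((scale / x) ** shape))

def inv_weibull_pdf(x, scale, shape):
    return shape * scale**shape * x ** (-shape - 1) * inv_weibull_cdf(x, scale, shape)

def beta_inv_weibull_pdf(x, a, b, scale=1.0, shape=2.0):
    G = inv_weibull_cdf(x, scale, shape)
    g = inv_weibull_pdf(x, scale, shape)
    return g * G ** (a - 1) * (1.0 - G) ** (b - 1) / beta_fn(a, b)

x = np.linspace(0.05, 8.0, 2000)
f = beta_inv_weibull_pdf(x, a=2.0, b=3.0)
print("mass on the grid (should be close to 1):", f.sum() * (x[1] - x[0]))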

10.
This paper proposes a new nonparametric unimodal estimator of a unimodal probability density function for the case where the mode is known. The classical solution to this problem is the maximum likelihood estimator under a monotonicity constraint, considered by Grenander (1956). Our approach is based on a unimodal rearrangement of the kernel estimator of the density. Asymptotic properties of the estimator are studied, and its small-sample behaviour is examined through simulations.
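A rough sketch of a unimodal rearrangement with a known mode: evaluate a kernel estimate on a uniform grid, then sort its values increasingly to the left of the mode and decreasingly to the right, which preserves the mass on each side. The article's rearrangement is more careful than this; the sketch only conveys the construction.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=200)
mode = 0.0                                  # the mode, assumed known

grid = np.linspace(-4.0, 4.0, 401)          # uniform grid
f = gaussian_kde(data)(grid)                # ordinary kernel estimate

left = grid < mode
f_uni = f.copy()
f_uni[left] = np.sort(f[left])              # increasing up to the mode
f_uni[~left] = -np.sort(-f[~left])          # decreasing after the mode
# f_uni is unimodal with mode at `mode` and the same mass as f on each side.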

11.
We apply statistical selection theory to multiple target detection problems by analyzing the Mahalanobis distances between multivariate normal populations and a desired standard (a known characteristic of a target). The goal is to select a subset that contains no non-target (negative) sites, which entails screening out all non-targets; correct selection (CS) is defined according to this goal. We consider two cases: (1) all covariance matrices are known; and (2) all covariance matrices are unknown, covering both the heteroscedastic and homoscedastic settings. Optimal selection procedures are proposed to attain the selection goal, and the least favorable configurations (LFC) are found. Tables and figures illustrate the properties of the proposed procedures, and simulation examples show that they work well. Log-concavity results for the operating characteristic functions are also given.

12.
In a treatment-selection experiment, random samples are drawn from k populations with ordered means. The probability that a sample statistic from the population with the highest mean is ranked highest is referred to as the probability of correct selection (PCS). An inequality proved previously establishes the monotonicity of the PCS with respect to changes in the variance of the samples. In this article, we first present a more general form of that probability inequality for investigating the PCS. We then extend the monotonicity of the PCS to order statistics, showing that the PCS based on the smallest order statistic preserves the monotonicity, and use a normal approximation to generalize the theory further. General order statistics do not enjoy the same properties; we describe the obstacles and give a numerical counterexample.
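The monotonicity in the variance is easy to see by simulation. A minimal sketch (all settings illustrative): draw sample means from k normal populations with ordered means and record how often the best population ranks highest; shrinking the common variance raises the PCS.

import numpy as np

def pcs(mu, sigma, n, reps=20_000, seed=3):
    # Monte Carlo estimate of P(sample mean of the best population ranks 1st).
    rng = np.random.default_rng(seed)
    means = rng.normal(loc=np.tile(mu, (reps, 1)), scale=sigma / np.sqrt(n))
    return np.mean(means.argmax(axis=1) == len(mu) - 1)

mu = np.array([0.0, 0.2, 0.5])              # ordered population means
for sigma in (2.0, 1.0, 0.5):               # PCS should rise as sigma falls
    print(f"sigma = {sigma}: PCS ~ {pcs(mu, sigma, n=10):.3f}")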

13.
Nonparametric tests of modality are a distribution-free way of assessing evidence about inhomogeneity in a population, provided that the potential subpopulations are sufficiently well separated. They include the excess mass and dip tests, which are equivalent in univariate settings and are alternatives to the bandwidth test. Only very conservative forms of the excess mass and dip tests are available at present, however, and for that reason they are generally not competitive with the bandwidth test. In the present paper we develop a practical approach to calibrating the excess mass and dip tests that substantially improves their level accuracy and power. The calibration exploits the fact that the limiting distribution of the excess mass statistic under the null hypothesis depends on unknowns only through a constant, which may be estimated. The calibrated test is shown to have greater power and level accuracy than the bandwidth test, which tends to be quite conservative, even in an asymptotic sense. Moreover, the calibrated test avoids difficulties that the bandwidth test has with spurious modes in the tails, which often must be discounted through subjective intervention of the experimenter.

14.
A reduced U-statistic is a U-statistic with its summands drawn from a restricted but balanced set of pairs. In this article, central limit theorems are derived for reduced U-statistics under a mixing condition, which significantly extends the work of Brown & Kildea in various respects. It is shown and illustrated that reduced U-statistics are quite useful for deriving test statistics in various nonparametric testing problems.

15.
Consider a linear regression model with an n-dimensional response vector, a p-dimensional regression parameter β, and independent and identically distributed errors. Suppose that the parameter of interest is θ = a'β, where a is a specified vector. Define the parameter τ = c'β − t, where the vector c and the number t are specified. Also suppose that we have uncertain prior information that τ = 0. Part of our evaluation of a frequentist confidence interval for θ is the ratio (expected length of this confidence interval)/(expected length of the standard confidence interval), which we call the scaled expected length of the interval. We say that a confidence interval for θ utilizes this uncertain prior information if: (i) its scaled expected length is substantially less than 1 when τ = 0; (ii) the maximum value of its scaled expected length is not too much larger than 1; and (iii) it reverts to the standard confidence interval when the data happen to strongly contradict the prior information. Kabaila and Giri (2009) present a new method for finding such a confidence interval. Let θ̂ and τ̂ denote the least squares estimators of θ and τ, and let ρ denote the correlation of θ̂ and τ̂. Using computations and new theoretical results, we show that the performance of this confidence interval improves as |ρ| increases and as n − p decreases.

16.
ActivStats 2.0.: Paul Velleman. Reading, MA: Addison-Wesley, 1998, CD-ROM, $44.25, ISBN: 0-201-31068-6.

The Active Practice of Statistics.: David S. Moore. New York: W. H. Freeman & Co., 1997, xv + 352 pp., $49.95, ISBN: 0-7167-3140-1. Reviewed by Jon Maatta

Introduction to Statistical Reasoning.: Gary Smith. Boston: McGraw-Hill, 1998, xviii + 654 pp., $46.00, ISBN: 0-07-059276-4. Reviewed by Robert W. Hayden

Modern Applied Statistics with S-Plus (2nd ed.).: William N. Venables and Brian D. Ripley. New York: Springer-Verlag, 1997, xvii + 548 pp., $59.95, ISBN: 0-387-98214-0. Reviewed by Karen Kafadar and James R. Koehler

A Course in Mathematical Statistics (2nd ed.): George C. Roussas. San Diego: Academic Press, 1997, xx + 572 pp., $59.95, ISBN: 0125993153. Reviewed by David W. Macky

17.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for the estimation of parameters in stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are missing.

Both the Implicit and Bayesian approaches are illustrated using simulated data, and both are applied to daily stock-return data on the CAC40 index.


18.
19.
This paper studies dependencies between two given events modelled by point processes. In particular, we focus on DNA data, detecting favoured or avoided distances between two given motifs along a genome that suggest possible interactions at the molecular level. To this end, we introduce a so-called reproduction function h that quantifies the favoured positions of the motifs and is treated as the intensity of a Poisson process. Our first interest is the estimation of this function h, assumed to be well localized. An estimator based on random thresholds achieves an oracle inequality, and minimax properties of this estimator on Besov balls are established. Simulations demonstrate the good practical behaviour of our procedure. Finally, the method is applied to the analysis of the dependence between promoter sites and genes along the genome of the bacterium Escherichia coli.
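To convey what the reproduction function h captures, the sketch below histograms signed distances from occurrences of one motif to nearby occurrences of another in synthetic data; peaks suggest favoured distances and dips avoided ones. The article's estimator is a thresholded estimator with oracle guarantees, not this raw histogram.

import numpy as np

rng = np.random.default_rng(4)
genome_len = 1_000_000
pos_a = np.sort(rng.integers(0, genome_len, size=2_000))
# Plant half of motif B ~100 bases downstream of A, the rest at random.
pos_b = np.sort(np.concatenate([
    pos_a[:1_000] + 100 + rng.integers(-5, 6, size=1_000),
    rng.integers(0, genome_len, size=1_000),
]))

window = 500
dists = []
for a in pos_a:
    near = pos_b[(pos_b > a - window) & (pos_b < a + window)]
    dists.extend(near - a)

hist, edges = np.histogram(dists, bins=np.arange(-window, window + 1, 10))
print("most frequent distance bin starts at:", edges[np.argmax(hist)])  # ~100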

20.
Let {N(t), t > 0} be a Poisson process with rate λ > 0, independent of the independent and identically distributed random variables X1, X2, … with mean μ and variance σ². The stochastic process X(t), defined as the sum of the first N(t) of these variables, is then called a compound Poisson process and has a wide range of applications in, for example, physics, mining, finance, and risk management. Among these applications, the average number of objects, which is defined to be λμ, is an important quantity. Although many papers have been devoted to the estimation of λμ in the literature, in this paper we use the well-known empirical likelihood method to construct confidence intervals for it. Simulation results show that the empirical likelihood method often outperforms the normal approximation and Edgeworth expansion approaches in terms of coverage probability. A real data set concerning coal-mining disasters is analyzed using these methods.
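For context, the sketch below simulates unit-time increments of a compound Poisson process and forms the normal-approximation interval for λμ that the article uses as a benchmark; the empirical likelihood interval itself requires a small optimisation and is not reproduced here. The exponential jump distribution is an illustrative assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
lam, mu, T = 3.0, 2.0, 200            # true rate, jump mean, number of periods

counts = rng.poisson(lam, size=T)     # N over each unit time interval
increments = np.array([rng.exponential(mu, size=c).sum() for c in counts])

est = increments.mean()                         # estimates lambda*mu = 6
se = increments.std(ddof=1) / np.sqrt(T)
z = stats.norm.ppf(0.975)
print(f"estimate {est:.2f}, 95% CI ({est - z * se:.2f}, {est + z * se:.2f})")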
