Similar Literature
20 similar documents found.
1.
The Buehler 1 − α upper confidence limit is as small as possible, subject to the constraints that its coverage probability is at least 1 − α and that it is a non-decreasing function of a pre-specified statistic T. This confidence limit has important biostatistical and reliability applications. Previous research has examined how the choice of T affects the efficiency of the Buehler 1 − α upper confidence limit for a given value of α. This paper considers how T should be chosen when the Buehler limit is to be computed for a range of values of α. If T is allowed to depend on α, then the Buehler limit is not necessarily a non-increasing function of α, i.e. the limit is 'non-nesting'. Furthermore, non-nesting occurs in standard and practical examples. Therefore, if the limit is to be computed for a range [αL, αU] of values of α, this paper suggests that T should be a carefully chosen approximate 1 − αL upper limit for the parameter of interest θ. This choice leads to Buehler limits that have high statistical efficiency and are nesting.

2.
When the data are discrete, standard approximate confidence limits often have coverage well below nominal for some parameter values. While ad hoc adjustments may largely solve this problem in particular cases, Kabaila & Lloyd (1997) gave a more systematic method of adjustment which leads to tight upper limits: limits whose coverage is never below nominal and which are as small as possible within a particular class. However, their computation is infeasible for all but the simplest models. This paper suggests modifying tight upper limits by first replacing the unknown nuisance parameter vector by its profile maximum likelihood estimator. While the resulting limits no longer possess the exact optimality properties of tight limits, the paper presents both numerical and theoretical evidence that the resulting coverage function is close to optimal. Moreover, these profile upper limits are much (possibly many orders of magnitude) easier to compute than tight upper limits.

3.
Let X be a continuous nonnegative random variable with finite first and second moments and a continuous pdf that is positive on the interior of its support. A nonzero limiting density at the origin and a coefficient of variation (CV) greater than 1 are shown to be sufficient conditions for the distribution truncated below at t > 0 to have a variance greater than the variance of the full distribution. Distributions that satisfy these conditions include those with decreasing hazard rates (e.g., the gamma and Weibull distributions with shape parameters less than 1) and the beta distribution with parameter values p and q for which q > p(p + q + 1). The bound T for which truncation at 0 < t < T increases the variance relative to the full distribution is shown to be greater than the (1 − 1/CV)th percentile of the full distribution.
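A quick Monte Carlo check of this result, using a gamma distribution with shape 0.5 (decreasing hazard rate, CV = √2 > 1); the truncation point and sample size below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Gamma with shape < 1: decreasing hazard rate, CV = 1/sqrt(shape) > 1,
# and the density does not vanish at the origin.
rng = np.random.default_rng(0)
x = rng.gamma(0.5, 1.0, size=2_000_000)

t = 0.5                       # truncation point (illustrative)
full_var = x.var()            # variance of the full distribution (theory: 0.5)
trunc_var = x[x > t].var()    # variance of the distribution truncated below at t
print(full_var, trunc_var)    # truncation increases the variance
```

Repeating this with other t below the bound T of the abstract gives the same ordering.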

4.
We consider the problem of finding an upper 1 − α confidence limit (α < ½) for a scalar parameter of interest θ in the presence of a nuisance parameter vector ψ when the data are discrete. Using a statistic T as a starting point, Kabaila & Lloyd (1997) define what they call the tight upper limit with respect to T. This tight upper limit possesses certain attractive properties. However, these properties provide very little guidance on the choice of T itself. The practical recommendation made by Kabaila & Lloyd (1997) is that T be an approximate upper 1 − α confidence limit for θ rather than, say, an approximately median unbiased estimator of θ. We derive a large-sample approximation which provides strong theoretical support for this recommendation.

5.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+ | φ) as well as for the posterior distribution P(φ | T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a way to do a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
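Under the sign-bias model described here, T+ is a sum over the ranks 1, …, n, each included independently with probability φ, so the exact likelihood P(T+ | φ) can be built by convolution and combined with a uniform prior on a grid. This is only a sketch of the analysis; the function name and the illustrative values of n and T+ are not from the paper:

```python
import numpy as np

def tplus_pmf(n, phi):
    """Exact P(T+ = k | phi): rank i (i = 1..n) enters the sum with prob phi."""
    pmf = np.zeros(n * (n + 1) // 2 + 1)
    pmf[0] = 1.0
    for i in range(1, n + 1):
        new = pmf * (1 - phi)          # rank i excluded
        new[i:] += pmf[:-i] * phi      # rank i included: shift the sum by i
        pmf = new
    return pmf

# posterior for phi under a uniform prior, on a grid (illustrative n and T+)
n, t_obs = 10, 40
grid = np.linspace(0.001, 0.999, 999)
like = np.array([tplus_pmf(n, p)[t_obs] for p in grid])
post = like / like.sum()
print(grid[post.argmax()])   # posterior mode, well above 1/2 for this T+
```

For n beyond about 25 the abstract's asymptotic approximations replace this exact convolution.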

6.
Let T be a random variable having an absolutely continuous distribution function. It is known that linearity of E(T | T > t) can be used to characterize distributions such as the exponential, power and Pareto distributions. In this work, we extend the above results. More precisely, we characterize the distribution of T by using certain relationships between conditional moments of T. Our results can also be used to obtain new characterizations of distributions based on adjacent order statistics or record values.
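For the exponential distribution, for instance, E(T | T > t) = t + 1/λ is linear in t (the memoryless property); a small simulation illustrates this. The rate λ = 2 and the grid of t values are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
t_samples = rng.exponential(1 / lam, size=2_000_000)

# For the exponential, E(T | T > t) = t + 1/lam for every t: linear in t.
cond_means = {t: t_samples[t_samples > t].mean() for t in (0.2, 0.5, 1.0)}
print(cond_means)   # each value is close to t + 0.5
```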

7.
Consider estimation of a unit vector parameter α in two classes of distributions. In the first, α is a direction. In the second, α is an axis, so that −α and α are equivalent: the aim is to obtain the projector ααᵀ. In each case the paper uses first principles to define measures of the divergence of such estimators and derives lower bounds for them. These bounds are computed explicitly for the Fisher–von Mises and Scheidegger–Watson densities on the q-dimensional sphere Ωq. In the latter case, the tightness of the bound is established by simulations.

8.
Consider the process TφN(t) defined in (1.2) on page 265 of B1, with X1, …, XN a sample from a distribution F and, for i = 1, …, N, R|Xi − qiφ| the rank of |Xi − qiφ| among |X1 − q1φ|, …, |XN − qNφ|. It is shown that, under certain regularity conditions on F and on the constants pi and qi, TφN(t) is asymptotically approximately a linear function of φ, uniformly in t and in φ for |φ| ≤ C. The special case where the pi and the qi are independent of i is considered.

9.
This article explores the calculation of tolerance limits for the Poisson regression model, based on the profile-likelihood methodology and on small-sample asymptotic corrections that improve the coverage probability performance. The data consist of n counts, where the mean or expected rate depends upon covariates via the log regression function. The article evaluates upper tolerance limits as a function of the covariates; the upper tolerance limits are obtained from upper confidence limits of the mean. To compute upper confidence limits, the following methodologies are considered: likelihood-based asymptotic methods, small-sample asymptotic methods that improve the likelihood-based methodology, and the delta method. Two applications are discussed: one relating to defects in semiconductor wafers due to plasma etching, and the other examining the number of surface faults in upper seams of coal mines. All three methodologies are illustrated for the two applications.
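As an illustration of the delta-method route (not the paper's small-sample corrections), one can fit the log-linear model by Newton–Raphson, form a 95% upper confidence limit for the mean at a covariate value of interest, and read off the upper tolerance limit as a Poisson quantile at that upper mean. The data, coefficients and covariate value below are simulated, not from the applications discussed:

```python
import numpy as np
from scipy import stats

# simulated data for a log-linear Poisson model (illustrative, not the paper's data)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
y = rng.poisson(np.exp(X @ np.array([1.0, 0.5])))

# Newton-Raphson fit of the Poisson regression coefficients
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))

# delta-method 95% upper confidence limit for the mean at a covariate value x0
x0 = np.array([1.0, 0.3])
cov = np.linalg.inv((X.T * np.exp(X @ beta)) @ X)   # inverse information matrix
mu_up = np.exp(x0 @ beta + 1.645 * np.sqrt(x0 @ cov @ x0))

# upper tolerance limit: 95th percentile of the Poisson at the upper mean
tol_up = stats.poisson.ppf(0.95, mu_up)
print(beta, mu_up, tol_up)
```

Working on the linear-predictor scale and exponentiating, as above, keeps the limit positive.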

10.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = aᵀβ, where a is specified. When, as a first step, data-based variable selection (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been specified a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables, each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayesian information criterion, showing that these confidence intervals are completely inadequate.
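The phenomenon is easy to reproduce in a toy version of the single-candidate-variable case with known error variance. This sketch (with a made-up design, correlation, sample size and selection rule, none of them from the paper) estimates the coverage of the naive 95% interval when the coefficient of the candidate variable sits near the selection boundary:

```python
import numpy as np

# toy design: one target variable x1 and one highly correlated candidate x2
rng = np.random.default_rng(2)
n, rho, reps = 50, 0.9, 4000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x1, x2 = z1, rho * z1 + np.sqrt(1 - rho**2) * z2
Xf = np.column_stack([x1, x2])

cov_f = np.linalg.inv(Xf.T @ Xf)      # sigma = 1 assumed known
se2 = np.sqrt(cov_f[1, 1])            # sd of the estimator of beta2 (full model)
beta = np.array([0.0, 1.5 * se2])     # beta2 near the selection boundary

hits = 0
for _ in range(reps):
    y = Xf @ beta + rng.standard_normal(n)
    bf = np.linalg.solve(Xf.T @ Xf, Xf.T @ y)
    if (bf[1] / se2) ** 2 > 2:        # AIC-style rule keeps x2: full-model CI
        est, se = bf[0], np.sqrt(cov_f[0, 0])
    else:                              # x2 dropped: reduced-model CI
        est, se = x1 @ y / (x1 @ x1), 1 / np.sqrt(x1 @ x1)
    hits += abs(est - beta[0]) <= 1.96 * se
coverage = hits / reps
print(coverage)                        # well below the nominal 0.95
```

With 20 candidate variables instead of one, direct simulation like this is exactly what the paper's conditioning trick makes cheaper.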

11.
Suppose that X is a discrete random variable whose possible values are {0, 1, 2, …} and whose probability mass function belongs to a family indexed by the scalar parameter θ. This paper presents a new algorithm for finding a 1 − α confidence interval for θ based on X which possesses the following three properties: (i) the infimum over θ of the coverage probability is 1 − α; (ii) the confidence interval cannot be shortened without violating the coverage requirement; (iii) the lower and upper endpoints of the confidence intervals are increasing functions of the observed value x. This algorithm is applied to the particular case in which X has a negative binomial distribution.
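For comparison, a plain tail-inversion (Clopper–Pearson-style) construction for the negative binomial case satisfies the coverage property (i) and the monotonicity property (iii), but not the shortness property (ii) that the paper's algorithm adds; this sketch is that simpler construction, not the paper's algorithm:

```python
from scipy import stats
from scipy.optimize import brentq

def nbinom_exact_ci(x, r, alpha=0.05):
    """Tail-inversion interval for the success probability p, given x observed
    failures before the r-th success.  X is stochastically decreasing in p, so
    the usual binomial tail conditions are flipped."""
    eps = 1e-12
    lower = brentq(lambda p: stats.nbinom.cdf(x, r, p) - alpha / 2, eps, 1 - eps)
    if x == 0:
        upper = 1.0   # no failures: every large p is compatible with the data
    else:
        upper = brentq(lambda p: stats.nbinom.sf(x - 1, r, p) - alpha / 2,
                       eps, 1 - eps)
    return lower, upper

print(nbinom_exact_ci(10, 5))
```

For x = 0 the lower endpoint solves p^r = α/2 in closed form, which is a convenient correctness check.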

12.
In statistical inference on the drift parameter a in the fractional Brownian motion WtH with Hurst parameter H ∈ (0, 1) and a constant drift, YtH = at + WtH, there are many options for how to proceed. We may, for example, base the inference on the properties of the standard normal distribution applied to the differences between the observed values of the process at discrete times. Although such methods are very simple, it turns out to be more appropriate to use inverse methods; these can also be generalized to a non-constant drift. For hypothesis testing about the drift parameter a, it is more proper to standardize the observed process and to use inverse methods based on the first exit time of the observed process from a pre-specified interval before some given time. These procedures are illustrated, and their times of decision are compared against the direct approach. Other generalizations are possible when the random part is a symmetric stochastic integral of a known, deterministic function with respect to fractional Brownian motion.

13.
Two classes of estimators of a location parameter φ0 are proposed, based on a nonnegative functional H1* of the pair (D1φN, GφN), where FN denotes the sample distribution function. The estimators of the first class are defined as a value of φ minimizing H1*; the estimators of the second class are linearized versions of those of the first. The asymptotic distribution of the estimators is derived, and it is shown that the Kolmogorov–Smirnov statistic, the signed linear rank statistics, and the Cramér–von Mises statistics are special cases of such functionals H1*. These estimators are closely related to the estimators of a shift in the two-sample case proposed and studied by Boulanger in B2 (pp. 271–284).

14.
The Fisher exact test has been unjustly dismissed by some as 'only conditional', whereas it is unconditionally the uniformly most powerful test among all unbiased tests, i.e. tests of size α whose power is never below the nominal significance level α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) that is conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test.

Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
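For reference, the conditional one-sided Fisher p-value and the mid-p value compared above are easy to compute from the hypergeometric distribution of the (1,1) cell given the margins; the 2×2 table below is made up for illustration:

```python
from scipy import stats

# one-sided Fisher exact p-value and its mid-p version for a made-up 2x2 table
table = [[7, 3], [2, 8]]
_, p = stats.fisher_exact(table, alternative="greater")

x, m = table[0][0], sum(table[0])                  # cell (1,1), first row total
n2, t = sum(table[1]), table[0][0] + table[1][0]   # second row and column totals

# conditional distribution of cell (1,1) given the margins is hypergeometric
point = stats.hypergeom.pmf(x, m + n2, t, m)
mid_p = p - 0.5 * point        # mid-p: half-weight on the observed table
print(p, mid_p)
```

The mid-p is always smaller than the conservative exact p, which is why it is the more competitive comparator.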

15.
Based on observed dual generalized order statistics drawn from an arbitrary unknown distribution, nonparametric two-sided prediction intervals, as well as upper and lower prediction bounds, are developed for an ordinary and a dual generalized order statistic from another iid sequence with the same distribution. Prediction intervals for dual generalized order statistics based on observed ordinary generalized order statistics are also developed. The coverage probabilities of these prediction intervals are exact and free of the parent distribution F. Finally, numerical computations and real examples of the coverage probabilities are presented for choosing appropriate prediction limits.

16.
Among reliability systems, one of the most basic is the parallel system. In this article, we consider a parallel system consisting of n identical components with independent lifetimes having a common distribution function F. Under the condition that the system has failed by time t, with t being the 100pth percentile of F (i.e., t = F⁻¹(p), 0 < p < 1), we characterize the probability distributions based on the mean past lifetime of the components of the system. These distributions are described by a specific shape to the left of t and an arbitrary continuous function on the right tail.

17.
The classical bivariate F distribution arises from ratios of chi-squared random variables with common denominators. A consequent disadvantage is that its univariate F marginal distributions have one degrees-of-freedom parameter in common. In this paper, we add a further independent chi-squared random variable to the denominator of one of the ratios and explore the extended bivariate F distribution, with marginals on arbitrary degrees of freedom, that results. Transformations linking the F, beta and skew-t distributions are then applied componentwise to produce bivariate beta and skew-t distributions which also afford marginal (beta and skew-t) distributions with arbitrary parameter values. We explore a variety of properties of these distributions and give an example of a potential application of the bivariate beta distribution in Bayesian analysis.
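The construction can be sketched directly: with a common denominator the two marginals share the denominator degrees of freedom, while adding an independent chi-squared to one denominator frees that marginal's degrees of freedom. The degrees-of-freedom values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
d1, d2, m, m2 = 5, 4, 12, 8
size = 1_000_000

u1 = rng.chisquare(d1, size)
u2 = rng.chisquare(d2, size)
w = rng.chisquare(m, size)        # common denominator chi-squared
w2 = rng.chisquare(m2, size)      # extra independent chi-squared

# classical bivariate F: both marginals have denominator df m
f1 = (u1 / d1) / (w / m)
# extended construction: second denominator gains w2, so F2 ~ F(d2, m + m2)
f2 = (u2 / d2) / ((w + w2) / (m + m2))
print(f1.mean(), f2.mean())   # approx m/(m-2) and (m+m2)/(m+m2-2)
```

The shared w is what makes the pair dependent while each marginal remains an exact F.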

18.
We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.

19.
20.
Results of an exhaustive study of the bias of the least-squares estimator (LSE) of the first-order autoregression coefficient α in a contaminated Gaussian model are presented. The model describes the following situation. The process is defined as Xt = αXt−1 + Yt. Until a specified time T, the Yt are iid normal N(0, 1). At the moment T we start our observations, and from then on the distribution of Yt, t ≥ T, is a Tukey mixture T(ε, σ) = (1 − ε)N(0, 1) + εN(0, σ2). The bias of the LSE as a function of α, ε, and σ2 is considered. A rather unexpected fact is revealed: given α and ε, the bias does not change monotonically with σ ('the magnitude of the contaminant'), and similarly, given α and σ, the bias does not grow monotonically with ε ('the amount of contaminants').
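The setup is straightforward to simulate. This sketch (with illustrative series length, replication count and parameter values, and with the simplification that the mixture is used for all innovations rather than only after the observation start time T) estimates the bias of the LSE of α:

```python
import numpy as np

rng = np.random.default_rng(3)

def lse_bias(alpha, eps, sigma2, n=200, reps=2000):
    """Monte Carlo estimate of E(alpha_hat - alpha) for the LSE of an AR(1)
    coefficient, with innovations from the Tukey mixture
    (1-eps)N(0,1) + eps N(0, sigma2)."""
    total = 0.0
    for _ in range(reps):
        y = np.where(rng.random(n) < eps,
                     rng.normal(0.0, np.sqrt(sigma2), n),
                     rng.standard_normal(n))
        x = np.empty(n)
        x[0] = y[0]
        for t in range(1, n):
            x[t] = alpha * x[t - 1] + y[t]
        total += (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1]) - alpha
    return total / reps

bias = lse_bias(0.5, 0.1, 9.0)
print(bias)   # small negative bias for positive alpha
```

Sweeping σ2 (or ε) over a grid with this function is how the non-monotonic behaviour described above can be examined numerically.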
