Similar Literature
20 similar documents found (search time: 609 ms)
1.
Consider the problem of finding an upper 1 − α confidence limit for a scalar parameter of interest φ in the presence of a nuisance parameter vector θ when the data are discrete. Approximate upper limits T may be found by approximating the relevant unknown finite-sample distribution by its limiting distribution. Such approximate upper limits typically have coverage probabilities below, sometimes far below, 1 − α for certain values of (θ, φ). This paper remedies that defect by shifting the possible values t of T so that they are as small as possible, subject both to the minimum coverage probability being greater than or equal to 1 − α and to the shifted values being in the same order as the unshifted t's. The resulting upper limits are called 'tight'. Under very weak and easily checked regularity conditions, a formula is developed for the tight upper limits.
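As a minimal illustration of the defect described above, this sketch takes a binomial model (the abstract does not specify one; the model, sample size and grid here are purely illustrative) and computes the exact coverage of the usual normal-approximation upper limit, which dips far below the nominal level for small parameter values.

```python
import math

def wald_upper(x, n, alpha=0.05):
    # Approximate 1 - alpha upper limit based on the limiting normal distribution.
    phat = x / n
    z = 1.6448536269514722  # standard normal 0.95 quantile
    return min(1.0, phat + z * math.sqrt(phat * (1 - phat) / n))

def coverage(p, n, alpha=0.05):
    # Exact coverage probability P_p(p <= T) of the approximate upper limit T.
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(n + 1) if wald_upper(x, n, alpha) >= p)

# Minimum coverage over a grid of p values: far below the nominal 0.95.
n = 30
min_cov = min(coverage(p / 200, n) for p in range(1, 200))
```

The 'tight' limits of the paper shift the values t upward just enough to pull this minimum coverage back to at least 1 − α.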

2.
Under a two-parameter exponential distribution, this study constructs the generalized lower confidence limit of the lifetime performance index C_L based on type-II right-censored data. The confidence limit has to be obtained numerically; however, the required computations are simple and straightforward. Confidence limits of C_L computed under the generalized paradigm are compared with those computed under the classical paradigm, citing an illustrative example with real data and two examples with simulated data, to demonstrate the merits and advantages of the proposed generalized-variable method over the classical method.
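The generalized-limit idea can be sketched in a deliberately simplified setting: a one-parameter exponential lifetime with complete (uncensored) data, where C_L = 1 − L/σ and the exact pivot 2T/σ follows a chi-square distribution with 2n degrees of freedom (T the total of the observations). This is an assumption-laden analogue of the paper's two-parameter, type-II-censored construction, not a reproduction of it; the function name and constants are illustrative.

```python
import random

def gcl_lower(x, L, alpha=0.05, B=20000, seed=1):
    # Monte Carlo generalized lower confidence limit for C_L = 1 - L/sigma.
    # Given T = sum(x), sigma is distributed like 2*T/W with W ~ chi2_{2n},
    # and chi2_{2n} is Gamma(shape=n, scale=2).
    rng = random.Random(seed)
    n, T = len(x), sum(x)
    draws = sorted(1.0 - L * rng.gammavariate(n, 2.0) / (2.0 * T) for _ in range(B))
    return draws[int(alpha * B)]  # lower alpha-quantile of the generalized pivot

# Simulated sample: mean lifetime 10, lower specification limit L = 1.
rng = random.Random(7)
sample = [rng.expovariate(0.1) for _ in range(30)]
lo = gcl_lower(sample, L=1.0)
```

Because C_L is monotone in σ here, the generalized limit simply propagates the pivot's quantile through the index; the censored two-parameter case requires the joint pivots handled in the paper.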

3.
Linear functions of order statistics ("L-estimates") of the form T_n = n^{-1} Σ_{i=1}^{n} J(i/n) X_{(i)} under jackknifing are investigated. This paper proves that, with suitable conditions on the weight function J, the jackknifed version of the L-estimate T_n has the same limit distribution as T_n. It is also shown that the jackknife estimate of the asymptotic variance of n^{1/2} T_n is consistent. Furthermore, the Berry–Esseen rate associated with asymptotic normality and a law of the iterated logarithm for a class of jackknifed L-estimates are characterized.
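A small sketch of the jackknife applied to one familiar L-estimate, the trimmed mean; the trimming fraction and data are illustrative, and the general weight-function conditions of the abstract are not reproduced here.

```python
def trimmed_mean(xs, trim=0.1):
    # L-estimate: average of the order statistics after trimming each tail.
    xs = sorted(xs)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k]
    return sum(core) / len(core)

def jackknife(stat, xs):
    # Leave-one-out pseudo-values: their mean is the jackknifed estimate and
    # their spread gives the jackknife estimate of the estimator's variance.
    n = len(xs)
    theta = stat(xs)
    pseudo = [n * theta - (n - 1) * stat(xs[:i] + xs[i + 1:]) for i in range(n)]
    est = sum(pseudo) / n
    var = sum((p - est) ** 2 for p in pseudo) / (n * (n - 1))
    return est, var  # n * var estimates the asymptotic variance of n^{1/2} T_n

data = list(range(1, 21))
est, var = jackknife(trimmed_mean, data)
```

For this symmetric sample the jackknifed estimate coincides with the trimmed mean itself, as symmetry dictates.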

4.
The bootstrap variance estimate is widely used in semiparametric inferences. However, its theoretical validity is a well-known open problem. In this paper, we provide a first theoretical study on the bootstrap moment estimates in semiparametric models. Specifically, we establish the bootstrap moment consistency of the Euclidean parameter, which immediately implies the consistency of the t-type bootstrap confidence set. It is worth pointing out that the only additional cost to achieve the bootstrap moment consistency, in contrast with the distribution consistency, is to strengthen the L_1 maximal inequality condition required in the latter to the L_p maximal inequality condition for p ≥ 1. The general L_p multiplier inequality developed in this paper is also of independent interest. These general conclusions hold for bootstrap methods with exchangeable bootstrap weights, for example, non-parametric bootstrap and Bayesian bootstrap. Our general theory is illustrated in the celebrated Cox regression model.

5.
We propose a non-linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. This estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation, we derive an information criterion based on the idea of universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1-penalty is non-differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators on a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features, and behaves well with ties.

6.
In 1957, R.J. Buehler gave a method of constructing honest upper confidence limits for a parameter that are as small as possible subject to a pre‐specified ordering restriction. In reliability theory, these ‘Buehler bounds’ play a central role in setting upper confidence limits for failure probabilities. Despite their stated strong optimality property, Buehler bounds remain virtually unknown to the wider statistical audience. This paper has two purposes. First, it points out that Buehler's construction is not well defined in general. However, a slightly modified version of the Buehler construction is minimal in a slightly weaker, but still compelling, sense. A proof is presented of the optimality of this modified Buehler construction under minimal regularity conditions. Second, the paper demonstrates that Buehler bounds can be expressed as the supremum of Buehler bounds conditional on any nuisance parameters, under very weak assumptions. This result is then used to demonstrate that Buehler bounds reduce to a trivial construction for the location‐scale model. This places important practical limits on the application of Buehler bounds and explains why they are not as well known as they deserve to be.
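For the reliability setting mentioned above, here is a minimal sketch of a Buehler-type upper limit for a binomial failure probability, taking the observed failure count itself as the ordering statistic (in this nuisance-free toy case the construction reduces to the familiar Clopper–Pearson upper limit; the function names are illustrative).

```python
import math

def binom_cdf(x, n, p):
    # P_p(X <= x) for X ~ Binomial(n, p).
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(x + 1))

def buehler_upper(x, n, alpha=0.05):
    # Largest p with P_p(X <= x) >= alpha, found by bisection: the smallest
    # honest 1 - alpha upper limit that is nondecreasing in x.
    if x == n:
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect to ~2^-60 precision
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) >= alpha:
            lo = mid
        else:
            hi = mid
    return lo
```

With zero observed failures in n trials the bound has the closed form 1 − alpha^(1/n), a useful sanity check on the bisection.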

7.
One important property of any drug product is its stability over time. Drug stability studies are routinely carried out in the pharmaceutical industry in order to measure the degradation of an active pharmaceutical ingredient of a drug product. One important study objective is to estimate the shelf-life of the drug; the estimated shelf-life is required by the US Food and Drug Administration to be printed on the package label of the drug. This involves a suitable definition of the true shelf-life and the construction of an appropriate estimate of it. In this paper, the true shelf-life T_β is defined as the time point at which 100β% of all the individual dosage units (e.g. tablets) of the drug have active ingredient content no less than the lowest acceptable limit L, where β and L are prespecified constants. The value of T_β depends on the parameters of the assumed degradation model of the active ingredient content and so is unknown. A lower confidence bound T̂_β for T_β is then provided and used as the estimated shelf-life of the drug.
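To make the definition of T_β concrete, suppose, purely as an illustration (the abstract does not state the degradation model), that unit content at time t is normally distributed as N(a + bt, σ²) with known downward slope b < 0. Then exactly 100β% of units exceed L when a + bT_β = L + σz_β, which gives T_β in closed form.

```python
def shelf_life(a, b, sigma, L, beta):
    # Time at which a fraction beta of units still have content >= L, under the
    # assumed model: content at time t ~ N(a + b*t, sigma^2) with slope b < 0.
    z = {0.90: 1.2815515655, 0.95: 1.6448536270}[beta]  # standard normal quantiles
    return (L + sigma * z - a) / b

# Example: initial content 100%, losing 2% per year, sigma = 1%, limit L = 90%.
t_beta = shelf_life(a=100.0, b=-2.0, sigma=1.0, L=90.0, beta=0.90)
```

In practice (a, b, σ) are estimated from stability data, so the printed shelf-life is a lower confidence bound T̂_β rather than this plug-in value.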

8.
Consider a linear regression model with unknown regression parameter vector β_0 and independent errors of unknown distribution. Block the observations into q groups whose independent variables have a common value, and measure the homogeneity of the blocks of residuals by a Cramér–von Mises q-sample statistic T_q(β). This statistic is designed so that its expected value, as a function of the chosen regression parameter β, has a minimum value of zero precisely at the true value β_0. The minimizer β̂ of T_q(β) over all β is shown to be a consistent estimate of β_0. It is also shown that the bootstrap distribution of T_q(β̂) can be used to perform a lack-of-fit test of the regression model and to construct a confidence region for β_0.

9.
10.
For a confidence interval (L(X), U(X)) of a parameter θ in one-parameter discrete distributions, the coverage probability is a variable function of θ. The confidence coefficient is the infimum of the coverage probabilities, inf_θ P_θ(θ ∈ (L(X), U(X))). Since we do not know at which point in the parameter space the infimum coverage probability occurs, the exact confidence coefficients are unknown. Besides confidence coefficients, evaluation of a confidence interval can be based on the average coverage probability. Usually, the exact average probability is also unknown, and it has been approximated by taking the mean of the coverage probabilities at some randomly chosen points in the parameter space. In this article, methodologies for computing the exact average coverage probabilities as well as the exact confidence coefficients of confidence intervals for one-parameter discrete distributions are proposed. With these methodologies, both exact values can be derived.
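A sketch of the underlying computation for a binomial parameter with the textbook Wald interval (chosen here only for illustration): between endpoint discontinuities the coverage is a polynomial in p, so scanning just inside every interval endpoint, plus a grid, closely brackets the confidence coefficient. The paper's methodology makes this computation exact rather than grid-based.

```python
import math

def wald_ci(x, n, z=1.959963985):
    # Two-sided approximate 95% interval, truncated to [0, 1].
    ph = x / n
    half = z * math.sqrt(ph * (1 - ph) / n)
    return max(0.0, ph - half), min(1.0, ph + half)

def coverage(p, n):
    # Exact coverage probability of the interval at a fixed p.
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(n + 1) if wald_ci(x, n)[0] <= p <= wald_ci(x, n)[1])

n, eps = 25, 1e-9
candidates = {i / 1000 for i in range(1, 1000)}  # coarse grid ...
for x in range(n + 1):                           # ... plus endpoint limits
    for e in wald_ci(x, n):
        candidates.update(c for c in (e - eps, e + eps) if 0 < c < 1)
coeff = min(coverage(p, n) for p in candidates)
```

For the Wald interval this approximate confidence coefficient is far below the nominal 0.95, which is exactly the kind of fact the exact methodology is designed to certify.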

11.
Consider estimation of a unit vector parameter α in two classes of distributions. In the first, α is a direction. In the second, α is an axis, so that −α and α are equivalent: the aim is to obtain the projector αα^T. In each case the paper uses first principles to define measures of the divergence of such estimators and derives lower bounds for them. These bounds are computed explicitly for the Fisher–von Mises and Scheidegger–Watson densities on the q-dimensional sphere Ω_q. In the latter case, the tightness of the bound is established by simulations.

12.
A doubly censoring scheme occurs when the lifetimes T being measured, from a well-known time origin, are exactly observed within a window [L, R] of observational time and are otherwise censored either from above (right-censored observations) or below (left-censored observations). Sample data consist of the pairs (U, δ), where U = min{R, max{T, L}} and δ indicates whether T is exactly observed (δ = 0), right-censored (δ = 1) or left-censored (δ = −1). We are interested in the estimation of the marginal behaviour of the three random variables T, L and R based on the observed pairs (U, δ). We propose new nonparametric simultaneous marginal estimators Ŝ_T, Ŝ_L and Ŝ_R for the survival functions of T, L and R, respectively, by means of an inverse-probability-of-censoring approach. The proposed estimators Ŝ_T, Ŝ_L and Ŝ_R are not computationally intensive, generalize the empirical survival estimator and reduce to the Kaplan–Meier estimator in the absence of left-censored data. Furthermore, Ŝ_T is equivalent to a self-consistent estimator, is uniformly strongly consistent and asymptotically normal. The method is illustrated with data from a cohort of drug users recruited in a detoxification program in Badalona (Spain). For these data we estimate the survival function for the elapsed time from starting IV drugs to AIDS diagnosis, as well as the potential follow-up time. A simulation study is discussed to assess the performance of the three survival estimators for moderate sample sizes and different censoring levels.

13.
Let T_{1:n} ≤ T_{2:n} ≤ ⋯ ≤ T_{n:n} be the ordered lifetimes of the components of a parallel system. In this article, the α-quantile past lifetime from the failure of the component with lifetime T_{r:n}, given that the system has failed at or before time t, is introduced. Some properties of this measure are then studied.

14.
Two consistent nonexact-confidence-interval estimation methods, both derived from the consistency-equivalence theorem in Plante (1991), are suggested for estimation of problematic parametric functions with no consistent exact solution and for which standard optimal confidence procedures are inadequate or even absurd, i.e., can provide confidence statements with a 95% empty or all-inclusive confidence set. A belt C(·) from a consistent nonexact-belt family, used with two confidence coefficients (γ⁻ = inf_θ P_θ[θ ∈ C(X)] and γ⁺ = sup_θ P_θ[θ ∈ C(X)]), is shown to provide a consistent nonexact-belt solution for estimating μ_2 − μ_1 in the Behrens–Fisher problem. A rule for consistent behaviour enables any confidence belt to be used consistently by providing each sample point with best upper and lower confidence levels [δ⁺(x) ≥ γ⁺, δ⁻(x) ≤ γ⁻], which give least-conservative consistent confidence statements ranging from practically exact through informative to noninformative. The rule also provides a consistency correction L(x) = δ⁺(x) − δ⁻(x), enabling alternative confidence solutions to be compared on grounds of adequacy; this is demonstrated by comparing consistent conservative sample-point-wise solutions with inconsistent standard solutions for estimating μ_2/μ_1 (Creasy–Fieller–Neyman problem) and √(μ_1² + μ_2²), a distance-estimation problem closely related to Stein's 1959 example.

15.
Nonparametric regression techniques such as spline smoothing and local fitting depend implicitly on a parametric model. For instance, the cubic smoothing spline estimate of a regression function μ based on observations (t_i, Y_i) is the minimizer of Σ{Y_i − μ(t_i)}² + λ∫(μ″)². Since ∫(μ″)² is zero when μ is a line, the cubic smoothing spline estimate favors the parametric model μ(t) = α_0 + α_1 t. Here the authors consider replacing ∫(μ″)² with the more general expression ∫(Lμ)², where L is a linear differential operator with possibly nonconstant coefficients. The resulting estimate of μ performs well, particularly if Lμ is small. They present an O(n) algorithm for the computation of the estimate, applicable to a wide class of operators L. They also suggest a method for the estimation of L. They study their estimates via simulation and apply them to several data sets.
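The role of the penalty can be seen in a discretized sketch (equally spaced t_i; illustrative only, and not the authors' O(n) algorithm): with D the second-difference matrix, a discrete stand-in for L = d²/dt², the minimizer of ‖y − μ‖² + λ‖Dμ‖² has a closed form and leaves straight lines untouched, exactly the "favored parametric model" behaviour described above. Swapping in a different D changes which model is favored.

```python
import numpy as np

def penalized_smoother(y, lam):
    # Minimize ||y - mu||^2 + lam * ||D mu||^2 for the second-difference D;
    # closed form: mu = (I + lam * D'D)^{-1} y.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.linspace(0.0, 1.0, 50)
line = 2.0 + 3.0 * t                     # lies in the null space of D
smoothed = penalized_smoother(line, lam=100.0)  # reproduced exactly
```

Noisy data scattered around a line are pulled toward it as λ grows, since the penalty vanishes only on (discrete) linear trends.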

16.
We consider the problem of testing parametric assumptions in an inverse regression model with a convolution‐type operator. An L 2 ‐type goodness‐of‐fit test is proposed which compares the distance between a parametric and a non‐parametric estimate of the regression function. Asymptotic normality of the corresponding test statistic is shown under the null hypothesis and under a general non‐parametric alternative with different rates of convergence in both cases. The feasibility of the proposed test is demonstrated by means of a small simulation study. In particular, the power of the test against certain types of alternative is investigated. Finally, an empirical example is provided, in which the proposed methods are applied to the determination of the shape of the luminosity profile of the elliptical galaxy NGC 5017.

17.
One standard summary of a clinical trial is a confidence limit for the effect of the treatment. Unfortunately, standard approximate limits may have poor frequentist properties, even for quite large sample sizes. It has been known since Buehler (1957) that an imperfect confidence limit can be adjusted to have exact coverage. These "tight" limits are the gold-standard frequentist confidence limit. Computing tight limits requires exact calculation of certain tail probabilities and optimisation of potentially erratic functions of the nuisance parameter. Naive implementation is both computationally unreliable and highly burdensome, which perhaps explains why they are not in common use. For clinical trials, however, where the data and parameter have dimension two, the difficulties can be fully surmounted. This paper brings together several results in the area and applies them to simple two-dimensional problems. It is shown how to reduce the computational burden by an order of magnitude. Difficulties with optimisation reliability are mitigated by applying two different computational strategies, which tend to break down under different conditions, and taking the less stringent of the two computed limits. This paper specifically develops limits for the relative risk in a clinical trial, but it should be clear to the reader that the method extends to arbitrary measures of treatment effect without essential modification.

18.
We propose the L1 distance between the distribution of a binned data sample and a probability distribution from which it is hypothetically drawn as a statistic for testing agreement between the data and a model. We study the distribution of this distance for N-element samples drawn from k bins of equal probability and derive asymptotic formulae for the mean and dispersion of L1 in the large-N limit. We argue that the L1 distance is asymptotically normally distributed, with the mean and dispersion being accurately reproduced by asymptotic formulae even for moderately large values of N and k.
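The statistic itself is straightforward to compute and simulate. This sketch (k equiprobable bins; function names and sample sizes are illustrative) checks the null behaviour empirically rather than through the asymptotic formulae derived in the paper.

```python
import random

def l1_distance(counts):
    # L1 distance between the empirical bin frequencies and the uniform model.
    N, k = sum(counts), len(counts)
    return sum(abs(c / N - 1 / k) for c in counts)

def simulate_null(N, k, B, seed=0):
    # Draw B samples of N points uniformly over k bins and record the statistic.
    rng = random.Random(seed)
    out = []
    for _ in range(B):
        counts = [0] * k
        for _ in range(N):
            counts[rng.randrange(k)] += 1
        out.append(l1_distance(counts))
    return out

vals = simulate_null(N=500, k=10, B=2000)
mean_l1 = sum(vals) / len(vals)
```

A histogram of `vals` is close to bell-shaped even at these moderate N and k, consistent with the asymptotic normality the abstract describes.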

19.
A general rate estimation method based on the in‐sample evolution of appropriately chosen diverging/converging statistics has recently been proposed by D.N. Politis [C. R. Acad. Sci. Paris, Ser. I, vol. 335, pp. 279–282, 2002] and T. McElroy & D.N. Politis [Ann. Statist., vol. 35, pp. 1827–1848, 2007]. In this paper, we show how a modification of the original estimators achieves a competitive rate of convergence. The modified estimators require the choice of a tuning parameter; an optimal such choice is generally a non‐trivial problem in practice. Some discussion to that effect is given, as well as a small simulation study in a heavy‐tailed setting.

20.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = a^T β, where a is specified. When, as a first step, a data-based variable selection procedure (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables, each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values at which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
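A minimal Monte Carlo sketch of the undercoverage phenomenon, in a deliberately tiny setting rather than the paper's 2^20-model search or its conditioning-based variance reduction: two correlated regressors, known error variance, and a single t-test pretest deciding whether the second regressor stays in the model. All constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, z = 30, 1.959963985
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.81) * rng.standard_normal(n)  # correlated design
X = np.column_stack([x1, x2])
V = np.linalg.inv(X.T @ X)             # OLS covariance (error sd sigma = 1)
se1_full, se2 = np.sqrt(V[0, 0]), np.sqrt(V[1, 1])
se1_sub = 1.0 / np.sqrt(x1 @ x1)       # SE of beta1 in the submodel without x2

b1, b2 = 1.0, 1.5 * se2                # b2 sized so the pretest often drops x2
hits_naive = hits_full = 0
B = 2000
for _ in range(B):
    y = b1 * x1 + b2 * x2 + rng.standard_normal(n)
    bhat = V @ (X.T @ y)               # full-model OLS
    hits_full += abs(bhat[0] - b1) <= z * se1_full
    if abs(bhat[1]) / se2 > z:         # keep x2: CI from the full model
        covered = abs(bhat[0] - b1) <= z * se1_full
    else:                              # drop x2: naive CI from the submodel
        b1_sub = (x1 @ y) / (x1 @ x1)
        covered = abs(b1_sub - b1) <= z * se1_sub
    hits_naive += covered
cov_full, cov_naive = hits_full / B, hits_naive / B
```

The always-full-model interval covers at close to its nominal 0.95, while the naive post-selection interval falls well short at this parameter value, mirroring (in miniature) the 0.79 and 0.70 figures reported for the real data.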


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号