Similar Documents
A total of 20 similar documents were found.
1.
The estimation of extreme conditional quantiles is an important issue in different scientific disciplines. Up to now, the extreme value literature has focused mainly on estimation procedures based on independent and identically distributed samples. Our contribution is a two-step procedure for estimating extreme conditional quantiles. In the first step, nonextreme conditional quantiles are estimated nonparametrically using a local version of the regression quantile methodology of [Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33–50]. Next, these nonparametric quantile estimates are used as analogues of univariate order statistics in procedures for extreme quantile estimation. The performance of the method is evaluated for both heavy-tailed distributions and distributions with a finite right endpoint using a small-sample simulation study. A bootstrap procedure is developed to guide the selection of an optimal local bandwidth. Finally, the procedure is illustrated in two case studies.
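A minimal sketch of this two-step idea, not the authors' exact procedure: step 1 below uses a local-constant (kernel-weighted) quantile as a stand-in for local Koenker–Bassett regression quantiles, and step 2 applies a Hill/Weissman-type extrapolation to the resulting conditional quantile estimates. The bandwidth h, the number of pseudo order statistics k, and the simulated data are illustrative assumptions.

```python
import numpy as np

def weighted_quantile(y, w, tau):
    """Quantile of y under weights w (a weighted empirical quantile)."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    cum = np.cumsum(w) / np.sum(w)
    return y[np.searchsorted(cum, tau)]

def local_conditional_quantile(x, y, x0, tau, h):
    """Step 1: kernel-weighted conditional quantile at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return weighted_quantile(y, w, tau)

def extreme_conditional_quantile(x, y, x0, p, h, k):
    """Step 2: Weissman-type extrapolation, treating the step-1 estimates at
    levels 1 - i/n, i = 1..k, as analogues of upper order statistics."""
    n = len(y)
    levels = 1.0 - np.arange(1, k + 1) / n              # non-extreme tail levels
    q = np.array([local_conditional_quantile(x, y, x0, t, h) for t in levels])
    gamma = np.mean(np.log(q[:-1]) - np.log(q[-1]))     # Hill-type tail index
    return q[-1] * ((k / n) / p) ** gamma               # extrapolate to level 1 - p

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = (1 + x) * rng.pareto(3.0, 500)                      # heavy-tailed response
print(extreme_conditional_quantile(x, y, x0=0.5, p=0.001, h=0.1, k=50))
```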

2.
Abstract

In survival or reliability data analysis, it is often useful to estimate the quantiles of the lifetime distribution, such as the median time to failure. Different nonparametric methods can construct confidence intervals for the quantiles of the lifetime distributions, some of which are implemented in commonly used statistical software packages. We here investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some useful practical recommendations based on our results. Some numerical examples are presented to illustrate the methodologies developed.
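One widely used transformation-based construction behind objective (i) is a Kaplan–Meier estimate with Greenwood variance, a log(-log) transformation of S(t), and test inversion for the quantile. The sketch below is an assumed implementation of that generic construction (not the paper's code and not the new density-estimation approach); the simulated exponential lifetimes and censoring times are illustrative.

```python
import numpy as np
from scipy.stats import norm

def km_with_greenwood(time, event):
    """Kaplan-Meier estimate and Greenwood standard error at each event time."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    t_uniq = np.unique(time[event == 1])
    surv, se, s, gw = [], [], 1.0, 0.0
    for t in t_uniq:
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / at_risk
        gw += d / (at_risk * (at_risk - d)) if at_risk > d else 0.0
        surv.append(s)
        se.append(s * np.sqrt(gw))
    return t_uniq, np.array(surv), np.array(se)

def quantile_ci_loglog(time, event, p=0.5, level=0.95):
    """Test-inversion CI for the p-th survival quantile via g(S) = log(-log S)."""
    t, s, se = km_with_greenwood(time, event)
    z = norm.ppf(0.5 + level / 2)
    ok = (s > 0) & (s < 1)
    # standardised distance between g(S(t)) and g(1 - p), by the delta method
    zstat = (np.log(-np.log(s[ok])) - np.log(-np.log(1 - p))) / (
        se[ok] / (s[ok] * np.abs(np.log(s[ok]))))
    inside = t[ok][np.abs(zstat) <= z]
    q_hat = t[np.searchsorted(-s, -(1 - p))] if np.any(s <= 1 - p) else np.nan
    return q_hat, (inside.min(), inside.max()) if len(inside) else (np.nan, np.nan)

rng = np.random.default_rng(1)
lifetimes = rng.exponential(10.0, 200)
censor = rng.exponential(15.0, 200)
time = np.minimum(lifetimes, censor)
event = (lifetimes <= censor).astype(int)
print(quantile_ci_loglog(time, event, p=0.5))   # median and its CI
```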

3.
Abstract

In multivariate extreme value theory (MEVT), the focus is on analysis outside of the observable sampling zone, which implies that the region of interest is associated with high risk levels. This work provides tools to include directional notions in MEVT, giving the opportunity to characterize the recently introduced directional multivariate quantiles (DMQ) at high levels. An out-of-sample estimation method for these quantiles is then given. A bootstrap procedure carries out the estimation of the tuning parameter in this multivariate framework and helps with the estimation of the DMQ. Asymptotic normality of the proposed estimator is provided, and the methodology is illustrated with simulated data sets. Finally, a real-life application to a financial case is also presented.

4.
ABSTRACT

This article considers the estimation of a distribution function F_X(x) based on a random sample X_1, X_2, …, X_n when the sample is suspected to come from a close-by distribution F_0(x). The new estimators, namely the preliminary test estimator (PTE) and the Stein-type estimator (SE), are defined and compared with the “empirical distribution function” (edf) under local departure. In this case, we show that the Stein-type estimator is superior to the edf, and the PTE is superior to the edf when the true distribution is close to F_0(x). As a by-product, similar estimators are proposed for population quantiles.
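An illustrative sketch of the two ideas, not the article's exact estimators: a preliminary test estimator that keeps F_0 unless a Kolmogorov–Smirnov pretest rejects it, and a simple Stein-type shrinkage of the edf towards F_0. The shrinkage weight based on the KS statistic is an assumption made purely for illustration.

```python
import numpy as np
from scipy.stats import kstest, norm

def edf(sample, x):
    """Empirical distribution function evaluated on a grid x."""
    return np.mean(sample[:, None] <= x[None, :], axis=0)

def pte_cdf(sample, x, F0, alpha=0.05):
    """Use F0 if the pretest does not reject it, otherwise the edf."""
    pval = kstest(sample, F0).pvalue
    return F0(x) if pval > alpha else edf(sample, x)

def stein_cdf(sample, x, F0):
    """Shrink the edf towards F0; more shrinkage when F0 fits well.
    The weight below is an illustrative choice, not the article's."""
    d = kstest(sample, F0).statistic
    c = 1.0 / (1.0 + len(sample) * d ** 2)
    return c * F0(x) + (1 - c) * edf(sample, x)

rng = np.random.default_rng(2)
sample = rng.normal(0.1, 1.0, 100)        # close to the hypothesised N(0, 1)
grid = np.linspace(-3, 3, 7)
print(pte_cdf(sample, grid, norm.cdf))
print(stein_cdf(sample, grid, norm.cdf))
```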

5.
The non-central gamma distribution can be regarded as a general form of the non-central χ2 distributions, whose computations have been thoroughly investigated (Ruben, H., 1974, Non-central chi-square and gamma revisited. Communications in Statistics, 3(7), 607–633; Knüsel, L., 1986, Computation of the chi-square and Poisson distribution. SIAM Journal on Scientific and Statistical Computing, 7, 1022–1036; Voit, E.O. and Rust, P.F., 1987, Noncentral chi-square distributions computed by S-system differential equations. Proceedings of the Statistical Computing Section, ASA, pp. 118–121; Rust, P.F. and Voit, E.O., 1990, Statistical densities, cumulatives, quantiles, and power obtained by S-systems differential equations. Journal of the American Statistical Association, 85, 572–578; Chattamvelli, R., 1994, Another derivation of two algorithms for the noncentral χ2 and F distributions. Journal of Statistical Computation and Simulation, 49, 207–214; Johnson, N.J., Kotz, S. and Balakrishnan, N., 1995, Continuous Univariate Distributions, Vol. 2 (2nd edn) (New York: Wiley)). Both distribution functions are usually expressed as weighted infinite series of the central one. Ad hoc approximations to the cumulative probabilities of the non-central gamma were extended or discussed by Chattamvelli, by Knüsel and Bablok (Knüsel, L. and Bablok, B., 1996, Computation of the noncentral gamma distribution. SIAM Journal on Scientific Computing, 17, 1224–1231), and by Ruben (Ruben, H., 1974, Non-central chi-square and gamma revisited. Communications in Statistics, 3(7), 607–633). However, they did not implement and demonstrate the proposed numerical procedures. Approximations to the non-central density and quantiles are not available. In addition, an S-system formulation has not been derived. Here, approximations to cumulative probabilities, densities, and quantiles based on the method of Knüsel and Bablok are derived and implemented in R code. Furthermore, two alternative S-system forms are recast on the basis of the techniques of Savageau and Voit (Savageau, M.A. and Voit, E.O., 1987, Recasting nonlinear differential equations as S-systems: A canonical nonlinear form. Mathematical Biosciences, 87, 83–115) as well as Chen (Chen, Z.-Y., 2003, Computing the distribution of the squared sample multiple correlation coefficient with S-Systems. Communications in Statistics—Simulation and Computation, 32(3), 873–898) and Chen and Chou (Chen, Z.-Y. and Chou, Y.-C., 2000, Computing the noncentral beta distribution with S-system. Computational Statistics and Data Analysis, 33, 343–360). Statistical densities, cumulative probabilities, and quantiles can then be evaluated by a single numerical solver, power law analysis and simulation (PLAS). With the newly derived S-systems for the non-central gamma, the specialized non-central χ2 distributions are demonstrated under five cases in the same three situations studied by Rust and Voit, and the paired numerical values are almost equal. Based on these, nine cases in three similar situations are designed for demonstration and evaluation. In addition, exact values to finite significant digits are provided for comparison. Demonstrations are conducted with the R package and the PLAS solver on the same PC system. In this way, very accurate and consistent numerical results are obtained by the three methods in two groups. The three methods also perform competitively with respect to speed of computation. Numerical advantages of S-systems over the ad hoc approximations and related properties are also discussed.
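A minimal sketch of the Poisson-weighted series form used in this line of work: the non-central gamma (shape a, non-centrality lam, unit scale) written as a Poisson(lam) mixture of central gammas with shapes a + j. Using lam (rather than lam/2) for the mixing weights is one of the conventions in the literature and is an assumption here, as is the truncation of the series at `terms`; the sanity check uses the known reduction to the non-central χ2.

```python
import numpy as np
from scipy.stats import gamma, poisson, ncx2
from scipy.optimize import brentq

def nc_gamma_cdf(x, a, lam, terms=200):
    j = np.arange(terms)
    w = poisson.pmf(j, lam)                      # Poisson mixing weights
    return np.sum(w * gamma.cdf(x, a + j))       # weighted central gamma cdfs

def nc_gamma_pdf(x, a, lam, terms=200):
    j = np.arange(terms)
    w = poisson.pmf(j, lam)
    return np.sum(w * gamma.pdf(x, a + j))

def nc_gamma_quantile(p, a, lam, terms=200):
    """Invert the cdf numerically for the p-th quantile."""
    upper = gamma.ppf(0.999999, a + lam) + 10 * lam + 10
    return brentq(lambda x: nc_gamma_cdf(x, a, lam, terms) - p, 1e-12, upper)

# Sanity check against the non-central chi-square: with a = df/2 and
# lam = delta/2, the non-central gamma evaluated at x/2 matches ncx2.
print(nc_gamma_cdf(5.0 / 2, a=3 / 2, lam=2.0 / 2))   # series form
print(ncx2.cdf(5.0, 3, 2.0))                         # scipy reference (df=3, nc=2)
print(nc_gamma_quantile(0.95, a=3 / 2, lam=1.0))
```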

6.
Various exact tests for statistical inference yield powerful and accurate decision rules, provided that the corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present the relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy, on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to tabulate the corresponding critical values. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.

7.
The inverse Gaussian distribution provides a flexible model for analyzing positive, right-skewed data. The generalized variable test for equality of several inverse Gaussian means with unknown and arbitrary variances has a satisfactory Type-I error rate when the number of samples (k) is small (Tian, 2006). However, the Type-I error rate tends to be inflated as k increases. In this article, we propose a parametric bootstrap (PB) approach for this problem. Simulation results show that the proposed test performs very satisfactorily regardless of the number of samples and the sample sizes. The method is illustrated with an example.
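A generic parametric-bootstrap recipe for this problem is sketched below; the Wald-type statistic and the way the null fit is formed are illustrative assumptions, not necessarily the article's exact construction. Each group's inverse Gaussian parameters are estimated by maximum likelihood, the observed statistic is compared with statistics recomputed on samples regenerated under a common mean.

```python
import numpy as np

def ig_mle(x):
    """MLEs of (mu, lambda) for an inverse Gaussian sample."""
    mu = x.mean()
    lam = len(x) / np.sum(1.0 / x - 1.0 / mu)
    return mu, lam

def wald_stat(samples):
    """Weighted squared deviations of group means from a pooled mean,
    weighted by 1 / Var(sample mean) = n * lambda / mu^3 under the IG model."""
    est = [ig_mle(x) + (len(x),) for x in samples]
    mus, lams, ns = map(np.array, zip(*est))
    w = ns * lams / mus ** 3
    pooled = np.sum(w * mus) / np.sum(w)
    return np.sum(w * (mus - pooled) ** 2), pooled, lams, ns

def pb_test(samples, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    t_obs, pooled, lams, ns = wald_stat(samples)
    t_boot = np.empty(B)
    for b in range(B):
        # regenerate every group under the pooled (null) mean
        boot = [rng.wald(pooled, lam, n) for lam, n in zip(lams, ns)]
        t_boot[b] = wald_stat(boot)[0]
    return np.mean(t_boot >= t_obs)             # bootstrap p-value

rng = np.random.default_rng(3)
groups = [rng.wald(2.0, 4.0, 30), rng.wald(2.0, 1.5, 25), rng.wald(2.0, 8.0, 40)]
print(pb_test(groups))
```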

8.
ABSTRACT

We propose parametric inferences for quantile event times, with adjustment for covariates, for competing risks data. We develop parametric quantile inferences using parametric regression modeling of the cumulative incidence function, via both the cause-specific hazard and direct approaches. Maximum likelihood inferences are developed for estimation of the cumulative incidence function and its quantiles, and we construct parametric confidence intervals for the quantiles. Simulation studies show that the proposed methods perform well. We illustrate the methods using early-stage breast cancer data.

9.
Abstract

Several approximations of copulas have been proposed in the literature. By using empirical versions of checker-type copula approximations, we propose nonparametric estimators of the copula. Under some conditions, the proposed estimators are themselves copulas, and their main advantage is that they can easily be sampled from. One possible application is the estimation of quantiles of sums of dependent random variables from a small sample of the multivariate law together with full knowledge of the marginal laws. We show that the estimates may be improved by incorporating into the approximated copula, in a simple way, additional information such as the law of a sub-vector. Our approach is illustrated by numerical examples.
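A two-dimensional sketch of the checkerboard (checker-type) idea: an empirical checkerboard copula is built from the ranks of a small sample, sampled from, and combined with fully known marginals to estimate a quantile of the sum. The grid size m, the dependence structure of the demo data, and the N(0,1) and Exp(1) marginals are illustrative choices, and the extra sub-vector information mentioned in the abstract is not sketched.

```python
import numpy as np
from scipy.stats import norm, expon

def checkerboard_sample(u, m, size, rng):
    """Sample from the empirical checkerboard copula of pseudo-observations u."""
    n, d = u.shape
    cells = np.minimum((u * m).astype(int), m - 1)   # grid cell of each point
    idx = rng.integers(0, n, size)                   # cell chosen prop. to its count
    jitter = rng.uniform(0.0, 1.0, (size, d))        # uniform within the cell
    return np.clip((cells[idx] + jitter) / m, 1e-12, 1 - 1e-12)

rng = np.random.default_rng(4)
# small dependent sample (the multivariate law is seen only through these points)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 60)
u = (np.argsort(np.argsort(z, axis=0), axis=0) + 0.5) / len(z)   # pseudo-observations

# known marginals: N(0, 1) and Exp(1); estimate the 99% quantile of the sum
v = checkerboard_sample(u, m=6, size=100_000, rng=rng)
sums = norm.ppf(v[:, 0]) + expon.ppf(v[:, 1])
print(np.quantile(sums, 0.99))
```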

10.
ABSTRACT

This article considers the empirical Bayes estimation problem for the uniform distribution U(0, θ) with censored data. For the parameter θ, using the empirical Bayes (EB) approach, we propose an EB estimator of θ whose rate of convergence can be arbitrarily close to O(n^{-1/2}) when the historical samples are randomly censored from the right, where n is the number of historical samples. An example and some simulation results are also presented.

11.
ABSTRACT

A dual-record system (DRS) model (equivalently, a two-sample capture–recapture experiment) with time and behavioural response variation has attracted much attention, specifically in the domains of official statistics and epidemiology, as the assumption of list independence often fails. The model suffers from a parameter identifiability problem, and suitable Bayesian methodologies can be helpful. In this article, we formulate population size estimation in a DRS as a missing data problem, and two empirical Bayes approaches are proposed along with a discussion of an existing Bayes treatment. Some features of these methods and the associated posterior convergence are discussed. An extensive simulation study finds that our proposed approaches compare favourably with the existing Bayes approach for this complex model, depending on the direction of the underlying behavioural response effect. A real-data example is given to illustrate these methods.

12.
A. Ferreira, L. de Haan and L. Peng. Statistics, 2013, 47(5): 401–434
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method for doing this is to estimate a quantile well inside the sample but near the boundary and then to shift it somehow to the right place. The choice of this “anchor quantile” plays a major role in the accuracy of the method. We present a bootstrap method for the optimal choice of the sample fraction in either high quantile or endpoint estimation, which extends the earlier results of Hall and Weissman (1997) for high quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems like this one is given in a paper by Drees and Kaufmann (1998).
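A simplified, single-bootstrap stand-in for the idea of choosing the sample fraction k by bootstrap: for each candidate k, a Weissman-type high-quantile estimator is evaluated at a moderately extreme level where the empirical quantile is still a usable benchmark, and k is chosen to minimise the bootstrap MSE at that level. This is a heuristic illustration, not the paper's (or Hall and Weissman's) more refined bootstrap construction; the checking level p_check, the grid of k values, and the simulated Pareto data are assumptions.

```python
import numpy as np

def weissman_quantile(x, k, p):
    """Weissman estimator of the (1 - p) quantile from the top k order statistics."""
    xs = np.sort(x)
    n = len(x)
    tail = xs[n - k:]
    gamma = np.mean(np.log(tail) - np.log(xs[n - k - 1]))   # Hill estimator
    return xs[n - k - 1] * ((k / n) / p) ** gamma

def choose_k(x, p_check=0.02, ks=range(10, 200, 5), B=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    bench = np.quantile(x, 1 - p_check)                     # empirical benchmark
    mse = []
    for k in ks:
        est = [weissman_quantile(rng.choice(x, n, replace=True), k, p_check)
               for _ in range(B)]
        mse.append(np.mean((np.array(est) - bench) ** 2))
    return list(ks)[int(np.argmin(mse))]

rng = np.random.default_rng(5)
x = rng.pareto(2.0, 2000) + 1.0                             # heavy-tailed sample
k_star = choose_k(x)
print(k_star, weissman_quantile(x, k_star, p=1e-4))         # far-tail quantile
```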

13.
Comparing the variances of several independent samples is a classic problem and many tests have been proposed in the literature. Conover et al. [Conover, W.J., Johnson, M.E. and Johnson, M.M., 1981, A comparative study of tests for homogeneity of variances with applications to the outer continental shelf bidding data. Technometrics, 23, 351–361] and Shoemaker [Shoemaker, L.H., 1995, Tests for difference in dispersion based on quantiles. The American Statistician, 49 (2), 179–182] find that the existing tests lack power for skewed sampling distributions. To address this problem, we study the effect of an a priori symmetrization of the data on the performance of tests for homogeneity of variances. This article also updates the comprehensive comparative study of Conover et al.

14.
For J ≥ 2 independent groups, the article deals with testing the global hypothesis that all J groups have a common population median, or identical quantiles, with an emphasis on the quartiles. Classic rank-based methods are sometimes suggested for comparing medians, but it is well known that under general conditions they do not adequately address this goal. Extant methods based on the usual sample median are unsatisfactory when there are tied values, except in the special case J = 2. A variation of the percentile bootstrap used in conjunction with the Harrell–Davis quantile estimator performs well in simulations. The method is illustrated with data from the Well Elderly 2 study.
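A sketch of the two ingredients named here, for the simplest two-group case: the Harrell–Davis quantile estimator (a Beta-weighted average of order statistics) and a percentile bootstrap for the difference of group medians. The article's global test for J groups, and for several quartiles jointly, is more involved; this only illustrates the building blocks, and the lognormal demo data are assumptions.

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, q=0.5):
    """Harrell-Davis estimate of the q-th quantile."""
    n = len(x)
    xs = np.sort(x)
    i = np.arange(1, n + 1)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    w = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)   # Beta weights
    return np.sum(w * xs)

def percentile_boot_diff(x, y, q=0.5, B=2000, level=0.95, seed=0):
    """Percentile-bootstrap CI for HD_q(x) - HD_q(y)."""
    rng = np.random.default_rng(seed)
    diffs = np.array([
        harrell_davis(rng.choice(x, len(x), replace=True), q)
        - harrell_davis(rng.choice(y, len(y), replace=True), q)
        for _ in range(B)])
    lo, hi = np.quantile(diffs, [(1 - level) / 2, (1 + level) / 2])
    return harrell_davis(x, q) - harrell_davis(y, q), (lo, hi)

rng = np.random.default_rng(6)
g1 = rng.lognormal(0.0, 0.6, 40)
g2 = rng.lognormal(0.3, 0.6, 45)
print(percentile_boot_diff(g1, g2))    # a CI excluding 0 suggests a median difference
```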

15.
Importance sampling and control variates have been used as variance reduction techniques for estimating bootstrap tail quantiles and moments, respectively. We adapt each method to apply to both quantiles and moments, and combine the methods to obtain variance reductions by factors of 4 to 30 in simulation examples. We use two innovations in control variates: interpreting control variates as a re-weighting method, and implementing control variates using the saddlepoint. The combination requires only the linear saddlepoint but applies to general statistics, and produces estimates with accuracy of order n^{-1/2} B^{-1}, where n is the sample size and B is the bootstrap sample size. We also discuss two modifications to classical importance sampling: a weighted average estimate and a mixture design distribution. These modifications make importance sampling robust and allow moments to be estimated from the same bootstrap simulation used to estimate quantiles.
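A sketch of importance sampling for a bootstrap tail quantile of the sample mean, using a defensive mixture design (half uniform, half exponentially tilted resampling probabilities), in the spirit of the "mixture design distribution" mentioned above. The tilting constant c, the choice of statistic, and the data are illustrative assumptions; the control-variate and saddlepoint parts of the paper are not sketched.

```python
import numpy as np

def is_bootstrap_tail_quantile(x, alpha=0.01, c=1.0, B=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    tilt = np.exp(c * (x - x.mean()))
    p = 0.5 / n + 0.5 * tilt / tilt.sum()        # defensive mixture design
    stats = np.empty(B)
    logw = np.empty(B)
    for b in range(B):
        idx = rng.choice(n, n, replace=True, p=p)
        stats[b] = x[idx].mean()
        logw[b] = -np.sum(np.log(n * p[idx]))    # uniform-bootstrap weight / design weight
    w = np.exp(logw)
    order = np.argsort(stats)[::-1]              # largest statistic first
    tail_prob = np.cumsum(w[order]) / B          # estimated P*(T >= t)
    k = min(np.searchsorted(tail_prob, alpha), B - 1)
    return stats[order][k]                       # approximate (1 - alpha) quantile

rng = np.random.default_rng(8)
x = rng.exponential(1.0, 50)
print(is_bootstrap_tail_quantile(x, alpha=0.01))   # upper 1% bootstrap quantile
```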

16.
Abstract

In some applications, the available data suffer from several sampling problems related to loss of information. This typically happens in Survival Analysis, where models for truncation, censorship, and biasing have been proposed and widely investigated. In this work, we analyze by simulations the (finite sample) bias and variance of the nonparametric MLE under length-biasing and right-censorship, recently introduced by de Uña-Álvarez [de Uña-Álvarez, J. (2002a). Product-limit estimation for length-biased censored data. Test 11:109–125]. Comparison with the time-honoured Kaplan–Meier estimate for censored data is included.

17.
Composite quantile regression models have been shown to be effective techniques for improving prediction accuracy [H. Zou and M. Yuan, Composite quantile regression and the oracle model selection theory, Ann. Statist. 36 (2008), pp. 1108–1126; J. Bradic, J. Fan, and W. Wang, Penalized composite quasi-likelihood for ultrahigh dimensional variable selection, J. R. Stat. Soc. Ser. B 73 (2011), pp. 325–349; Z. Zhao and Z. Xiao, Efficient regressions via optimally combining quantile information, Econometric Theory 30(06) (2014), pp. 1272–1314]. This paper studies composite Tobit quantile regression (TQReg) from a Bayesian perspective. A simple and efficient MCMC-based computation method is derived for posterior inference, using the representation of the skewed Laplace distribution as a mixture of an exponential and a scaled normal distribution. The approach is illustrated via simulation studies and a real data set. Results show that combining information across different quantiles provides a useful route to efficient statistical estimation. This is the first work to discuss composite TQReg from a Bayesian perspective.

18.
For estimation of time-varying coefficient longitudinal models, the widely used local least-squares (LS) or covariance-weighted local LS smoothing uses information from the local sample average. Motivated by the fact that a combination of multiple quantiles provides a more complete picture of the distribution, we investigate quantile regression-based methods to improve efficiency by optimally combining information across quantiles. Under the working independence scenario, the asymptotic variance of the proposed estimator approaches the Cramér–Rao lower bound. In the presence of dependence among within-subject measurements, we adopt a prewhitening technique to transform regression errors into independent innovations and show that the prewhitened optimally weighted quantile average estimator asymptotically achieves the Cramér–Rao bound for the independent innovations. Fully data-driven bandwidth selection and optimal weights estimation are implemented through a two-step procedure. Monte Carlo studies show that the proposed method delivers more robust and superior overall performance than that of the existing methods.

19.
For noninformative nonparametric estimation of finite population quantiles under simple random sampling, estimation based on the Polya posterior is similar to estimation based on the Bayesian approach developed by Ericson (J. Roy. Statist. Soc. Ser. B 31 (1969) 195) in that the Polya posterior distribution is the limit of Ericson's posterior distributions as the weight placed on the prior distribution diminishes. Furthermore, Polya posterior quantile estimates can be shown to be admissible under certain conditions. We demonstrate the admissibility of the sample median as an estimate of the population median under such a set of conditions. As with Ericson's Bayesian approach, Polya posterior-based interval estimates for population quantiles are asymptotically equivalent to the interval estimates obtained from standard frequentist approaches. In addition, for small to moderate sized populations, Polya posterior-based interval estimates for quantiles of a continuous characteristic of interest tend to agree with the standard frequentist interval estimates.
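A sketch of simulating from the Polya posterior for a finite population quantile: the n observed units seed a Polya urn, the remaining N - n units are filled in by repeated draws (each drawn value is returned to the urn together with an extra copy), and the quantile of each completed population is one posterior draw. The population size N and the gamma-distributed demo data are illustrative assumptions.

```python
import numpy as np

def polya_completion(sample, N, rng):
    """Fill a population of size N from the sample via a Polya urn."""
    urn = list(sample)
    for _ in range(N - len(sample)):
        urn.append(urn[rng.integers(0, len(urn))])   # draw and add an extra copy
    return np.array(urn)

def polya_posterior_quantile(sample, N, q=0.5, draws=1000, seed=0):
    """Posterior draws of the population q-th quantile under the Polya posterior."""
    rng = np.random.default_rng(seed)
    return np.array([np.quantile(polya_completion(sample, N, rng), q)
                     for _ in range(draws)])

rng = np.random.default_rng(7)
sample = rng.gamma(2.0, 3.0, 40)                      # SRS of n = 40 from N = 400
post = polya_posterior_quantile(sample, N=400, q=0.5)
print(np.median(post), np.quantile(post, [0.025, 0.975]))   # point and interval estimate
```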

20.
The demand for reliable statistics in subpopulations, when only reduced sample sizes are available, has promoted the development of small area estimation methods. In particular, an approach that is now widely used is based on the seminal work by Battese et al. [An error-components model for prediction of county crop areas using survey and satellite data, J. Am. Statist. Assoc. 83 (1988), pp. 28–36] that uses linear mixed models (MM). We investigate alternatives when a linear MM does not hold because, on the one hand, linearity may not be assumed and/or, on the other, normality of the random effects may not be assumed. In particular, Opsomer et al. [Nonparametric small area estimation using penalized spline regression, J. R. Statist. Soc. Ser. B 70 (2008), pp. 265–283] propose an estimator that extends the linear MM approach to the case in which a linear relationship may not be assumed, using penalized splines regression. From a very different perspective, Chambers and Tzavidis [M-quantile models for small area estimation, Biometrika 93 (2006), pp. 255–268] have recently proposed an approach for small-area estimation that is based on M-quantile (MQ) regression. This allows for models that are robust to outliers and to distributional assumptions on the errors and the area effects. However, when the functional form of the relationship between the qth MQ and the covariates is not linear, it can lead to biased estimates of the small area parameters. Pratesi et al. [Semiparametric M-quantile regression for estimating the proportion of acidic lakes in 8-digit HUCs of the Northeastern US, Environmetrics 19(7) (2008), pp. 687–701] apply an extended version of this approach to the estimation of the small area distribution function, using a non-parametric specification of the conditional MQ of the response variable given the covariates [M. Pratesi, M.G. Ranalli, and N. Salvati, Nonparametric m-quantile regression using penalized splines, J. Nonparametric Stat. 21 (2009), pp. 287–304]. We derive the small area estimator of the mean under this model, together with its mean-squared error estimator, and compare its performance with that of the other estimators via simulations on both real and simulated data.
