Similar Articles
A total of 20 similar articles were found (search time: 31 ms)
1.
A. Ferreira, L. de Haan & L. Peng. Statistics, 2013, 47(5): 401–434
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method for doing this is to estimate a quantile well inside the sample but near the boundary, and then to shift it to the right place. The choice of this “anchor quantile” plays a major role in the accuracy of the method. We present a bootstrap method to achieve the optimal choice of sample fraction in either high-quantile or endpoint estimation, extending earlier results by Hall and Weissman (1997) for high-quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems like this one is given in a paper by Drees and Kaufmann (1998).
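The anchor-and-shift idea can be made concrete with a minimal sketch (not the paper's bootstrap procedure): a Hill estimate of the tail index from the top k order statistics, followed by Weissman-style extrapolation from the anchor quantile. The function name and the fixed choice of k are illustrative assumptions; choosing the sample fraction k optimally is precisely what the bootstrap method above addresses.

```python
import math
import random

def hill_weissman_quantile(sample, k, p):
    """Estimate the (1 - p)-quantile by anchoring at the (n - k)-th order
    statistic and extrapolating with the Hill tail-index estimate."""
    x = sorted(sample)
    n = len(x)
    anchor = x[n - k - 1]                      # anchor quantile inside the sample
    # Hill estimator of the tail index from the top k order statistics
    gamma = sum(math.log(x[n - i - 1]) - math.log(anchor) for i in range(k)) / k
    # Weissman extrapolation from the anchor to the extreme quantile
    return anchor * (k / (n * p)) ** gamma

random.seed(0)
# Pareto(alpha = 2) sample: true tail index gamma = 0.5
data = [random.paretovariate(2.0) for _ in range(5000)]
q = hill_weissman_quantile(data, k=200, p=0.001)   # quantile outside the sample
true_q = (1.0 / 0.001) ** 0.5                      # exact quantile for Pareto(2)
```

The estimate is sensitive to k, which is why a data-driven choice of the sample fraction matters.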

2.
ABSTRACT

In recent years, applications of the Support Vector Machine (SVM) to classification and regression problems have been increasing, owing to its high performance and its ability to transform non-linear relationships among variables into linear form by employing the kernel idea (kernel function). In this work, we develop a semi-parametric approach that fits single-index models to deal with high-dimensional problems. To achieve this goal, we use support vector regression (SVR) to estimate the unknown nonparametric link function, while the single index is determined by the semi-parametric least squares method (Ichimura 1993). This development enhances the ability of SVR to solve high-dimensional problems. We design three simulation examples with high-dimensional problems (linear and nonlinear). The simulations demonstrate the superior performance of the proposed method over the standard SVR method. This is further illustrated by an application to real data.

3.
In recent years, many articles have been written about Bayesian model selection. In this article, a different and easier method is proposed and analyzed. The key idea is based on the well-known property that, under the true model, the cumulative distribution function evaluated at the observations is uniformly distributed over the interval (0, 1). The method is first introduced for the continuous case and then for the discrete case by smoothing the cumulative distribution function. Some asymptotic properties of the method are obtained by developing an alternative to Helly's theorems. Finally, the performance of the method is evaluated by simulation, showing good behavior.
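A minimal sketch of the underlying property for the continuous case (illustrative, not the article's full selection procedure): probability integral transform values F(X) are compared with the uniform distribution via the Kolmogorov–Smirnov distance, under a correct and a misspecified model.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_uniform(u):
    """Kolmogorov-Smirnov distance between PIT values and Uniform(0, 1)."""
    u = sorted(u)
    n = len(u)
    return max(max((i + 1) / n - u[i], u[i] - i / n) for i in range(n))

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

# Under the true model N(0, 1) the PIT values look uniform (small distance)
d_true = ks_uniform([normal_cdf(x, 0.0, 1.0) for x in data])
# Under a misspecified model N(1, 1) the PIT values depart from uniformity
d_wrong = ks_uniform([normal_cdf(x, 1.0, 1.0) for x in data])
```

A model-selection rule would favor the candidate whose PIT values are closest to uniform.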

4.
Testing reliability at a nominal stress level may require extensive test time. Estimates of reliability parameters can be obtained faster through step-stress accelerated life tests (ALT). Usually a transfer functional defined within a given class of parametric functions is required, but Bagdonavičius and Nikulin showed that ALT is still possible without any assumption about this functional. When both the shape and scale parameters of the lifetime distribution change with the stress level, they suggested an ALT method using a model called CHanging Shape and Scale (CHSS). They estimated the lifetime parameters at the nominal stress by maximum likelihood estimation (MLE). However, this method usually requires an initialization of the lifetime parameters, which may be difficult when no similar product has been tested before. This paper addresses this issue by using an iterated least squares estimation (LSE) method. It enables one to initialize the optimization required to carry out the MLE, and it gives estimates that can sometimes be better than those given by MLE.
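The initialization idea can be illustrated with a hypothetical Weibull example (not the CHSS model itself): a probability-plot least-squares fit provides starting values for shape and scale that could then seed an MLE optimization. The median-rank formula and function names are assumptions for illustration.

```python
import math
import random

def weibull_lse(sample):
    """Least-squares (probability-plot) estimates of Weibull shape and scale:
    regress log(-log(1 - F_i)) on log(t_i) using median ranks."""
    t = sorted(sample)
    n = len(t)
    xs = [math.log(v) for v in t]
    # Bernard's median-rank approximation for the plotting positions
    ys = [math.log(-math.log(1.0 - (i + 1 - 0.3) / (n + 0.4))) for i in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    shape = sxy / sxx                        # slope of the probability plot
    scale = math.exp(mx - my / shape)        # back out the scale parameter
    return shape, scale

random.seed(2)
data = [random.weibullvariate(2.0, 1.5) for _ in range(3000)]  # scale 2, shape 1.5
shape0, scale0 = weibull_lse(data)   # starting values for a subsequent MLE
```

These LSE values need no prior knowledge of the product, which is exactly the situation the paper targets.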

5.
Since its introduction by Owen (1988, 1990), the empirical likelihood method has been extensively investigated and widely used to construct confidence regions and to test hypotheses. For a large class of statistics obtained by solving estimating equations, the empirical likelihood function can be formulated from those estimating equations, as proposed by Qin and Lawless (1994). If only a small subset of the parameters is of interest, a profile empirical likelihood method must be employed to construct confidence regions, which can be computationally costly. In this article the authors propose a jackknife empirical likelihood method to overcome this computational burden. The proposed method is easy to implement and works well in practice. The Canadian Journal of Statistics 39: 370–384; 2011 © 2011 Statistical Society of Canada
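A minimal one-dimensional sketch of the jackknife empirical likelihood idea (illustrative, not the authors' general formulation): jackknife pseudo-values of a statistic are treated as approximately i.i.d., and standard empirical likelihood for their mean is evaluated by solving for the Lagrange multiplier with bisection.

```python
import math
import random

def jackknife_pseudovalues(data, stat):
    """Jackknife pseudo-values: n * T(all) - (n - 1) * T(leave-one-out)."""
    n = len(data)
    full = stat(data)
    return [n * full - (n - 1) * stat(data[:i] + data[i + 1:]) for i in range(n)]

def el_log_ratio(z, mu):
    """-2 log empirical likelihood ratio for the mean of z at the value mu."""
    d = [zi - mu for zi in z]
    if max(d) <= 0 or min(d) >= 0:
        return float("inf")                       # mu outside the convex hull
    eps = 1e-9
    lo, hi = -1.0 / max(d) + eps, -1.0 / min(d) - eps
    g = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    for _ in range(100):                          # g is decreasing in lambda
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

random.seed(10)
sample = [random.gauss(0.0, 1.0) for _ in range(200)]
second_moment = lambda d: sum(v * v for v in d) / len(d)
z = jackknife_pseudovalues(sample, second_moment)
w1 = el_log_ratio(z, 1.0)    # at the true value E[X^2] = 1: roughly chi-square(1)
w2 = el_log_ratio(z, 2.0)    # at a wrong value: much larger
```

The appeal is that no profiling over nuisance parameters is needed: the pseudo-values reduce the problem to empirical likelihood for a mean.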

6.
In health technology assessment (HTA), besides network meta-analysis (NMA), indirect comparisons (IC) have become an important tool for providing evidence between two treatments when no head-to-head data are available. Researchers may use the adjusted indirect comparison based on the Bucher method (AIC) or the matching-adjusted indirect comparison (MAIC). While the Bucher method may give biased results when the included trials differ in baseline characteristics that influence the treatment outcome (treatment effect modifiers), this issue can be addressed by the MAIC method if individual patient data (IPD) are available for at least one part of the AIC. Here, the IPD are reweighted to match the baseline characteristics and/or treatment effect modifiers of the published data. However, the MAIC method does not provide a solution when several common comparators are available. In these situations, assuming that the indirect comparisons via the different common comparators are homogeneous, we propose merging their results using meta-analysis methodology to provide a single, potentially more precise, treatment effect estimate. This paper introduces a method to combine several MAIC networks using classic meta-analysis techniques, discusses the advantages and limitations of the approach, and demonstrates a practical application combining several (M)AIC networks using data from Phase III psoriasis randomized controlled trials (RCTs).
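The MAIC reweighting step can be sketched for a single effect modifier (a toy version; real MAIC matches several moments at once and then re-estimates the treatment effect on the reweighted IPD): weights of the form exp(beta * x_i) are chosen so the weighted IPD mean matches the published aggregate mean, with beta found by bisection on the centred covariate. All names and numbers here are illustrative assumptions.

```python
import math
import random

def maic_weights(x, target_mean, lo=-2.0, hi=2.0, iters=100):
    """Method-of-moments weights w_i = exp(beta * (x_i - xbar)) chosen so that
    the weighted mean of x equals target_mean; beta found by bisection."""
    xbar = sum(x) / len(x)
    c = [xi - xbar for xi in x]                    # centre to avoid overflow
    def gap(beta):
        w = [math.exp(beta * ci) for ci in c]
        return sum(wi * xi for wi, xi in zip(w, x)) / sum(w) - target_mean
    for _ in range(iters):                         # gap is increasing in beta
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return [math.exp(beta * ci) for ci in c]

random.seed(11)
ipd_age = [random.gauss(50.0, 8.0) for _ in range(500)]   # IPD covariate (age)
w = maic_weights(ipd_age, target_mean=55.0)               # match published mean
rebalanced = sum(wi * a for wi, a in zip(w, ipd_age)) / sum(w)
```

After reweighting, the IPD population matches the aggregate trial on the chosen modifier, which is what justifies the subsequent indirect comparison.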

7.
In this paper, we deal with the analysis of case series. The self-controlled case series (SCCS) method was developed to analyse the temporal association between a time-varying exposure and an outcome event. We apply the SCCS method to the vaccination data of the German Examination Survey for Children and Adolescents (KiGGS). We illustrate that the standard SCCS method cannot be applied to terminal events such as death; in this situation, an extension of SCCS adjusted for terminal events gives unbiased point estimators. The key question of this paper is whether the general Cox regression model for time-dependent covariates may be an alternative to the adjusted SCCS method for terminal events. In contrast to the SCCS method, Cox regression is included in most software packages (SPSS, SAS, Stata, R, …) and is easy to use. We show that Cox regression is applicable for testing the null hypothesis. In our KiGGS example without censored data, Cox regression and the adjusted SCCS method yield point estimates almost identical to those of the standard SCCS method. We have conducted several simulation studies to complete the comparison of the two methods. Cox regression tends to underestimate the true effect with prolonged risk periods and strong effects (relative incidence > 2). If the risk of the event is strongly affected by age, the adjusted SCCS method slightly overestimates the predefined exposure effect. Cox regression has the same efficiency as the adjusted SCCS method in the simulations.

8.
Empirical Likelihood for Censored Linear Regression
In this paper we investigate the empirical likelihood method in a linear regression model when the observations are subject to random censoring. An empirical likelihood ratio for the slope parameter vector is defined, and it is shown that its limiting distribution is a weighted sum of independent chi-square distributions. This reduces to the empirical likelihood for the linear regression model first studied by Owen (1991) when no censoring is present. Some simulation studies are presented to compare the empirical likelihood method with the normal approximation based method proposed in Lai et al. (1995). The empirical likelihood method is found to perform much better than the normal approximation method.

9.
On the consistency of the maximum spacing method
The main result of this paper is a consistency theorem for the maximum spacing method, a general method of estimating parameters in continuous univariate distributions, introduced by Cheng and Amin (J. Roy. Statist. Soc. Ser. B 45 (1983) 394–403) and independently by Ranneby (Scand. J. Statist. 11 (1984) 93–112). This main result generalizes a theorem of Ranneby (1984). Also, some examples are given which show that this estimation method also works in cases where the maximum likelihood method breaks down.
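A minimal sketch of the maximum spacing method for a one-parameter exponential model (a grid search is used for clarity; any numerical optimizer would do): the estimate maximizes the sum of log spacings of the fitted CDF at consecutive order statistics.

```python
import math
import random

def max_spacing_exponential(sample, grid):
    """Maximum spacing estimate of an exponential rate: choose the rate that
    maximizes the sum of log spacings of F at consecutive order statistics."""
    x = sorted(sample)
    def score(rate):
        F = [0.0] + [1.0 - math.exp(-rate * v) for v in x] + [1.0]
        # Guard against zero spacings from numerically tied observations
        return sum(math.log(max(F[i + 1] - F[i], 1e-300))
                   for i in range(len(F) - 1))
    return max(grid, key=score)

random.seed(4)
data = [random.expovariate(2.0) for _ in range(2000)]   # true rate = 2
rates = [0.5 + 0.01 * i for i in range(300)]            # grid 0.50 .. 3.49
rate_hat = max_spacing_exponential(data, rates)
```

Unlike maximum likelihood, the spacing objective stays bounded even for models with an unbounded likelihood, which is the situation the examples above exploit.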

10.
The kernel function method developed by Yamato (1971) to estimate a probability density function is essentially a way of smoothing the empirical distribution function. This paper shows how this method can be generalized to estimate signals in a semimartingale model. A recursive convolution-smoothed estimate is used to obtain an absolutely continuous estimate for an absolutely continuous signal of a semimartingale model. It is also shown that the resulting estimator has a smaller asymptotic variance than the one obtained in Thavaneswaran (1988).
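The smoothing idea can be sketched in the simplest i.i.d. setting (not the semimartingale model above): the kernel estimate is the empirical distribution convolved with a Gaussian kernel, which yields an absolutely continuous density estimate.

```python
import math
import random

def kde(sample, h, x):
    """Kernel density estimate at x: the empirical distribution smoothed by a
    Gaussian kernel of bandwidth h (the convolution-smoothing idea)."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) \
        / (n * h * math.sqrt(2.0 * math.pi))

random.seed(5)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
f0 = kde(data, h=0.3, x=0.0)      # estimate of the N(0, 1) density at 0

# The smoothed estimate is a genuine density: it integrates to about 1
grid = [-5.0 + 0.1 * i for i in range(101)]
total = 0.1 * sum(kde(data, 0.3, g) for g in grid)
```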

11.
In this article, we estimate the parameters of the exponential Pareto II distribution by two new methods. The first is based on the principle of maximum entropy (POME) and the second on the Kullback–Leibler divergence of the survival function (KLS). Monte Carlo simulated data are used to evaluate these methods and compare them with the maximum likelihood method. Finally, we fit this distribution to a set of real data using these estimation procedures.

12.
Gomez and Lagakos (1994) propose a nonparametric method for estimating the distribution of a survival time when the origin and end points defining the survival time suffer interval-censoring and right-censoring, respectively. In some situations, the end point also suffers interval-censoring as well as truncation. In this paper, we consider this general situation and propose a two-step estimation procedure for the estimation of the distribution of a survival time based on doubly interval-censored and truncated data. The proposed method generalizes the methods proposed by DeGruttola and Lagakos (1989) and Sun (1995) and is more efficient than that given in Gomez and Lagakos (1994). The approach is based on self-consistency equations. The method is illustrated by an analysis of an AIDS cohort study.

13.
Ibrahim (1990) used the EM algorithm to obtain maximum likelihood estimates of the regression parameters in generalized linear models with partially missing covariates. The technique was termed EM by the method of weights. In this paper, we generalize this technique to Cox regression analysis with missing values in the covariates. We specify a full model, letting the unobserved covariate values be random, and then maximize the observed likelihood. The asymptotic covariance matrix is estimated by the inverse information matrix. The missing data are allowed to be missing at random, and the non-ignorable non-response situation may in principle also be considered. Simulation studies indicate that the proposed method is more efficient than the method suggested by Paik & Tsai (1997). We apply the procedure to a clinical trials example with six covariates, three of them having missing values.

14.
The Quality Measurement Plan (QMP) developed by Hoadley (1981) is a statistical method for analyzing discrete quality-audit data, which consist of defect counts relative to the expected number of defects under standard quality. The QMP is based on an empirical Bayes (EB) model of the audit sampling process. Despite its wide publicity, Hoadley's method has often been described as heuristic. In this paper we offer a hierarchical Bayes (HB) alternative to Hoadley's EB model and thereby overcome much of the criticism against it. Gibbs sampling is used to implement the proposed HB model, and the convergence of the Gibbs sampler is monitored via the algorithm of Gelman and Rubin (1992).
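A toy Gibbs sampler for a Poisson–gamma hierarchy conveys the flavour of the HB computation (this is an illustrative stand-in, not Hoadley's QMP model or the paper's exact hierarchy): defect counts are Poisson with gamma-distributed period rates, and both full conditionals are conjugate gamma distributions.

```python
import random

def gibbs_poisson_gamma(y, a=2.0, c=1.0, d=1.0, iters=3000, burn=500, seed=6):
    """Toy Gibbs sampler for y_t ~ Poisson(lam_t), lam_t ~ Gamma(a, rate=b),
    b ~ Gamma(c, rate=d).  Both full conditionals are conjugate gammas."""
    rng = random.Random(seed)
    T = len(y)
    b = 1.0
    draws = []
    for it in range(iters):
        # lam_t | y_t, b ~ Gamma(a + y_t, rate = b + 1)
        lam = [rng.gammavariate(a + yt, 1.0 / (b + 1.0)) for yt in y]
        # b | lam ~ Gamma(c + T * a, rate = d + sum(lam))
        b = rng.gammavariate(c + T * a, 1.0 / (d + sum(lam)))
        if it >= burn:
            draws.append(sum(lam) / T)            # posterior draw of mean rate
    return sum(draws) / len(draws)

defects = [3, 5, 2, 4, 6, 3, 5, 4]                # audit defect counts per period
post_mean = gibbs_poisson_gamma(defects)          # near the data mean of 4
```

Convergence of such a chain can then be monitored with the Gelman–Rubin diagnostic, as the paper does for its own model.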

15.
The basic idea of an interaction spline model was presented in Barry (1983), and general interaction spline models were proposed by Wahba (1986). The purely periodic spline model, a special case of the general interaction spline models, is considered in this paper. A stepwise approach using generalized cross-validation (GCV) for fitting the model is proposed. Thanks to the nice orthogonality properties of purely periodic functions, the stepwise approach is a promising method for the interaction spline model. The approach can also be generalized to non-purely-periodic spline models, but this is not done here.

16.
When auxiliary information is available at the design stage, samples may be selected by means of balanced sampling. The variance of the Horvitz-Thompson estimator is then reduced, since it is approximately given by that of the residuals of the variable of interest on the balancing variables. In this paper, a method for computing optimal inclusion probabilities for balanced sampling on given auxiliary variables is studied. We show that the method previously suggested by Tillé and Favre (2005) enables the computation of inclusion probabilities that lead to a decrease in variance under some conditions on the set of balancing variables. A disadvantage is that the target optimal inclusion probabilities depend on the variable of interest. If the needed quantities are unknown at the design stage, we propose to use estimates instead (e.g., arising from a previous wave of the survey). A limited simulation study suggests that, under some conditions, our method performs better than the method of Tillé and Favre (2005).

17.
1. Introduction. Among all categories of energy, oil has long accounted for the largest share of global consumption. With its rapid economic growth, and against an increasingly tight global energy situation, China's demand for energy has risen sharply: in 2004 China's oil imports overtook Japan's, making it the world's second-largest oil importer after the United States, and its dependence on foreign oil continues to grow. According to available data, crude oil production in 2004 was 1.7

18.
In this paper we use non-parametric local polynomial methods to estimate the regression function m(x). Y may be a binary or continuous response variable, and X is continuous with non-uniform density. The main contributions of this paper are the weak convergence of a bandwidth process for kernels of order (0, k), k = 2j, j ≥ 1, and the proposal of a local data-driven bandwidth selection method which is particularly beneficial when X is not distributed uniformly. This selection method minimizes estimates of the asymptotic MSE and estimates the bias portion in an innovative way which relies on the order of the kernel rather than on direct estimation of the second derivative m″(x). We show that this method achieves the optimal asymptotic MSE, i.e. the method is efficient. Simulation studies are provided which illustrate the method for both binary and continuous response cases.
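The base estimator can be sketched as a local linear fit at a point (the bandwidth is fixed by hand here; selecting it in a data-driven way is the paper's contribution): weighted least squares with Gaussian kernel weights, where the fitted intercept is the estimate of m(x0).

```python
import math
import random

def local_linear(xs, ys, x0, h):
    """Local linear estimate of m(x0): weighted least squares with a
    Gaussian kernel weight on (x - x0) / h; the intercept is the estimate."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, xs, ys))
    # Solve the 2x2 normal equations; the intercept estimates m(x0)
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)

random.seed(7)
xs = [random.betavariate(2.0, 5.0) for _ in range(1500)]   # non-uniform design
ys = [math.sin(2.0 * math.pi * x) + random.gauss(0.0, 0.2) for x in xs]
m_hat = local_linear(xs, ys, x0=0.25, h=0.05)   # true m(0.25) = sin(pi/2) = 1
```

With a non-uniform design such as this Beta density, a single global bandwidth is a compromise, which motivates the local bandwidth selection studied above.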

19.
The Monte Carlo method gives estimators of the expectation E_f[h(X)] based on samples from either the true density f or from some instrumental density. In this paper, we show that the Riemann estimators introduced by Philippe (1997) can be improved by using the importance sampling method. This approach produces a class of Monte Carlo estimators whose variance is of order O(n^{-2}). The choice of an optimal estimator within this class is discussed. Some simulations illustrate the improvement brought by this method. Moreover, we give a criterion to assess the convergence of our optimal estimator to the integral of interest.
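A minimal comparison of plain Monte Carlo and importance sampling (illustrative only; it does not implement the Riemann estimators discussed above): samples drawn from an instrumental density g are reweighted by f/g to estimate an expectation under f.

```python
import math
import random

def plain_mc(h, sampler, n, rng):
    """Plain Monte Carlo: average h over draws from the target density f."""
    return sum(h(sampler(rng)) for _ in range(n)) / n

def importance_mc(h, f_pdf, g_pdf, g_sampler, n, rng):
    """Importance sampling: draw from the instrumental density g and
    reweight each evaluation of h by f(x) / g(x)."""
    total = 0.0
    for _ in range(n):
        x = g_sampler(rng)
        total += h(x) * f_pdf(x) / g_pdf(x)
    return total / n

rng = random.Random(8)
# Target: E_f[h(X)] with f = Exp(1) and h(x) = x^2; the true value is 2
f_pdf = lambda x: math.exp(-x)
g_pdf = lambda x: 0.5 * math.exp(-0.5 * x)       # heavier-tailed proposal Exp(1/2)
est_plain = plain_mc(lambda x: x * x, lambda r: r.expovariate(1.0), 20000, rng)
est_is = importance_mc(lambda x: x * x, f_pdf, g_pdf,
                       lambda r: r.expovariate(0.5), 20000, rng)
```

Here the heavier-tailed proposal reduces the variance for this tail-weighted integrand; the paper's contribution is combining such reweighting with Riemann sums to push the variance down to O(n^{-2}).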

20.
Unbiased estimators for restricted adaptive cluster sampling
In adaptive cluster sampling the size of the final sample is random, thus creating design problems. To get round this, Brown (1994) and Brown & Manly (1998) proposed a modification of the method, placing a restriction on the size of the sample, and using standard but biased estimators for estimating the population mean. But in this paper a new unbiased estimator and an unbiased variance estimator are proposed, based on estimators proposed by Murthy (1957) and extended to sequential and adaptive sampling designs by Salehi & Seber (2001). The paper also considers a restricted version of the adaptive scheme of Salehi & Seber (1997a) in which the networks are selected without replacement, and obtains unbiased estimators. The method is demonstrated by a simple example. Using simulation from this example, the new estimators are shown to compare very favourably with the standard biased estimators.
