Similar Articles (20 results)
1.
Wild Bootstrapping in Finite Populations with Auxiliary Information
Consider a finite population u, which can be viewed as a realization of a super-population model. A simple ratio model (linear regression without intercept) with heteroscedastic errors is supposed to have generated u. A random sample is drawn without replacement from u. In this set-up, a two-stage wild bootstrap resampling scheme, as well as several other useful forms of bootstrapping in finite populations, will be considered. Some asymptotic results for various bootstrap approximations for normalized and Studentized versions of the well-known ratio and regression estimators are given. Bootstrap-based confidence intervals for the population total and for the regression parameter of the underlying ratio model are also discussed.
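As a rough illustration of the wild-bootstrap idea for a ratio model, the sketch below uses a simple Rademacher-weight variant on a simulated finite population; all names and constants are illustrative, and the paper's two-stage scheme is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population generated by a ratio model y = R*x + error
# with heteroscedastic errors (constants are illustrative, not from the paper).
N, n, B = 1000, 100, 500
x = rng.uniform(1.0, 5.0, size=N)
y = 2.0 * x + rng.normal(scale=np.sqrt(x))        # error variance grows with x

# Simple random sample drawn without replacement from the population.
idx = rng.choice(N, size=n, replace=False)
xs, ys = x[idx], y[idx]

r_hat = ys.sum() / xs.sum()                       # ratio estimator of R
resid = ys - r_hat * xs

# One simple wild-bootstrap variant: perturb residuals by Rademacher signs,
# which preserves the heteroscedastic error scale observation by observation.
boot = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)           # external wild weights
    y_star = r_hat * xs + resid * v
    boot[b] = y_star.sum() / xs.sum()

ci = np.percentile(boot, [2.5, 97.5])             # percentile CI for R
```

The Rademacher weights keep each bootstrap residual's magnitude equal to the observed one, so the scheme respects heteroscedasticity without estimating the variance function.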

2.
Stute (1993, Consistent estimation under random censorship when covariables are present. Journal of Multivariate Analysis 45, 89–103) proposed a new method to estimate regression models with a censored response variable using least squares and showed the consistency and asymptotic normality of his estimator. This article proposes a new bootstrap-based methodology that improves the performance of the asymptotic interval estimation for small sample sizes. We therefore compare the behavior of Stute's asymptotic confidence interval with that of several confidence intervals based on bootstrap resampling techniques. In order to build these confidence intervals, we propose a new bootstrap resampling method adapted to censored regression models. We use simulations to study the improvement in performance that the proposed bootstrap-based confidence intervals show over the asymptotic approach. Simulation results indicate that, for the new proposals, coverage percentages are closer to the nominal values and, in addition, intervals are narrower.
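A minimal sketch of a case-resampling bootstrap confidence interval for a regression slope under right censoring. The naive complete-case estimator used here is generally biased (Stute's Kaplan-Meier-weighted least squares corrects this); data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: linear model with a right-censored response.
n, B = 80, 1000
x = rng.uniform(0, 2, n)
t = 1.0 + 1.5 * x + rng.normal(0, 0.5, n)     # latent (uncensored) response
c = rng.uniform(1, 6, n)                       # censoring times
y = np.minimum(t, c)                           # observed response
delta = (t <= c).astype(float)                 # 1 = uncensored

def naive_slope(x, y, delta):
    # Naive complete-case least squares on uncensored points only;
    # this estimator is biased in general, unlike Stute's weighted version.
    m = delta == 1
    return np.polyfit(x[m], y[m], 1)[0]

slopes = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)                  # resample (x, y, delta) triples
    slopes[b] = naive_slope(x[i], y[i], delta[i])

lo, hi = np.percentile(slopes, [2.5, 97.5])    # percentile bootstrap CI
```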

3.
In this article, we propose a resampling method based on perturbing the estimating functions to compute the asymptotic variances of quantile regression estimators under the missing-at-random condition. We prove that the conditional distributions of the resampling estimators are asymptotically equivalent to the distributions of the quantile regression estimators. Our method can deal with complex situations, where the response and some of the covariates are missing. Numerical results based on simulated and real data are provided under several designs.

4.
Cramér–von Mises-type goodness-of-fit tests for case 2 interval-censored data are proposed based on a resampling method called the leveraged bootstrap, and their asymptotic consistency is shown. The proposed tests are computationally efficient, and in fact can be applied to other types of censored data, including right-censored data, doubly censored data and (mixtures of) case k interval-censored data. Some simulation results and an example from AIDS research are presented.

5.
A new resampling technique, referred to as the "local grid bootstrap" (LGB), based on the nonparametric local bootstrap and applicable to a wide range of stationary general-state-space Markov processes, is proposed. This nonparametric technique resamples local neighborhoods defined around the observed values of the multivariate time series. The asymptotic behavior of this resampling procedure is studied in detail. Applications to linear and nonlinear (in particular, chaotic) simulated time series are presented and compared with the approach of Paparoditis and Politis [2002. J. Statist. Plan. Inf. 108, 301–328], referred to as the "local bootstrap" (LB) and developed in earlier similar work. The method proves efficient and robust even when the observed time series is reasonably short.
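A toy sketch of the local-bootstrap idea that LGB builds on: each successor value is resampled from the observed transitions, weighted by a kernel around the current state. The bandwidth and all names are illustrative choices, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated AR(1) series standing in for an observed Markov process.
T = 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def local_bootstrap_path(x, length, h=0.3, rng=rng):
    """Resample a path by drawing each successor from observed transitions
    whose starting state lies in a kernel neighborhood of the current state."""
    starts, succs = x[:-1], x[1:]
    path = np.empty(length)
    path[0] = rng.choice(x)
    for t in range(1, length):
        w = np.exp(-0.5 * ((starts - path[t - 1]) / h) ** 2)  # kernel weights
        w /= w.sum()
        path[t] = rng.choice(succs, p=w)
    return path

xb = local_bootstrap_path(x, 200)
```

Because every resampled value is an observed data point, the scheme is fully nonparametric: it never simulates from a fitted model, only re-uses transitions actually seen in the series.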

6.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data are a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data are censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required for calculating importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inference when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.

7.
A composite endpoint consists of multiple endpoints combined into one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: a) in conventional analyses, all components are treated as equally important; and b) in time-to-event analyses, the first event considered may not be the most important component. Recently, Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched-pair approach and the unmatched-pair approach. In the unmatched-pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched-pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers, and ties. We extend the unmatched-pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics, following Wei and Johnson (1985). We perform simulations assessing the confidence intervals constructed based on our approach versus those per the bootstrap resampling and per Luo et al. We have also applied our approach to a liver transplant Phase III study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the importance order among components matters, and that the methods per our approach and per Luo et al., although derived from large-sample theory, are not limited to large samples but also work well for relatively small sample sizes. Different from Pocock et al. and Luo et al., our approach is a generalized analytical method, which is valid for any algorithm determining winners, losers, and ties. Copyright © 2016 John Wiley & Sons, Ltd.
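The unmatched-pair win-ratio computation can be sketched on toy data: every treatment patient is compared with every control patient, first on the more important endpoint, falling back to the next one on ties. Censoring is ignored here and the tie-breaking rule is one illustrative choice.

```python
# Toy unmatched win-ratio calculation for a composite of
# (death time, hospitalization time); larger times are better.
treat = [(10, 5), (8, 7), (12, 3)]    # (death_time, hosp_time) per patient
ctrl = [(6, 4), (9, 2)]

wins = losses = 0
for td, th in treat:
    for cd, ch in ctrl:
        if td != cd:                   # compare the most important endpoint first
            wins += td > cd            # later death = win for treatment
            losses += td < cd
        elif th != ch:                 # tie on death: fall back to hospitalization
            wins += th > ch
            losses += th < ch
        # a full tie contributes to neither count

win_ratio = wins / losses
```

The hierarchy is what distinguishes the win ratio from a conventional first-event analysis: hospitalization is consulted only when the death comparison is tied.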

8.
In this note we propose a novel kernel density estimator for directly estimating the probability and cumulative distribution function of an L-estimate from a single population, based on the theory in Knight (1985) in conjunction with classic inversion theory. This idea is further developed into a kernel density estimator for the difference of L-estimates from two independent populations. The methodology is developed via a "plug-in" approach, but it is distinct from classic bootstrap methodology in that it is analytically and computationally feasible to provide an exact estimate of the distribution function, thus eliminating resampling-related error. The asymptotic and finite-sample properties of our estimators are examined. The procedure is illustrated by generating the kernel density estimate for Tukey's trimean from a small data set.

9.
In the regression model with censored data, it is not straightforward to estimate the covariances of the regression estimators, since their asymptotic covariances may involve the unknown error density function and its derivative. In this article, a resampling method for making inferences on the parameter, based on some estimating functions, is discussed for the censored regression model. The inference procedures are associated with a weight function. To find the best weight functions for the proposed procedures, extensive simulations are performed. The validity of the approximation to the distribution of the estimator by a resampling technique is also examined visually. Implementation of the procedures is discussed and illustrated in a real data example.

10.

We address the problem of testing proportional hazards in the two-sample survival setting allowing right censoring, i.e., we check whether the well-known Cox model underlies the data. Although there are many test proposals for this problem, only a few papers suggest how to improve the performance for small sample sizes. In this paper, we do exactly this by carrying out our test as a permutation test as well as a wild bootstrap test. The asymptotic properties of our test, namely asymptotic exactness under the null and consistency, carry over to both resampling versions. Various simulations for small sample sizes reveal an actual improvement of the empirical size and a reasonable power performance when using the resampling versions. Moreover, the resampling tests perform better than the existing tests of Gill and Schumacher and of Grambsch and Therneau. The tests' practical applicability is illustrated by discussing real data examples.

11.
If the power spectral density of a continuous-time stationary stochastic process is not limited to a finite bandwidth, data sampled from that process at any uniform sampling rate lead to biased and inconsistent spectrum estimators, which are unsuitable for constructing confidence intervals. In this paper, we use the smoothed periodogram estimator to construct asymptotic confidence intervals shrinking to the true spectrum, by allowing the sampling rate to go to infinity suitably fast as the sample size goes to infinity. The proposed method requires minimal computation, as it does not involve the bootstrap or other resampling. The method is illustrated through a Monte Carlo simulation study, and its performance is compared with that of the corresponding method based on uniform sampling at a fixed rate.
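Setting aside the increasing-sampling-rate asymptotics, the basic smoothed-periodogram confidence interval can be sketched for white noise; the bandwidth m and the variance approximation below are standard textbook choices, not the paper's specific construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unit-variance white noise: its true spectral density is constant,
# sigma^2 / (2*pi) = 1 / (2*pi), under this normalization.
n = 4096
x = rng.normal(size=n)

# Raw periodogram at the Fourier frequencies in (0, pi).
fft = np.fft.rfft(x)
pgram = (np.abs(fft) ** 2) / (2 * np.pi * n)
pgram = pgram[1:-1]                        # drop frequencies 0 and pi

# Smooth by a moving average over 2m+1 neighbouring ordinates.
m = 30
kernel = np.ones(2 * m + 1) / (2 * m + 1)
smoothed = np.convolve(pgram, kernel, mode="valid")

# Approximate 95% CI: the smoothed estimator is roughly normal with
# variance f(w)^2 / (2m+1) at interior frequencies.
se = smoothed / np.sqrt(2 * m + 1)
lo, hi = smoothed - 1.96 * se, smoothed + 1.96 * se
```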

12.
Summary: This paper deals with nonparametric methods for combining dependent permutation or randomization tests. In particular, they are nonparametric with respect to the underlying dependence structure. The methods are based on a without-replacement resampling procedure (WRRP) conditional on the observed data, also called conditional simulation, which provides suitable estimates, as good as computing time permits, of the permutation distribution of any statistic. A class C of combining functions is characterized in such a way that all its members, under suitable and reasonable conditions, are found to be consistent and unbiased. Moreover, for some of its members, their almost sure asymptotic equivalence with respect to best tests in particular cases is shown. An example of application to a multivariate permutational t-paired test is also discussed.
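The conditional resampling estimate of a permutation distribution can be sketched for a paired test, where permutation amounts to random sign flips of the paired differences. The data are toy values; the paper's WRRP and combining functions for dependent tests are more general.

```python
import numpy as np

rng = np.random.default_rng(4)

# Paired differences and a Monte Carlo estimate of the permutation p-value
# for H0: no treatment effect (differences symmetric about zero).
d = np.array([0.8, 1.1, -0.2, 0.9, 1.4, 0.3, 0.7, -0.1, 1.0, 0.6])
t_obs = d.mean()                                   # observed test statistic

B = 5000
count = 0
for _ in range(B):
    signs = rng.choice([-1.0, 1.0], size=d.size)   # random exchange within pairs
    count += abs((signs * d).mean()) >= abs(t_obs)

p_value = count / B                                # conditional-simulation estimate
```

Conditioning on the observed data, the 2^10 equally likely sign patterns form the exact permutation distribution; the Monte Carlo loop estimates it as accurately as computing time permits, which is the role WRRP plays in the paper.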

13.
ABSTRACT

This article presents a new test for unit roots based on least absolute deviation estimation, specially designed to work for time series with autoregressive errors. The methodology used is a bootstrap scheme based on estimating a model and then the innovations. The resampling part is performed under the null hypothesis and, as is customary in bootstrap procedures, is automatic and does not rely on the calculation of any nuisance parameter. The validity of the procedure is established, and the asymptotic distribution of the proposed statistic is proved to converge to the correct distribution. To analyze the performance of the test for finite samples, a Monte Carlo study is conducted, showing very good behavior in many different situations.
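A simplified residual-bootstrap sketch of a unit-root test, with OLS in place of the paper's least-absolute-deviation estimation and i.i.d. errors instead of the autoregressive-error modelling; all choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

T, B = 200, 499
y = np.cumsum(rng.normal(size=T))              # random walk, so the null is true

dy, ylag = np.diff(y), y[:-1]
rho_hat = (ylag @ dy) / (ylag @ ylag)          # slope of the Dickey-Fuller regression
resid = dy - rho_hat * ylag
resid -= resid.mean()                          # centred residuals for resampling

stats = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=T - 1, replace=True)
    yb = np.concatenate(([y[0]], y[0] + np.cumsum(e)))   # rebuilt under H0: unit root
    dyb, yblag = np.diff(yb), yb[:-1]
    stats[b] = (yblag @ dyb) / (yblag @ yblag)

p_value = (stats <= rho_hat).mean()            # left-tailed bootstrap p-value
```

Imposing the unit root when rebuilding the series is the key step: the bootstrap statistics then mimic the null distribution, so no nuisance-parameter tables are needed.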

14.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on the normal approximation, which may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.
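The jackknife-empirical-likelihood recipe can be sketched for one U-statistic, the Gini mean difference: jackknife pseudo-values convert it into an approximate mean, to which standard empirical likelihood with chi-square calibration applies. The data and the bisection solver are illustrative implementation choices.

```python
import numpy as np

def el_log_ratio(v, mu, iters=100):
    """-2 log empirical-likelihood ratio for the mean of v at value mu,
    via bisection on the Lagrange multiplier (mu must lie strictly inside
    the range of v)."""
    z = v - mu
    lo, hi = -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.mean(z / (1.0 + lam * z)) > 0:   # score is decreasing in lam
            lo = lam
        else:
            hi = lam
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(6)
x = rng.exponential(size=60)
n = len(x)

def gini_md(x):
    # U-statistic with kernel |x_i - x_j| (Gini mean difference)
    m = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (m * (m - 1))

theta = gini_md(x)
# Jackknife pseudo-values; for a U-statistic their average equals theta exactly.
pseudo = np.array([n * theta - (n - 1) * gini_md(np.delete(x, i))
                   for i in range(n)])

# Treat the pseudo-values as an i.i.d. sample and apply EL for a mean;
# the statistic is compared with chi-square(1) quantiles (3.84 at the 5% level).
stat = el_log_ratio(pseudo, 1.0)   # H0: Gini mean difference = 1 (true for Exp(1))
```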

15.
Econometric Reviews, 2008, 27(1): 139–162
The quality of the asymptotic normality of realized volatility can be poor if sampling does not occur at very high frequencies. In this article we consider an alternative approximation to the finite-sample distribution of realized volatility based on Edgeworth expansions. In particular, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions. The Monte Carlo study we conduct shows that the intervals based on the Edgeworth corrections have improved properties relative to the conventional intervals based on the normal approximation. Contrary to the bootstrap, the Edgeworth approach is an analytical approach that is easily implemented, without requiring any resampling of one's data. A comparison between the bootstrap and the Edgeworth expansion shows that the bootstrap outperforms the Edgeworth-corrected intervals. Thus, if we are willing to incur the additional computational cost involved in computing bootstrap intervals, these are preferred over the Edgeworth intervals. Nevertheless, if we are not willing to incur this additional cost, our results suggest that Edgeworth-corrected intervals should replace the conventional intervals based on the first-order normal approximation.
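For intuition, the first-order normal interval that the Edgeworth corrections improve upon can be sketched in a constant-volatility toy model; the quarticity-based variance estimate follows standard practice, but all constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy model: constant daily volatility, M equally spaced intraday returns.
M = 78                                     # e.g. 5-minute returns over 6.5 hours
sigma2 = 0.04                              # integrated variance over the day
r = rng.normal(scale=np.sqrt(sigma2 / M), size=M)

rv = np.sum(r ** 2)                        # realized volatility (variance) estimate
quarticity = M / 3 * np.sum(r ** 4)        # realized quarticity estimate
se = np.sqrt(2 * quarticity / M)           # first-order asymptotic standard error
lo, hi = rv - 1.96 * se, rv + 1.96 * se    # conventional normal interval
```

With only M = 78 returns, the chi-square-like skewness of rv makes this symmetric interval inaccurate; Edgeworth expansions correct the quantiles analytically, while the bootstrap corrects them by resampling.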

16.
Accelerated life-testing (ALT) is a very useful technique for examining the reliability of highly reliable products. It allows the experimenter to obtain failure data more quickly at increased stress levels than under normal operating conditions. A step-stress model is one special class of ALT, and in this article we consider a simple step-stress model under the cumulative exposure model with lognormally distributed lifetimes in the presence of Type-I censoring. We then discuss inferential methods for the unknown parameters of the model by the maximum likelihood estimation method. Some numerical methods, such as the Newton–Raphson and quasi-Newton methods, are discussed for solving the corresponding non-linear likelihood equations. Next, we discuss the construction of confidence intervals for the unknown parameters based on (i) the asymptotic normality of the maximum likelihood estimators (MLEs), and (ii) the parametric bootstrap resampling technique. A Monte Carlo simulation study is carried out to examine the performance of these methods of inference. Finally, a numerical example is presented in order to illustrate all the methods of inference developed here.
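The parametric bootstrap interval of (ii) can be sketched in a simplified uncensored lognormal setting, where the MLEs are closed-form; the article's step-stress, Type-I-censored likelihood instead requires numerical optimization. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simplified setting: uncensored lognormal lifetimes, closed-form MLEs.
n, B = 50, 2000
x = rng.lognormal(mean=1.0, sigma=0.5, size=n)

mu_hat = np.log(x).mean()                  # MLE of the lognormal mu parameter
sig_hat = np.log(x).std()                  # MLE of sigma (no censoring)

# Parametric bootstrap: simulate from the fitted model and re-estimate.
boot = np.empty(B)
for b in range(B):
    xb = rng.lognormal(mean=mu_hat, sigma=sig_hat, size=n)
    boot[b] = np.log(xb).mean()

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap CI for mu
```

Under censoring, the only change in principle is that both the original fit and each bootstrap re-fit come from maximizing the censored likelihood numerically, e.g. by Newton-Raphson as in the article.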

17.
The number of variables in a regression model is often too large, and a more parsimonious model may be preferred. Selection strategies (e.g. all-subset selection with various penalties for model complexity, or stepwise procedures) are widely used, but there are few analytical results about their properties. The problems of replication stability, model complexity, selection bias and an over-optimistic estimate of the predictive value of a model are discussed together with several proposals based on resampling methods. The methods are applied to data from a case–control study on atopic dermatitis and a clinical trial to compare two chemotherapy regimes by using a logistic regression and a Cox model. A recent proposal to use shrinkage factors to reduce the bias of parameter estimates caused by model building is extended to parameterwise shrinkage factors and is discussed as a further possibility to illustrate problems of models which are too complex. The results from the resampling approaches favour greater simplicity of the final regression model.
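One common resampling diagnostic in this spirit is the bootstrap inclusion frequency of each candidate variable, a direct measure of replication stability. The sketch below uses a toy correlation-based selector as a stand-in for a real stepwise procedure; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data: only the first two of six candidate predictors matter.
n, p, B = 120, 6, 200
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)

def select(X, y, k=2):
    # Stand-in selector: keep the k predictors most correlated with y.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

freq = np.zeros(p)
for b in range(B):
    i = rng.integers(0, n, n)               # bootstrap resample of cases
    for j in select(X[i], y[i]):
        freq[j] += 1
freq /= B                                    # inclusion frequency per variable
```

Variables selected in nearly every resample are stable; variables that drift in and out signal selection bias and an over-optimistic apparent fit, which is the replication-stability problem the resampling proposals address.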

18.
Abstract. Many statistical models arising in applications contain non- and weakly-identified parameters. Due to identifiability concerns, tests concerning the parameters of interest may not be able to rely on conventional theory, and it may not be clear how to assess statistical significance. This paper extends the literature by developing a testing procedure that can be used to evaluate hypotheses under non- and weakly-identifiable semiparametric models. The test statistic is constructed from a general estimating function of a finite-dimensional parameter representing the population characteristics of interest, while other characteristics, which may be described by infinite-dimensional parameters and are viewed as nuisances, are left completely unspecified. We derive the limiting distribution of this statistic and propose theoretically justified resampling approaches to approximate its asymptotic distribution. The methodology's practical utility is illustrated in simulations and an analysis of quality-of-life outcomes from a longitudinal study on breast cancer.

19.
Abstract

Using a model-assisted approach, this paper studies asymptotically design-unbiased (ADU) estimation of a population "distribution function" and extends this to deriving an asymptotically approximately unbiased estimator of a population quantile from a sample chosen with varying probabilities. The respective asymptotic standard errors and confidence intervals are then worked out. Numerical findings based on actual data support the theory with efficient results.

