Similar Documents
20 similar documents found.
1.
A double-bootstrap confidence interval must usually be approximated by a Monte Carlo simulation, consisting of two nested levels of bootstrap sampling. We provide an analysis of the coverage accuracy of the interval which takes account of both the inherent bootstrap and Monte Carlo errors. The analysis shows that, by a suitable choice of the number of resamples drawn at the inner level of bootstrap sampling, we can reduce the order of coverage error. We consider also the effects of performing a finite Monte Carlo simulation on the mean length and variability of length of two-sided intervals. An adaptive procedure is presented for the choice of the number of inner level resamples. The effectiveness of the procedure is illustrated through a small simulation study.
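A minimal Python sketch of the nested, two-level resampling this entry analyzes, applied to a calibrated percentile interval for a sample mean; the function name, the b1/b2 defaults, and the prepivoting-style calibration variant are illustrative assumptions, not the authors' adaptive procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def double_bootstrap_ci(x, b1=500, b2=100, alpha=0.05):
    """Percentile CI for the mean, calibrated by an inner bootstrap level.

    b1 outer resamples estimate the bootstrap distribution; b2 inner
    resamples per outer resample estimate how the nominal level should
    be adjusted (a prepivoting-style calibration, assumed here).
    """
    n = len(x)
    theta_hat = x.mean()
    outer_stats = np.empty(b1)
    u = np.empty(b1)  # inner-level coverage values
    for i in range(b1):
        xs = rng.choice(x, n, replace=True)          # outer resample
        outer_stats[i] = xs.mean()
        inner = np.array([rng.choice(xs, n, replace=True).mean()
                          for _ in range(b2)])
        u[i] = np.mean(inner <= theta_hat)           # ideally ~ Uniform(0, 1)
    # calibrated quantile levels replace the raw alpha/2 and 1 - alpha/2
    lo_level, hi_level = np.quantile(u, [alpha / 2, 1 - alpha / 2])
    lo, hi = np.quantile(outer_stats, [lo_level, hi_level])
    return lo, hi

print(double_bootstrap_ci(rng.normal(size=40)))
```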

2.
Quasi-random sequences are known to give efficient numerical integration rules in many Bayesian statistical problems where the posterior distribution can be transformed into periodic functions on the n-dimensional hypercube. From this idea we develop a quasi-random approach to the generation of resamples used for Monte Carlo approximations to bootstrap estimates of bias, variance and distribution functions. We demonstrate a major difference between quasi-random bootstrap resamples, which are generated by deterministic algorithms and have no true randomness, and the usual pseudo-random bootstrap resamples generated by the classical bootstrap approach. Various quasi-random approaches are considered and are shown via a simulation study to result in approximants that are competitive in terms of efficiency when compared with other bootstrap Monte Carlo procedures such as balanced and antithetic resampling.
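The quasi-random generation of resamples can be sketched with SciPy's Sobol sequence: each point of the sequence in the n-dimensional unit cube is mapped to one vector of resample indices. This is a generic illustration of the idea, not one of the specific constructions compared in the paper:

```python
import numpy as np
from scipy.stats import qmc

def quasi_random_resamples(x, b=256):
    """Deterministic 'resamples' driven by a scrambled Sobol sequence.

    Each Sobol point in [0, 1)^n is converted to n resample indices,
    so the b resamples spread more evenly than pseudo-random ones.
    """
    n = len(x)
    sobol = qmc.Sobol(d=n, scramble=True, seed=1)
    u = sobol.random(b)                       # shape (b, n), values in [0, 1)
    idx = np.minimum((u * n).astype(int), n - 1)
    return x[idx]                             # one resample per row

x = np.random.default_rng(2).normal(size=30)
boot_means = quasi_random_resamples(x).mean(axis=1)
print("Monte Carlo bias estimate:", boot_means.mean() - x.mean())
```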

3.
This paper proposes a sufficient bootstrap method, which uses only the unique observations in the resamples, to assess individual bioequivalence under a 2 × 4 randomized crossover design. The finite-sample performance of the proposed method is illustrated by extensive Monte Carlo simulations as well as a real experimental data set, and the results are compared with those obtained by the traditional bootstrap technique. Our results reveal that the proposed method is competitive with, or even superior to, the classical percentile bootstrap confidence limits.
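A minimal sketch of the sufficient-bootstrap step — each resample is reduced to its unique observations before the statistic is computed. The generic one-sample setting below is an assumption for illustration; the paper applies the idea to the 2 × 4 crossover design:

```python
import numpy as np

rng = np.random.default_rng(3)

def sufficient_bootstrap(x, stat, b=2000):
    """Bootstrap distribution of stat, using only the unique values
    appearing in each resample (the 'sufficient' resample)."""
    n = len(x)
    return np.array([stat(np.unique(rng.choice(x, n, replace=True)))
                     for _ in range(b)])

x = rng.normal(size=25)
dist = sufficient_bootstrap(x, np.mean)
print("90% percentile limits:", np.quantile(dist, [0.05, 0.95]))
```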

4.
Conventional procedures for Monte Carlo and bootstrap tests require that B, the number of simulations, satisfy a specific relationship with the level of the test. If this relationship is not satisfied, a test that would otherwise be exact will either overreject or underreject for finite B. We present expressions for the rejection frequencies associated with existing procedures and propose a new procedure that yields exact Monte Carlo tests for any positive value of B. This procedure, which can also be used for bootstrap tests, is likely to be most useful when simulation is expensive.
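For context, a sketch of the conventional Monte Carlo p-value the article starts from; this rule gives an exact test only when α(B + 1) is an integer, which is precisely the restriction on B that the proposed procedure removes (the new procedure itself is not reproduced here):

```python
import numpy as np

def mc_pvalue(t_obs, t_sim):
    """Conventional Monte Carlo p-value from B simulated null statistics.

    Rejecting when this p-value is below alpha is exact only if
    alpha * (B + 1) is an integer; otherwise the test over- or
    underrejects for finite B.
    """
    b = len(t_sim)
    return (1 + np.sum(np.asarray(t_sim) >= t_obs)) / (b + 1)

rng = np.random.default_rng(4)
print(mc_pvalue(2.1, rng.standard_normal(99)))  # B = 99 is exact at alpha = 0.05
```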

5.
ROC analysis involving two large datasets is an important method for analyzing statistics of interest in classifier decision making across many disciplines. Data dependency due to multiple uses of the same subjects is ubiquitous, because limited resources force the same subjects to be reused to generate more samples. Hence, a two-layer data structure is constructed, and the nonparametric two-sample two-layer bootstrap is employed to estimate standard errors of statistics of interest derived from two sets of data, such as a weighted sum of two probabilities. In this article, to reduce the bootstrap variance and ensure the accuracy of computation, Monte Carlo studies of bootstrap variability were carried out to determine the appropriate number of bootstrap replications in ROC analysis with data dependency. It is suggested that, with a tolerance of 0.02 for the coefficient of variation, 2,000 bootstrap replications are appropriate under such circumstances.
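The replication-count question can be illustrated by measuring the coefficient of variation of a bootstrap standard-error estimate across independent bootstrap runs, here for a simple one-sample mean rather than the paper's two-layer ROC setting (a hypothetical simplification):

```python
import numpy as np

rng = np.random.default_rng(5)

def se_cv(x, n_boot, n_repeats=50):
    """CV of the bootstrap SE estimate over repeated bootstrap runs;
    a small CV means n_boot replications give a stable estimate."""
    ses = np.array([
        np.std([rng.choice(x, len(x), replace=True).mean()
                for _ in range(n_boot)], ddof=1)
        for _ in range(n_repeats)
    ])
    return ses.std(ddof=1) / ses.mean()

x = rng.normal(size=100)
for n_boot in (200, 500, 2000):
    print(n_boot, round(se_cv(x, n_boot), 4))   # look for CV under 0.02
```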

6.
The primary purpose of this paper is to develop a sequential Monte Carlo approximation to an ideal bootstrap estimate of the parameter of interest. Using the concept of fixed-precision approximation, we construct a sequential stopping rule for determining the number of bootstrap samples to be taken in order to achieve a specified precision of the Monte Carlo approximation. It is shown that the sequential Monte Carlo approximation is asymptotically efficient in the problems of estimating the bias and standard error of a given statistic. Efficient bootstrap resampling is discussed, and a numerical study is carried out to illustrate the theoretical results obtained.
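A rough sketch of a fixed-precision sequential stopping rule for the bootstrap standard error of a mean. The normal-theory approximation sd/√(2(B−1)) for the Monte Carlo error of a standard-deviation estimate is an assumption of this sketch, not the stopping rule derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def sequential_bootstrap_se(x, eps=0.005, b_min=100, b_max=100_000):
    """Keep drawing resamples until the Monte Carlo error of the
    bootstrap-SE estimate is believed to be below eps."""
    vals = []
    while len(vals) < b_max:
        vals.append(rng.choice(x, len(x), replace=True).mean())
        b = len(vals)
        if b >= b_min:
            sd = np.std(vals, ddof=1)
            mc_err = sd / np.sqrt(2 * (b - 1))   # normal-theory approximation
            if mc_err < eps:
                break
    return np.std(vals, ddof=1), len(vals)

se, b_used = sequential_bootstrap_se(rng.normal(size=50))
print(f"bootstrap SE = {se:.4f} after B = {b_used} resamples")
```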

7.
We consider in this article the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications, such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to handle complex problems that lack analytical results. Furthermore, the method yields estimates of the quantiles of a sample statistic for any sample size, even though Monte Carlo simulations are needed for only two optimally selected sample sizes. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and the number of simulated samples required for each. Theoretical results are presented for the bias and variance of the proposed numerical method. The results are illustrated via simulation studies for the classical problem of estimating a bivariate linear structural relationship. It is seen that the simulated samples used in the Monte Carlo method do not have to be very large, and that the method provides a better approximation to quantiles than one based on asymptotic normal theory for skewed sampling distributions.

8.
Various exact tests for statistical inference are available for powerful and accurate decision rules, provided that the corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present the corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.

9.
The decorrelating property of the discrete wavelet transformation (DWT) appears valuable because one can avoid estimating the correlation structure in the original data space by bootstrap resampling of the DWT. Several authors have shown that the wavestrap approximately retains the correlation structure of the observations. However, simply retaining the same correlation structure as the original observations does not guarantee enough variation for regression parameter estimators. Our simulation studies show that these wavestraps yield undercoverage of parameters for a simple linear regression on time series data of the type that arise in functional MRI experiments. It is disappointing that the wavestrap does not provide valid resamples for either white noise sequences or fractional Brownian noise sequences. Thus, the wavestrap method is not completely valid for obtaining resamples related to linear regression analysis and should be used with caution for hypothesis testing as well. The reasons for these undercoverages are also discussed. A parametric bootstrap resampling in the wavelet domain is introduced to offer insight into these previously undiscovered defects in wavestrapping.
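One wavestrap resample can be sketched with PyWavelets: take the DWT, resample the detail coefficients with replacement within each scale, and invert. Resampling within scales and keeping the coarse approximation fixed are common choices assumed here for illustration; the parametric wavelet-domain bootstrap the paper introduces differs:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(7)

def wavestrap(x, wavelet="db4"):
    """One wavestrap resample of a 1-D series: bootstrap the wavelet
    detail coefficients within each scale, then reconstruct."""
    coeffs = pywt.wavedec(x, wavelet)
    new_coeffs = [coeffs[0]]                        # coarse level kept fixed
    for c in coeffs[1:]:
        new_coeffs.append(rng.choice(c, size=len(c), replace=True))
    return pywt.waverec(new_coeffs, wavelet)[: len(x)]

x = np.cumsum(rng.standard_normal(128))             # a correlated toy series
print(wavestrap(x)[:5])
```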

10.
In many situations saddlepoint approximations can replace the Monte Carlo simulation typically used to find the bootstrap distribution of a statistic. We explain how bootstrap and permutation distributions can be expressed as conditional distributions and how methods for linear programming and for fitting generalized linear models can be used to find saddlepoint approximations to these distributions. The ideas are illustrated using an example from insurance.

11.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function, or with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with the asymptotic values. An example demonstrating how this method can be used for sample size determination and power calculations is provided.
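A minimal sketch of a bootstrap power computation for the Pearson correlation from pilot data. Resampling (x, y) pairs and counting rejections at the target sample size is one natural reading of the algorithm; the details may differ from the authors' exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def bootstrap_power(x, y, n_target, alpha=0.05, b=2000):
    """Estimated power of the Pearson test at sample size n_target,
    resampling pairs from the pilot data (x, y)."""
    x, y = np.asarray(x), np.asarray(y)
    rejections = 0
    for _ in range(b):
        idx = rng.integers(0, len(x), size=n_target)   # resample pairs
        _, p = stats.pearsonr(x[idx], y[idx])
        rejections += p < alpha
    return rejections / b

x = rng.normal(size=40)
y = 0.4 * x + rng.normal(size=40)
print("estimated power at n = 60:", bootstrap_power(x, y, 60))
```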

12.
We consider the issue of performing accurate small-sample inference in the beta autoregressive moving average model, which is useful for modeling and forecasting continuous variables that assume values in the interval (0, 1). Inferences based on conditional maximum likelihood estimation have good asymptotic properties, but their performance in small samples may be poor. Accordingly, we propose bootstrap bias corrections of the point estimators and different bootstrap strategies for confidence interval improvements. Our Monte Carlo simulations show that finite-sample inference based on bootstrap corrections is much more reliable than the usual inferences. We also present an empirical application.
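The constant-bias bootstrap correction applied to the point estimators can be sketched generically; here stat stands in for the conditional maximum likelihood estimator, an abstraction rather than an actual βARMA fit:

```python
import numpy as np

rng = np.random.default_rng(9)

def bias_corrected(x, stat, b=1000):
    """Bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta*),
    since mean(theta*) - theta_hat estimates the bias of theta_hat."""
    theta_hat = stat(x)
    boot = np.array([stat(rng.choice(x, len(x), replace=True))
                     for _ in range(b)])
    return 2 * theta_hat - boot.mean()

x = rng.exponential(size=30)
print(bias_corrected(x, lambda a: a.std(ddof=0)))   # a biased statistic
```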

13.
This article investigates the impact of multivariate generalized autoregressive conditional heteroskedastic (GARCH) errors on hypothesis testing for cointegrating vectors. The study reviews a cointegrated vector autoregressive model incorporating multivariate GARCH innovations and a regularity condition required for valid asymptotic inferences. Monte Carlo experiments are then conducted on a test statistic for a hypothesis on the cointegrating vectors. The experiments demonstrate that the regularity condition plays a critical role in rendering the hypothesis testing operational. It is also shown that a Bartlett-type correction and the wild bootstrap are useful in improving the small-sample size and power performance of the test statistic of interest.

14.
In this article, we propose a new test for examining the equality of the coefficient of variation between two different populations. The proposed test is based on the nonparametric bootstrap method and appears to yield several appreciable advantages over current tests, including quick and easy implementation. The test is examined by Monte Carlo simulations and is also evaluated using various numerical studies.
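One plausible nonparametric-bootstrap version of such a test: build a percentile interval for the difference of the two coefficients of variation and reject when it excludes zero. The exact construction in the paper may differ; positive-valued data are assumed so the CV is well defined:

```python
import numpy as np

rng = np.random.default_rng(14)

def cv(a):
    return a.std(ddof=1) / a.mean()          # data assumed positive

def boot_cv_test(x, y, b=5000, alpha=0.05):
    """Bootstrap test of H0: CV(x) = CV(y) via a percentile interval
    for the CV difference."""
    diffs = np.array([
        cv(rng.choice(x, len(x), replace=True))
        - cv(rng.choice(y, len(y), replace=True))
        for _ in range(b)
    ])
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return not (lo <= 0 <= hi)               # True -> reject H0

x = rng.gamma(4.0, 1.0, size=60)
y = rng.gamma(9.0, 1.0, size=60)             # different CV by construction
print("reject H0:", boot_cv_test(x, y))
```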

15.
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.

16.
In reliability theory, risk analysis, renewal processes, and actuarial studies, residual lifetime data play an essential role in studying the conditional tail of lifetime data. In this paper, based on some observed ordered residual Weibull data, we introduce different methods for obtaining prediction intervals (PIs) of future residual lifetimes, including likelihood, Wald, moment, parametric bootstrap, and highest conditional density methods. Monte Carlo simulations are performed to compare the performances of the resulting PIs, and a real data analysis is performed for illustration purposes.

17.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data is a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data is censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time requirement for calculating importance resamples is negligible when compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large sample inferences when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
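The exponential-tilting step can be sketched as follows: resampling probabilities are tilted along the empirical influence values, resamples are drawn with those weights, and a per-resample importance weight undoes the tilting. The influence values infl and tilting parameter lam are user-supplied inputs in this generic sketch, not the Cox-model quantities of the paper:

```python
import numpy as np

rng = np.random.default_rng(17)

def tilted_probs(infl, lam):
    """Exponentially tilted resampling probabilities p_i ~ exp(lam * l_i),
    l_i being the empirical influence value of observation i."""
    w = np.exp(lam * (infl - infl.max()))     # shift for numerical stability
    return w / w.sum()

def importance_resample(x, probs):
    """One importance-bootstrap resample and its importance weight,
    which corrects for sampling with probs instead of 1/n each."""
    n = len(x)
    idx = rng.choice(n, size=n, replace=True, p=probs)
    weight = np.prod(1.0 / (n * probs[idx]))  # product over drawn indices
    return x[idx], weight

x = rng.normal(size=20)
p = tilted_probs(x - x.mean(), lam=0.5)       # influence values for the mean
sample, w = importance_resample(x, p)
print(w)
```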

18.
This paper proposes and investigates a class of Markov Poisson regression models in which Poisson rate functions of covariates are conditional on unobserved states which follow a finite-state Markov chain. Features of the proposed model, estimation, inference, bootstrap confidence intervals, model selection and other implementation issues are discussed. Monte Carlo studies suggest that the proposed estimation method is accurate and reliable for single- and multiple-subject time series data; the choice of starting probabilities for the Markov process has little effect on the parameter estimates; and penalized likelihood criteria are reliable for determining the number of states. Part 2 provides applications of the proposed model.

19.
Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
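The two quantities this entry emphasizes reduce to short formulas for a mean-type simulation estimate: the Monte Carlo standard error of the estimate, and the number of replications needed to reach a target error (a minimal sketch):

```python
import numpy as np

def mc_error(estimates):
    """Monte Carlo standard error of a simulation-based mean estimate."""
    e = np.asarray(estimates)
    return e.std(ddof=1) / np.sqrt(len(e))

def replications_needed(pilot, target_se):
    """Replications R with sd/sqrt(R) <= target_se, from a pilot run."""
    sd = np.asarray(pilot).std(ddof=1)
    return int(np.ceil((sd / target_se) ** 2))

pilot = np.random.default_rng(19).normal(size=500)   # pilot replications
print("MC error now:", mc_error(pilot))
print("R needed for SE 0.01:", replications_needed(pilot, 0.01))
```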

20.
This article proposes a joint test for conditional heteroscedasticity in dynamic panel data models. The test is constructed by checking the joint significance of estimates of the second- to pth-order serial correlations in the squares of the first-differenced errors. To avoid any distributional assumptions on the errors and the effects, we adopt GMM estimation for the coefficient parameters and higher-order moment estimation for the errors. Based on these estimates, a joint test for conditional heteroscedasticity in the errors is constructed. The resulting test is asymptotically chi-squared under the null hypothesis and easy to implement. The small-sample properties of the test are investigated by means of Monte Carlo experiments. The evidence shows that the test performs well in dynamic panels with a large number n of individuals and short time periods T. A real data set is analyzed for illustration.
