Similar Articles
20 similar articles found.
1.
Multiplier bootstrap methods for conditional distributions
The multiplier bootstrap is a fast and easy-to-implement alternative to the standard bootstrap; it has been used successfully in many statistical contexts. In this paper, resampling methods based on multipliers are proposed in a general framework where one investigates the stochastic behavior of a random vector \(\mathbf {Y}\in \mathbb {R}^d\) conditional on a covariate \(X \in \mathbb {R}\). Specifically, two versions of the multiplier bootstrap adapted to empirical conditional distributions are introduced as alternatives to the conditional bootstrap, and their asymptotic validity is formally established. As the method walks hand-in-hand with the functional delta method, theory around the estimation of statistical functionals is developed accordingly; this includes the interval estimation of the conditional mean and variance, the conditional correlation coefficient, Kendall’s dependence measure and the copula. Composite inference about univariate and joint conditional distributions is also considered. The sample behavior of the new bootstrap schemes and related estimation methods is investigated via simulations, and an illustration on real data is provided.
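To illustrate the multiplier idea only (for the unconditional mean; the paper's covariate-conditional versions and functional delta-method machinery are beyond this sketch), one can perturb the centred summands with i.i.d. standard-normal multipliers instead of resampling rows:

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplier_bootstrap_ci(y, n_boot=5000, level=0.95):
    """CI for the mean via the multiplier bootstrap: perturb the centred
    summands with i.i.d. N(0, 1) multipliers instead of resampling rows."""
    n = len(y)
    centred = y - y.mean()
    xi = rng.standard_normal((n_boot, n))
    # bootstrap replicates of sqrt(n) * (bootstrap mean - sample mean)
    reps = (xi @ centred) / np.sqrt(n)
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return y.mean() - hi / np.sqrt(n), y.mean() - lo / np.sqrt(n)

y = rng.exponential(scale=2.0, size=400)  # true mean is 2.0
low, high = multiplier_bootstrap_ci(y)
```

Each replicate reuses the same data and only redraws the cheap multipliers, which is what makes the scheme fast relative to refitting on resampled rows.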

2.
Specification tests for conditional heteroskedasticity that are derived under the assumption that the density of the innovation is Gaussian may not be powerful in light of the recent empirical results that the density is not Gaussian. We obtain specification tests for conditional heteroskedasticity under the assumption that the innovation density is a member of a general family of densities. Our test statistics maximize asymptotic local power and weighted average power criteria for the general family of densities. We establish both first-order and second-order theory for our procedures. Simulations indicate that asymptotic power gains are achievable in finite samples.

3.
The two-parameter gamma model is widely used in reliability, environmental, medical and other areas of statistics. It has a two-dimensional sufficient statistic, and a two-dimensional parameter which can be taken to describe shape and mean. This makes it closely comparable to the normal model, but it differs substantially in that the exact distribution for the minimal sufficient statistic is not available. Some recently developed asymptotic theory is used to derive an approximation for observed levels of significance and confidence intervals for the mean parameter of the model. The approximation is as easy to apply as first-order methods, and substantially more accurate.

4.
We compare Bayesian and sample-theory model specification criteria. For the Bayesian criteria we use the deviance information criterion (DIC) and the cumulative density of the mean squared errors of forecast. For the sample-theory criterion we use the conditional Kolmogorov test (CKT). We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the conditional Kolmogorov test. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains the daily data.

5.
Applied statistical decision theory has wide applications in decision-making fields such as economics, business management and industrial management. In this work, following Pratt et al.’s [Introduction to statistical decision theory. 3rd ed. Cambridge, MA: The MIT Press; 2001] approach, we provide theoretical and practical formulations for calculating the key decision-making indices, the expected value of perfect information (EVPI) and the expected value of sample information (EVSI), when the unknown state is the first-order autoregressive (AR) time-series parameter with a normal prior distribution. A practical procedure is furnished for calculating the decision-making indices. We treat finite and infinite state spaces for linear value functions and quadratic opportunity losses. Interestingly, our investigation of the distribution of the mean of the posterior distribution leads us to a general form for the corresponding statistic and its distribution, discussed by Reeves [The distribution of the maximum likelihood estimator of the parameter in the first-order AR series. Biometrika. 1972;59:387–394], Moschopoulos and Canada [The distribution function of a linear combination of chi-squares. Comput Math Appl. 1984;10:383–386], and Roychowdhury and Bhattacharya [On the performance of estimators of parameter in AR model of order one and optimal prediction under asymmetric loss. Model Assist Stat Appl. 2008;3:225–232].
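A toy Monte Carlo version of the EVPI calculation, for two acts with linear value functions and a normal prior on the unknown state (all coefficients here are illustrative, not from the paper, which treats the AR parameter case exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(1.0, 2.0, 200_000)  # normal prior on the unknown state

# Two acts with linear value functions in theta (illustrative coefficients)
v1 = 3.0 * theta
v2 = 1.0 * theta + 2.0

# EVPI = E[value under perfect information] - value of the best act under the prior
evpi = np.maximum(v1, v2).mean() - max(v1.mean(), v2.mean())
```

For this particular setup the two prior expected values tie at the break-even state, so the exact EVPI reduces to a unit normal loss integral, 2·σ·φ(0) ≈ 1.596, which the Monte Carlo estimate should approach.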

6.
Many inference problems lead naturally to a marginal or conditional measure of departure that depends on a nuisance parameter. As a device for first-order elimination of the nuisance parameter, we suggest averaging with respect to an exact or approximate confidence distribution function. It is shown that for many standard problems where an exact answer is available by other methods, the averaging method reproduces the exact answer. Moreover, for the gamma-mean problem, where the exact answer is not explicitly available, the averaging method gives results that agree closely with those obtained from higher-order asymptotic methods. Examples are discussed; detailed asymptotic calculations will be examined elsewhere.

7.
This study examines statistical process control charts used to detect a parameter shift in Poisson integer-valued GARCH (INGARCH) models and zero-inflated Poisson INGARCH models. INGARCH models have a conditional mean structure similar to GARCH models and are well known to be appropriate for analyzing count data that feature overdispersion. Special attention is paid to conditional and general likelihood-ratio-based (CLR and GLR) CUSUM charts and the score-function-based CUSUM (SFCUSUM) chart. The performance of each proposed method is evaluated through a simulation study by calculating its average run length. Our findings show that the proposed methods perform adequately, and that the CLR chart outperforms the GLR chart when the parameter shift is larger. Moreover, the SFCUSUM chart in particular is found to have a lower false alarm rate than the CLR chart.
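For intuition only, here is a log-likelihood-ratio CUSUM for i.i.d. Poisson counts; the paper's charts are built on INGARCH conditional means and score functions, which this sketch does not attempt, and the control limit below is an arbitrary illustrative value:

```python
import numpy as np

def poisson_lr_cusum(x, lam0, lam1):
    """CUSUM of Poisson log-likelihood ratios for a shift lam0 -> lam1:
    S_t = max(0, S_{t-1} + log LR_t), resetting at zero."""
    llr = x * np.log(lam1 / lam0) - (lam1 - lam0)
    s = np.empty(len(x))
    run = 0.0
    for t, inc in enumerate(llr):
        run = max(0.0, run + inc)
        s[t] = run
    return s

rng = np.random.default_rng(1)
# 100 in-control Poisson(3) counts, then a shift to Poisson(5)
x = np.concatenate([rng.poisson(3.0, 100), rng.poisson(5.0, 100)])
stat = poisson_lr_cusum(x, lam0=3.0, lam1=5.0)
h = 5.0  # illustrative control limit
alarm = int(np.argmax(stat > h)) if (stat > h).any() else -1
```

The chart drifts downward while in control (the log-likelihood ratio has negative mean under lam0) and upward after the shift, so the first crossing of `h` should occur shortly after observation 100.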

8.
Recent small sample studies of estimators for the shape parameter a of the negative binomial distribution (NBD) tend to indicate that the choice of estimator can be reduced to a choice between the method of moments estimator, maximum likelihood estimator (MLE), maximum quasi-likelihood estimator and the conditional likelihood estimator (CLE). In this paper the results of a comprehensive simulation study are reported to assist with the choice from these four estimators. The study includes a traditional procedure for assessing estimators for the shape parameter of the NBD and in addition introduces an alternative assessment procedure. Based on the traditional approach the CLE is considered to perform the best overall for the range of parameter values and sample sizes considered. The alternative assessment procedure indicates that the MLE is the preferred estimator.

9.
Inference for a scalar interest parameter in the presence of nuisance parameters is considered in terms of the conditional maximum-likelihood estimator developed by Cox and Reid (1987). Parameter orthogonality is assumed throughout. The estimator is analyzed by means of stochastic asymptotic expansions in three cases: a scalar nuisance parameter, m nuisance parameters from m independent samples, and a vector nuisance parameter. In each case, the expansion for the conditional maximum-likelihood estimator is compared with that for the usual maximum-likelihood estimator. The means and variances are also compared. In each of the cases, the bias of the conditional maximum-likelihood estimator is unaffected by the nuisance parameter to first order. This is not so for the maximum-likelihood estimator. The assumption of parameter orthogonality is crucial in attaining this result. Regardless of parametrization, the difference in the two estimators is first-order and is deterministic to this order.

10.
Adaptive Lasso quantile regression for panel data
How to automatically select important explanatory variables while estimating the parameters has long been one of the central questions in panel-data quantile regression. We construct a Bayesian hierarchical quantile regression model with multiple random effects and, assuming that the fixed-effect coefficients follow a new conditional Laplace prior, derive a Gibbs sampling algorithm for parameter estimation. Since coefficients of explanatory variables of different importance should be shrunk to different degrees, the constructed prior is adaptive and can accurately and automatically select the important explanatory variables, while the designed slice Gibbs sampler quickly and effectively computes the posterior-mean estimates of all model parameters. Simulation results show that the new method outperforms the common methods in the existing literature in both parameter estimation accuracy and variable-selection accuracy. An application to panel data on several macroeconomic indicators across Chinese regions demonstrates the new method's ability to estimate parameters and select variables.
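The building block of any quantile regression, Bayesian or not, is the check loss; minimizing it over a constant recovers the sample quantile, which is exactly what the asymmetric-Laplace working likelihood behind Gibbs samplers of this kind encodes. A minimal illustration (not the paper's hierarchical panel model):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

rng = np.random.default_rng(3)
y = rng.normal(size=2000)
tau = 0.25

# The minimizer of the summed check loss over a constant is the tau-quantile
grid = np.linspace(-3.0, 3.0, 1201)
risk = np.array([check_loss(y - q, tau).sum() for q in grid])
q_hat = grid[risk.argmin()]
```

The grid minimizer `q_hat` agrees with `np.quantile(y, tau)` up to the grid spacing, and the same loss, plus a shrinkage prior on the coefficients, is what the adaptive-Lasso variants penalize.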

11.
This paper is concerned with the problem of obtaining conditional confidence intervals for the parameters and reliability of the inverse Weibull distribution based on censored generalized order statistics, which are more general than the existing results in the literature. The coverage rate and mean length of the intervals have been obtained for different values of the shape parameter via Monte Carlo simulation. Finally, a numerical example is given to illustrate the inferential methods developed in this paper.

12.
Successive tests of hypotheses, as exemplified by an analysis of variance table, impose a set-theoretic structure on the parameter space and yet allow much arbitrariness in the definition of nuisance parameters. Two major types of statistical model, the exponential and the transformation model, are shown by basic theory to have well-defined conditional testing procedures. The two types of testing procedure are then shown to have opposite forms of set-theoretic structure on the sample space, and to differ sharply from the commonly used deviance or likelihood-drop methods. The two types of model have the normal linear model as their intersection, and there the two opposite forms of testing procedure manage to coincide by product-space structure and independence. Details of the two types of testing procedure are discussed, related to the arbitrariness in the definition of nuisance parameters, and organized to provide a general-case pattern for the development of conditional procedures as an alternative to the default likelihood-drop methods.

13.
This paper is concerned with testing for the presence of ARCH within the ARCH-M model as the alternative hypothesis. Standard testing procedures are inapplicable since a nuisance parameter is unidentified under the null hypothesis. Nonetheless, diagnostic tests for the presence of the conditional variance are very important, since any misspecification in the conditional variance equation leads to inconsistent estimates of the conditional mean parameters. To resolve the problem of the unidentified nuisance parameter, we apply Davies' approach and investigate its finite-sample performance through a Monte Carlo study.

14.
Sample size calculations in clinical trials need to be based on sound parameter assumptions. Wrong parameter choices may lead to sample sizes that are too small or too large and can have severe ethical and economic consequences. Adaptive group sequential study designs are one solution for dealing with planning uncertainties. Here, the sample size can be updated during an ongoing trial based on the observed interim effect. However, the observed interim effect is a random variable and thus does not necessarily correspond to the true effect. One way of dealing with the uncertainty related to this random variable is to include resampling elements in the recalculation strategy. In this paper, we focus on clinical trials with a normally distributed endpoint. We consider resampling of the observed interim test statistic and apply this principle to several established sample size recalculation approaches. The resulting recalculation rules are smoother than the original ones, and thus the variability in sample size is lower. In particular, we found that some resampling approaches mimic a group sequential design. In general, incorporating resampling of the interim test statistic into existing sample size recalculation rules results in a substantial performance improvement with respect to a recently published conditional performance score.
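A stylized one-arm, known-variance z-test version of the idea: a naive rule sizes the trial as if the interim z-statistic reflected the true standardized effect, and a smoothed rule averages that naive rule over resampled interim statistics. Every name, cap, and the N(0, 1) resampling noise below are illustrative assumptions, not the paper's designs:

```python
from statistics import NormalDist
import random

Z = NormalDist()

def n_naive(z_interim, n1, alpha=0.025, power=0.8, n_max=1000):
    """Recalculate n as if z_interim / sqrt(n1) were the true effect."""
    delta_hat = max(z_interim / n1 ** 0.5, 1e-3)  # clip at a tiny effect
    n = ((Z.inv_cdf(1 - alpha) + Z.inv_cdf(power)) / delta_hat) ** 2
    return min(int(n) + 1, n_max)

def n_resampled(z_interim, n1, n_boot=2000, seed=7, **kw):
    """Smoothed rule: average the naive rule over z* ~ N(z_interim, 1),
    acknowledging that the interim statistic is itself noisy."""
    rng = random.Random(seed)
    total = sum(n_naive(rng.gauss(z_interim, 1.0), n1, **kw)
                for _ in range(n_boot))
    return round(total / n_boot)
```

The resampled rule varies more slowly in `z_interim` than the naive rule, which is the smoothing effect the paper exploits; the price is the extra averaging step at the interim analysis.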

15.
The conditional density offers the most informative summary of the relationship between explanatory and response variables. We need to estimate it in place of the simple conditional mean when its shape is not well behaved. A motivation for estimating conditional densities that is specific to the circular setting lies in the fact that a natural alternative, quantile regression, could be considered problematic because circular quantiles are not rotationally equivariant. We treat conditional density estimation as a local polynomial fitting problem, as proposed by Fan et al. [Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika. 1996;83:189–206] in the Euclidean setting, and discuss a class of estimators for the cases when the conditioning variable is either circular or linear. Asymptotic properties for some members of the proposed class are derived. The effectiveness of the methods for finite sample sizes is illustrated by simulation experiments and an example using real data.
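A local-constant ("double-kernel") member of this family for a linear conditioning variable, i.e. a kernel-weighted-in-x mixture of kernels in y, can be sketched as follows (Euclidean case only; the circular setting would replace the Gaussians with circular kernels, and all bandwidths here are arbitrary choices):

```python
import numpy as np

def gauss_k(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def cond_density(x0, y_grid, X, Y, hx=0.3, hy=0.3):
    """Local-constant double-kernel estimate of f(y | X = x0):
    weight each Y_i by the kernel proximity of X_i to x0."""
    w = gauss_k(X - x0, hx)
    w = w / w.sum()
    return np.array([(w * gauss_k(Y - y, hy)).sum() for y in y_grid])

rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, 3000)
Y = X + rng.normal(scale=0.5, size=3000)   # Y | X = x is N(x, 0.25)
y_grid = np.linspace(-2.0, 2.0, 201)
f_hat = cond_density(0.0, y_grid, X, Y)
mode = y_grid[f_hat.argmax()]
mass = f_hat.sum() * (y_grid[1] - y_grid[0])  # Riemann sum of the estimate
```

For each fixed y this is a ratio of kernel smooths, so the estimate integrates to roughly one over y and peaks near the true conditional mode (here 0 at x0 = 0).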

16.
This paper is concerned with testing for the presence of ARCH within the ARCH-M model as the alternative hypothesis. Standard testing procedures are inapplicable since a nuisance parameter is unidentified under the null hypothesis. Nonetheless, diagnostic tests for the presence of the conditional variance are very important, since any misspecification in the conditional variance equation leads to inconsistent estimates of the conditional mean parameters. To resolve the problem of the unidentified nuisance parameter, we apply Davies' approach and investigate its finite-sample performance through a Monte Carlo study.

17.
The article focuses on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, the imputed failure times, which are conditional quantiles, are obtained within the censoring intervals in which some exact failure times lie. The algorithm is viable and feasible for parameter estimation with general distributions, for instance the Weibull distribution, which has a closed-form moment estimator after log-transformation. Furthermore, the interval-censored sample is a special case of the new censored sample, and the conditional imputation algorithm can also be used to deal with interval-censored failure data. Comparing the interval-censored data and the new censored data under the imputation algorithm in terms of estimation bias, we find that the new censored data perform better than the interval-censored data.

18.
Coefficient estimation in linear regression models with missing data is routinely carried out in the mean regression framework. However, mean regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data, and to use parametrically estimated propensity scores to weight the check functions that define a regression parameter. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, for selecting the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.

19.
The Zero-Inflated Power Series Distribution (ZIPSD) contains two parameters. The first parameter indicates the inflation of zeros and the other is the parameter of the power series distribution. We provide three asymptotic tests for the parameter of the power series distribution, using an unconditional (standard) likelihood approach, a conditional likelihood approach, and a test based on the sample mean, respectively. The performance of these three tests is studied for the Zero-Inflated Poisson Distribution (ZIPD). Asymptotic confidence intervals for the parameter are also provided.
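For concreteness, the zero-inflated Poisson special case admits simple closed-form moment estimators; this is a sketch for intuition only, not the likelihood- and mean-based asymptotic tests the paper develops:

```python
import numpy as np

def zip_sample(pi0, lam, n, rng):
    """Zero-inflated Poisson: an extra zero w.p. pi0, else Poisson(lam)."""
    inflate = rng.random(n) < pi0
    return np.where(inflate, 0, rng.poisson(lam, n))

def zip_moment_estimates(x):
    """From m = E[X] = (1 - pi)*lam and Var[X] = m*(1 + pi*lam),
    solve: lam = m + s2/m - 1 and pi = 1 - m/lam."""
    m, s2 = x.mean(), x.var()
    lam = m + s2 / m - 1.0
    return 1.0 - m / lam, lam

rng = np.random.default_rng(11)
x = zip_sample(0.3, 4.0, 20_000, rng)
pi_hat, lam_hat = zip_moment_estimates(x)
```

The overdispersion ratio s²/m exceeds 1 exactly by pi·lam, which is what makes the two parameters separately recoverable from the first two moments.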

20.
This paper develops a new class of option price models and applies it to options on the Australian S&P200 Index. The class of models generalizes the traditional Black-Scholes framework by accommodating time-varying conditional volatility, skewness and excess kurtosis in the underlying returns process. An important property of these more general pricing models is that the computational requirements are essentially the same as those associated with the Black-Scholes model, with both methods being based on one-dimensional integrals. Bayesian inferential methods are used to evaluate a range of models nested in the general framework, using observed market option prices. The evaluation is based on posterior parameter distributions, as well as posterior model probabilities. Various fit and predictive measures, plus implied volatility graphs, are also used to rank the alternative models. The empirical results provide evidence that time-varying volatility, leptokurtosis and a small degree of negative skewness are priced in Australian stock market options.
