20 similar documents found (search time: 7 ms)
1.
With reference to the problem of interval estimation of a population mean under model uncertainty, we compare approaches based on robust and empirical statistics via expected lengths of the associated confidence intervals. An explicit expression for confidence intervals arising from a general class of robust statistics is worked out and this is employed to obtain a higher order asymptotic formula for the expected lengths of such intervals. Comparative theoretical results, as well as a simulation study, are then presented.
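For intuition, here is a small Monte Carlo sketch, not the paper's higher-order asymptotic formula, comparing the average lengths of a classical t-type interval and an interval built from a Huber M-estimator under a contaminated normal model. The tuning constant c = 1.345 and the 10% contamination scheme are illustrative assumptions.

```python
import numpy as np

def huber_ci_halfwidth(x, c=1.345, n_iter=50, z=1.96):
    """95% CI half-width from a Huber M-estimate of location with MAD scale."""
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu))           # robust MAD scale
    for _ in range(n_iter):                          # IRLS for the location
        r = (x - mu) / s
        w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    r = (x - mu) / s
    psi = np.clip(r, -c, c)
    avar = np.mean(psi**2) / np.mean(np.abs(r) <= c) ** 2   # E[psi^2]/E[psi']^2
    return z * s * np.sqrt(avar / len(x))

rng = np.random.default_rng(0)
n, lengths_t, lengths_h = 50, [], []
for _ in range(2000):
    x = rng.normal(0.0, 1.0, n)
    contam = rng.random(n) < 0.1                     # 10% gross-error contamination
    x[contam] = rng.normal(0.0, 5.0, contam.sum())
    lengths_t.append(2 * 1.96 * x.std(ddof=1) / np.sqrt(n))
    lengths_h.append(2 * huber_ci_halfwidth(x))
print("avg length, t-type interval    :", np.mean(lengths_t))
print("avg length, Huber-type interval:", np.mean(lengths_h))
```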
2.
Recent research has made clear that missing values in datasets are inevitable. Imputation is one of several methods that have been introduced to overcome this issue: imputation techniques address missing data by permanently filling in the missing values with reasonable estimates. The benefits of these procedures generally outweigh their drawbacks, but their behaviour is often opaque, which fosters mistrust in the resulting analyses. One approach to evaluating the outcome of an imputation process is to estimate the uncertainty in the imputed data. Nonparametric methods are appropriate for estimating this uncertainty when the data do not follow any particular distribution. This paper presents a nonparametric method, based on the Wilcoxon test statistic, for estimating and testing the significance of imputation uncertainty; it can be used to assess the precision of the values produced by an imputation method. The procedure can be used to judge whether imputation is feasible for a given dataset, and to evaluate the relative performance of different imputation methods applied to the same dataset. The proposed approach is compared with other nonparametric resampling methods, including the bootstrap and the jackknife, for estimating uncertainty in data imputed by the Bayesian bootstrap imputation method. The ideas supporting the proposed method are explained in detail, and a simulation study illustrates how the approach is employed in practice.
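As a rough illustration of the workflow (the paper's exact Wilcoxon-based statistic and testing scheme may differ), the sketch below imputes MCAR-missing values with the Bayesian bootstrap and then applies a Wilcoxon rank-sum comparison of observed versus imputed values; the data and the `bayesian_bootstrap_impute` helper are hypothetical.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
x = rng.gamma(2.0, 3.0, size=200)          # complete data (skewed, non-normal)
miss = rng.random(200) < 0.25              # 25% missing completely at random
obs = x[~miss]

def bayesian_bootstrap_impute(obs, n_missing, rng):
    """Draw imputations from the observed values with Dirichlet(1,...,1) weights."""
    w = rng.dirichlet(np.ones(len(obs)))
    return rng.choice(obs, size=n_missing, replace=True, p=w)

imputed = bayesian_bootstrap_impute(obs, miss.sum(), rng)
stat, pval = ranksums(obs, imputed)        # Wilcoxon rank-sum comparison
print(f"rank-sum statistic = {stat:.3f}, p-value = {pval:.3f}")
# A large p-value suggests the imputed values are distributionally consistent
# with the observed ones; repeating over many imputations gives a Monte Carlo
# picture of the imputation uncertainty.
```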
3.
Combining information from multiple surveys by using regression for efficient small domain estimation  Cited by: 1 (self-citations: 0, cited by others: 1)
Takis Merkouris 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2010,72(1):27-48
In sample surveys of finite populations, subpopulations for which the sample size is too small for estimation of adequate precision are referred to as small domains. Demand for small domain estimates has been growing in recent years among users of survey data. We explore the possibility of enhancing the precision of domain estimators by combining comparable information collected in multiple surveys of the same population. For this, we propose a regression method of estimation that is essentially an extended calibration procedure whereby comparable domain estimates from the various surveys are calibrated to each other. We show through analytic results and an empirical study that this method may greatly improve the precision of domain estimators for the variables that are common to these surveys, as these estimators make effective use of increased sample size for the common survey items. The design-based direct estimators proposed involve only domain-specific data on the variables of interest. This is in contrast with small domain (mostly small area) indirect estimators, based on a single survey, which incorporate through modelling data that are external to the targeted small domains. The approach proposed is also highly effective in handling the closely related problem of estimation for rare population characteristics.
4.
Paul Blackwell 《Statistics and Computing》1994,4(3):213-218
This paper describes a conditional simulation technique which can be used to estimate probabilities associated with the distribution of the maximum of a real-valued process which can be written in the form of a moving average. The class of processes to which the technique applies includes non-stationary and spatial processes, and autoregressive processes. The technique is shown to achieve a considerable variance reduction compared with the obvious simulation-based estimator, particularly for estimating small upper-tail probabilities.
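For reference, here is the "obvious" crude Monte Carlo estimator that the paper's conditional-simulation technique improves upon, sketched for the upper-tail probability P(max_t X_t > u) of a Gaussian MA(2) process. The coefficients, threshold, and sample-path length are illustrative assumptions; the conditional estimator itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.array([1.0, 0.6, 0.3])   # MA coefficients (b0, b1, b2)
T, u, n_rep = 100, 3.5, 20000

hits = 0
for _ in range(n_rep):
    e = rng.normal(size=T + len(b) - 1)
    x = np.convolve(e, b, mode="valid")    # MA(2) sample path of length T
    hits += x.max() > u
p_hat = hits / n_rep
se = np.sqrt(p_hat * (1 - p_hat) / n_rep)
print(f"P(max > {u}) ~= {p_hat:.5f} (MC s.e. {se:.5f})")
# For small tail probabilities this naive estimator has a large relative
# standard error, which is the motivation for the variance-reduced method.
```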
5.
The theoretical price of a financial option is given by the expectation of its discounted expiry-time payoff. The computation of this expectation depends on the density of the value of the underlying instrument at expiry, which in turn depends both on the parametric model assumed for the behaviour of the underlying and on the values of parameters within the model, such as the volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function against the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values to be known, the inherent ignorance is acknowledged and dealt with in a mathematically logical manner that uses all information from past and current observations to generate and update option prices. Moreover, point estimates of the parameters are unnecessary. We use this method to price a European call option on a share index.
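A minimal sketch of the averaging idea for a European call: instead of a point estimate of volatility, Monte Carlo prices are averaged over (mock) posterior draws of sigma, and over two candidate models weighted by assumed posterior model probabilities. The posterior draws, the weights, and the GBM-only setup are placeholders, not the paper's actual posteriors or model set.

```python
import numpy as np

rng = np.random.default_rng(3)
S0, K, r, T = 100.0, 105.0, 0.03, 0.5     # spot, strike, rate, maturity

def mc_call_price_gbm(sigma_draws, n_paths=20000):
    """Average GBM call price over posterior draws of sigma."""
    prices = []
    for sig in sigma_draws:
        z = rng.normal(size=n_paths)
        ST = S0 * np.exp((r - 0.5 * sig**2) * T + sig * np.sqrt(T) * z)
        prices.append(np.exp(-r * T) * np.maximum(ST - K, 0).mean())
    return np.mean(prices)

# mock posterior draws of sigma under two candidate models
sigma_model1 = rng.normal(0.20, 0.02, size=50)   # e.g. GBM, diffuse prior
sigma_model2 = rng.normal(0.25, 0.03, size=50)   # e.g. an alternative model
w1, w2 = 0.7, 0.3                                # assumed posterior model probabilities

price = w1 * mc_call_price_gbm(sigma_model1) + w2 * mc_call_price_gbm(sigma_model2)
print(f"model-averaged call price ~= {price:.3f}")
```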
6.
《Journal of Statistical Computation and Simulation》2012,82(9):1931-1945
Markov chain Monte Carlo methods, in particular the Gibbs sampler, are widely used algorithms in both applied and theoretical work in the classical and Bayesian paradigms. However, these algorithms are often computationally intensive. Samawi et al. [Steady-state ranked Gibbs sampler. J. Stat. Comput. Simul. 2012;82(8), 1223–1238. doi:10.1080/00949655.2011.575378] demonstrate through theory and simulation that the dependent steady-state Gibbs sampler is more efficient and accurate for model parameter estimation than the original Gibbs sampler. This paper proposes the independent steady-state Gibbs sampler (ISSGS) to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in multidimensional problems.
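For context, a textbook Gibbs sampler for a bivariate normal with correlation rho, alternating draws from the full conditionals X | Y ~ N(rho*y, 1-rho^2) and Y | X ~ N(rho*x, 1-rho^2). The steady-state variants discussed in the paper modify how draws are generated and re-used; that modification is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n_iter, burn = 0.8, 10000, 1000
x = y = 0.0
samples = np.empty((n_iter, 2))
for i in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # draw X | Y = y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # draw Y | X = x
    samples[i] = (x, y)
post = samples[burn:]
print("sample correlation:", np.corrcoef(post.T)[0, 1])  # should be near rho
```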
7.
8.
David Bauder Rostyslav Bodnar Taras Bodnar Wolfgang Schmid 《Scandinavian Journal of Statistics》2019,46(3):802-830
In this paper, we consider the estimation of the three determining parameters of the efficient frontier from a Bayesian perspective: the expected return and the variance of the global minimum variance portfolio, and the slope parameter. Their posterior distribution is derived by assigning the diffuse and the conjugate priors to the mean vector and the covariance matrix of the asset returns, and is presented in terms of a stochastic representation. Furthermore, Bayesian estimates together with standard uncertainties for all three parameters are provided, and their asymptotic distributions are established. All of these findings are applied to real data, consisting of the returns on assets included in the S&P 500. The empirical properties of the efficient frontier are then examined in detail.
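A plug-in sketch of the three frontier parameters from sample moments: with mean vector mu and covariance S, the global minimum variance (GMV) portfolio has return R = 1'S^{-1}mu / 1'S^{-1}1 and variance V = 1 / 1'S^{-1}1, and the frontier slope is s = mu'Q mu with Q = S^{-1} - S^{-1}11'S^{-1} / 1'S^{-1}1. The paper derives the full Bayesian posteriors of these quantities; this only computes sample-based point estimates on simulated returns.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, k = 250, 5
true_mu = rng.normal(0.001, 0.0005, k)
A = rng.normal(size=(k, k))
true_Sigma = 0.0001 * (A @ A.T + k * np.eye(k))
X = rng.multivariate_normal(true_mu, true_Sigma, size=n_obs)

mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)
Sinv = np.linalg.inv(S)
ones = np.ones(k)
a = ones @ Sinv @ ones
R_gmv = (ones @ Sinv @ mu) / a           # GMV expected return
V_gmv = 1.0 / a                          # GMV variance
Q = Sinv - np.outer(Sinv @ ones, ones @ Sinv) / a
slope = mu @ Q @ mu                      # frontier slope parameter
print(f"R_gmv={R_gmv:.6f}, V_gmv={V_gmv:.8f}, slope={slope:.6f}")
```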
9.
Steven G. From 《Communications in Statistics - Simulation and Computation》2013,42(4):1073-1084
This paper is concerned with estimating θ, the mean of an exponential distribution under a single outlier exchangeable model. It is assumed that the single outlying observation is also exponential with mean θ/α, where 0 < α < 1. The estimators proposed are weighted averages of the order statistics. The formulas for the weights minimizing the mean square error are presented. These weights are calculated for certain combinations of the sample size n and of α. It is found that the optimal weights very nearly have a certain form. The mean square errors of a simplified estimator are compared to those of Joshi (1972, 1988) and of Chikkagoudar and Kunchur (1980). A modification of Joshi's iterative procedure is suggested.
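An illustrative Monte Carlo comparison, not the paper's optimal-weight derivation: estimate theta from n-1 Exp(theta) observations plus one outlier from Exp(theta/alpha), comparing the plain sample mean with a crude weighted-order-statistics estimator that down-weights the largest observation. The down-weight 0.5 is an arbitrary placeholder; the paper derives the MSE-optimal weights.

```python
import numpy as np

rng = np.random.default_rng(6)
theta, alpha, n, n_rep = 1.0, 0.3, 20, 50000
err_mean, err_w = [], []
for _ in range(n_rep):
    x = rng.exponential(theta, n)
    x[0] = rng.exponential(theta / alpha)     # single outlier, mean theta/alpha
    xs = np.sort(x)
    w = np.ones(n)
    w[-1] = 0.5                               # down-weight the maximum
    est_w = np.sum(w * xs) / np.sum(w)        # weighted average of order statistics
    err_mean.append((x.mean() - theta) ** 2)
    err_w.append((est_w - theta) ** 2)
print("MSE sample mean     :", np.mean(err_mean))
print("MSE weighted version:", np.mean(err_w))
```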
10.
Zhongyang Sun 《Communications in Statistics - Theory and Methods》2013,42(18):4511-4527
In this paper, we investigate some ruin problems for risk models that contain uncertainties in both the claim frequency and the claim size distribution. The problems naturally lead to the evaluation of ruin probabilities under the so-called G-expectation framework. We assume that the risk process is described by a class of G-compound Poisson process, a special case of the G-Lévy process. By using the exponential martingale approach, we obtain upper bounds for the two-sided ruin probability as well as for the ruin probability involving investment. Furthermore, we derive the optimal investment strategy under the criterion of minimizing this upper bound. Finally, we conclude that the upper bound in the case with investment is less than or equal to that in the case without investment.
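For orientation, here is the classical (non-G-expectation) analogue of the exponential-martingale bound: in the Cramér–Lundberg model with claim rate lam, premium rate c, and claim m.g.f. M(r), the adjustment coefficient R solves lam*(M(R) - 1) = c*R, and psi(u) <= exp(-R*u). The paper extends this kind of bound to G-compound Poisson risk processes with investment; that extension is not reproduced here, and the parameter values below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

lam, c, mu = 1.0, 1.5, 1.0          # claim rate, premium rate, mean claim size
M = lambda r: 1.0 / (1.0 - mu * r)  # m.g.f. of Exp(mean mu) claims, valid for r < 1/mu

f = lambda r: lam * (M(r) - 1.0) - c * r
R = brentq(f, 1e-9, 1.0 / mu - 1e-9)     # adjustment coefficient
# for exponential claims, R = 1/mu - lam/c in closed form:
print(f"R = {R:.6f} (closed form {1/mu - lam/c:.6f})")
for u in (1.0, 5.0, 10.0):
    print(f"u={u:>4}: Lundberg bound exp(-R*u) = {np.exp(-R * u):.5f}")
```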
11.
A key challenge in rainfall estimation is spatio-temporal variability. Weather radars are used to estimate precipitation with high spatial and temporal resolution, but because of the inherent errors in radar estimates, spatial interpolation has often been employed to calibrate them. Kriging is a simple and popular spatial interpolation method, but it has several shortcomings; in particular, the prediction is quite unstable and often fails when the sample size is small. In this paper, we propose a flexible and efficient spatial interpolator for radar rainfall estimation with several advantages over kriging. The method is illustrated using a real-world data set.
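A compact ordinary-kriging baseline of the kind the paper improves on, using an assumed exponential covariance C(h) = s2 * exp(-h/range_). In practice the covariance parameters would be fitted to an empirical variogram; here they are fixed illustrative values, and the gauge data are simulated.

```python
import numpy as np

def ordinary_kriging(coords, values, target, s2=1.0, range_=2.0):
    """Predict at `target` from observations (coords, values)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = s2 * np.exp(-d / range_)                  # covariances between gauges
    c0 = s2 * np.exp(-np.linalg.norm(coords - target, axis=1) / range_)
    # augmented system enforcing weights that sum to one (unbiasedness);
    # note this system can be ill-conditioned for small n, the instability
    # the abstract refers to
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = A[n, :n] = 1.0
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]                 # kriging weights
    return w @ values

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, size=(15, 2))         # 15 rain-gauge locations
values = 5 + np.sin(coords[:, 0]) + rng.normal(0, 0.2, 15)
print("predicted rainfall at (5,5):", ordinary_kriging(coords, values, np.array([5.0, 5.0])))
```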
12.
There are various methods to estimate the parameters in the binormal model for the ROC curve. In this paper, we propose a conceptually simple and computationally feasible Bayesian estimation method using a rank-based likelihood. Posterior consistency is also established. We compare the new method with other estimation methods and conclude that our estimator generally performs better than its competitors.
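Under the binormal model the ROC curve has the closed form ROC(t) = Phi(a + b * Phi^{-1}(t)) with AUC = Phi(a / sqrt(1 + b^2)), where a is the standardized separation of the two classes and b the ratio of their standard deviations. A quick numeric check with illustrative parameter values, not estimates from the paper:

```python
import numpy as np
from scipy.stats import norm

a, b = 1.2, 0.9                      # binormal intercept and slope
t = np.linspace(1e-6, 1 - 1e-6, 1001)
roc = norm.cdf(a + b * norm.ppf(t))  # ROC curve values
auc_closed = norm.cdf(a / np.sqrt(1 + b**2))
auc_numeric = np.sum((roc[1:] + roc[:-1]) / 2 * np.diff(t))  # trapezoid rule
print(f"AUC closed form {auc_closed:.4f} vs numeric {auc_numeric:.4f}")
```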
13.
In this study, we evaluate several forms of both Akaike-type and Information Complexity (ICOMP)-type information criteria, in the context of selecting an optimal subset least squares ratio (LSR) regression model. Our simulation studies are designed to mimic many characteristics present in real data – heavy tails, multicollinearity, redundant variables, and completely unnecessary variables. Our findings are that LSR in conjunction with one of the ICOMP criteria is very good at selecting the true model. Finally, we apply these methods to the familiar body fat data set.
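A small all-subsets selection sketch using OLS and AIC as the scoring criterion, to show the generic workflow only: the paper instead scores least squares ratio (LSR) fits with ICOMP-type criteria, which penalize a covariance-complexity measure rather than the parameter count. The simulated design mimics two features the study mentions (multicollinearity and heavy tails).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)
n, p = 100, 6
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=n)     # induce multicollinearity
beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0]) # only x0 and x2 matter
y = X @ beta + rng.standard_t(df=4, size=n)      # heavy-tailed errors

best = (np.inf, None)
for k in range(1, p + 1):
    for idx in combinations(range(p), k):
        Xs = X[:, idx]
        b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ b) ** 2)
        aic = n * np.log(rss / n) + 2 * (k + 1)  # +1 for the error variance
        if aic < best[0]:
            best = (aic, idx)
print("AIC-selected subset:", best[1])           # ideally (0, 2)
```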
14.
Durbin's (1959) efficient method for the estimation of univariate moving average models is generalized to the vector case. Strong consistency and asymptotic normality of the estimator are proved. A simulation experiment is performed to illustrate the behaviour of the method in finite samples.
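A sketch of the univariate version of Durbin's two-step idea: (i) fit a long autoregression by least squares to recover innovation estimates, (ii) regress y_t on the lagged fitted innovations to estimate the MA coefficients. The vector generalization in the paper follows the same pattern with matrix coefficients; the MA(1) setup and AR order below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
theta, T, p = 0.6, 2000, 20          # true MA(1) coefficient, sample size, AR order
e = rng.normal(size=T + 1)
y = e[1:] + theta * e[:-1]           # simulate y_t = e_t + theta * e_{t-1}

# step 1: long AR(p) by OLS; residuals approximate the innovations
Y = y[p:]
X = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
ehat = Y - X @ phi

# step 2: regress y_t on the lagged residual to estimate theta
y2, x2 = Y[1:], ehat[:-1]
theta_hat = (x2 @ y2) / (x2 @ x2)
print(f"theta_hat = {theta_hat:.3f} (true {theta})")
```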
15.
Sven Schreiber 《Econometric Reviews》2019,38(3):279-300
The topic of this article is the estimation uncertainty of the Stock–Watson and Gonzalo–Granger permanent-transitory decompositions in the framework of the co-integrated vector autoregression. We suggest an approach to construct the confidence interval of the transitory component estimate in a given period (e.g., the latest observation) by conditioning on the observed data in that period. To calculate asymptotically valid confidence intervals, we use the delta method and two bootstrap variants. As an illustration, we analyze the uncertainty of (U.S.) output gap estimates in a system of output, consumption, and investment.
16.
For decision purposes, one of the most common statistical applications is the comparison of two or more objects or characteristics. When it is not possible to compare all the objects at once, or when the number of objects under study is large and the differences between them become small, a useful approach is to compare them in a pairwise manner. Because of its practical nature, paired comparison techniques are used in numerous fields. Many Bayesian statisticians have focused their attention on this practical and usable technique and have successfully carried out Bayesian analyses of many paired comparison models. The current study presents an analysis of the amended Davidson model (ADM), extended by incorporating an order-effect parameter. For this purpose, both informative and non-informative priors are used. The model is studied for the case of four treatments compared pairwise.
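For concreteness, a frequentist MLE sketch of a Davidson-type paired-comparison model with ties and a multiplicative order effect (in the Davidson–Beaver style); the paper's amended Davidson model and its Bayesian analysis with informative and non-informative priors may parameterize the order effect differently. The toy counts and the three-treatment setup are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# counts indexed [first, second] over ordered presentations of pairs
win_first  = np.array([[0, 7, 9], [4, 0, 6], [3, 5, 0]])  # first item wins
win_second = np.array([[0, 3, 2], [5, 0, 4], [6, 3, 0]])  # second item wins
ties       = np.array([[0, 2, 1], [2, 0, 3], [1, 3, 0]])  # ties

def neg_loglik(params):
    pi = np.exp(np.append(params[:2], 0.0))           # worths, pi_3 fixed at 1
    nu, gamma = np.exp(params[2]), np.exp(params[3])  # tie and order-effect parameters
    ll = 0.0
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            denom = gamma * pi[i] + pi[j] + nu * np.sqrt(gamma * pi[i] * pi[j])
            ll += win_first[i, j]  * np.log(gamma * pi[i] / denom)
            ll += win_second[i, j] * np.log(pi[j] / denom)
            ll += ties[i, j] * np.log(nu * np.sqrt(gamma * pi[i] * pi[j]) / denom)
    return -ll

res = minimize(neg_loglik, np.zeros(4), method="BFGS")
print("worths:", np.exp(np.append(res.x[:2], 0.0)))
print("nu, gamma:", np.exp(res.x[2]), np.exp(res.x[3]))
```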
17.
We examine the issue of asymptotic efficiency of estimation for response adaptive designs of clinical trials, in which the collected data have a dependency structure. We establish the asymptotic lower bound of exponential rates for consistent estimators. Under certain regularity conditions, we show that the maximum likelihood estimator achieves this asymptotic lower bound for response adaptive trials with dichotomous responses. Furthermore, it is shown that the maximum likelihood estimator of the treatment effect is asymptotically efficient in the Bahadur sense for response adaptive clinical trials.
18.
Receiver operating characteristic (ROC) curves are widely used in medical diagnosis, and various methods have been proposed to estimate ROC curve parameters under the binormal model. In this paper, we propose a Bayesian estimation method for continuously distributed data, based on the truth-state runs in the rank-ordered data. Using an ordinal-category data likelihood and following the Metropolis–Hastings (M–H) procedure, we compute the posterior distribution of the binormal parameters as well as the group-boundary parameters. Simulation studies and a real data analysis are conducted to evaluate our Bayesian estimation method.
19.
Conventional multiclass conditional probability estimation methods, such as Fisher's discriminant analysis and logistic regression, often require restrictive distributional model assumptions. In this paper, a model-free estimation method is proposed that estimates the multiclass conditional probability through a series of conditional quantile regression functions. Specifically, the conditional class probability is formulated as a difference of corresponding cumulative distribution functions, which can be converted from the estimated conditional quantile regression functions. The proposed estimation method is also efficient, as its computation cost does not increase exponentially with the number of classes. Theoretical and numerical studies demonstrate that the proposed method is highly competitive with existing competitors, especially when the number of classes is relatively large.
20.
Two discrete-time insurance models are studied in the framework of the cost approach. Since the models are non-deterministic, one deals with decision making under uncertainty. Three different situations are investigated: (1) the underlying processes are stochastic but their probability distributions are given; (2) information concerning the distribution laws is incomplete; (3) nothing is known about the processes under consideration. Mathematical methods useful for establishing the (asymptotically) optimal control are demonstrated in each case. Algorithms for calculating the critical levels are proposed, and numerical results are presented.