Similar Literature
20 similar documents found (search time: 31 ms)
1.
ABSTRACT

We present an adaptive method for the automatic scaling of random-walk Metropolis–Hastings algorithms, which quickly and robustly identifies the scaling factor that yields a specified overall sampler acceptance probability. Our method relies on the use of the Robbins–Monro search process, whose performance is determined by an unknown steplength constant. Based on theoretical considerations we give a simple estimator of this constant for Gaussian proposal distributions. The effectiveness of our method is demonstrated with both simulated and real data examples.
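As a concrete illustration of the idea, the sketch below tunes the proposal scale of a random-walk Metropolis sampler with a Robbins–Monro update toward a target acceptance probability. It is only a minimal sketch: the steplength constant c is a rough default rather than the estimator derived in the paper, and the 0.234 target is the usual rule of thumb.

    import numpy as np

    def adaptive_rwmh(logpost, x0, n_iter=20000, target=0.234, c=1.0, seed=0):
        # Random-walk Metropolis whose proposal scale is adapted by a Robbins-Monro
        # recursion so that the acceptance probability approaches `target`.
        # `c` is the Robbins-Monro steplength constant (a rough default here,
        # not the estimator proposed in the paper).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = logpost(x)
        log_scale = 0.0
        samples = np.empty((n_iter, x.size))
        for i in range(1, n_iter + 1):
            prop = x + np.exp(log_scale) * rng.normal(size=x.size)
            lp_prop = logpost(prop)
            alpha = min(1.0, np.exp(lp_prop - lp))
            if rng.uniform() < alpha:
                x, lp = prop, lp_prop
            # Robbins-Monro step: raise the scale when accepting too often, lower it otherwise
            log_scale += c * (alpha - target) / i
            samples[i - 1] = x
        return samples, np.exp(log_scale)

    # example: 5-dimensional standard normal target
    # samples, scale = adaptive_rwmh(lambda x: -0.5 * np.sum(x**2), np.zeros(5))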

2.
ABSTRACT

A common Bayesian hierarchical model is one in which high-dimensional observed data depend on high-dimensional latent variables that, in turn, depend on relatively few hyperparameters. When the full conditional distribution over the latent variables has a known form, general MCMC sampling need only be performed on the low-dimensional marginal posterior distribution over the hyperparameters. This improves on the popular Gibbs sampler, which computes over the full space. Sampling the marginal posterior over hyperparameters exhibits good scaling of computational cost with data size, particularly when that distribution depends on a low-dimensional sufficient statistic.
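A minimal sketch of the idea, for a hypothetical normal–normal model y_i | theta_i ~ N(theta_i, sigma^2), theta_i | mu, tau ~ N(mu, tau^2) with sigma known and flat priors assumed on (mu, log tau): the latent theta_i integrate out analytically, so a random-walk Metropolis sampler only has to explore the two-dimensional marginal posterior, and each evaluation depends on the data only through a low-dimensional sufficient statistic.

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 1.0
    y = rng.normal(2.0, np.sqrt(sigma**2 + 0.5**2), size=10000)     # synthetic data
    n, ybar, ss = y.size, y.mean(), ((y - y.mean()) ** 2).sum()     # sufficient statistic

    def log_marginal_post(mu, log_tau):
        # log p(mu, tau | y) with all theta_i integrated out; flat priors assumed
        v = sigma**2 + np.exp(log_tau) ** 2          # marginal variance of each y_i
        return -0.5 * n * np.log(v) - 0.5 * (ss + n * (ybar - mu) ** 2) / v

    state = np.array([0.0, 0.0])                     # (mu, log_tau)
    logp = log_marginal_post(*state)
    draws = np.empty((5000, 2))
    for k in range(draws.shape[0]):                  # per-iteration cost does not grow with n
        prop = state + 0.05 * rng.normal(size=2)
        logp_prop = log_marginal_post(*prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            state, logp = prop, logp_prop
        draws[k] = state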

3.
Abstract

In this paper we consider wavelet-based estimation of density derivatives. A multiscale density derivative estimator, constructed from a number of scaling functions, is proposed. Asymptotic theory is developed, including asymptotic expressions for the bias, the variance, and the mean integrated squared error. In addition, asymptotic normality of the proposed estimator is proved. Theoretical and numerical comparisons with the usual kernel-based estimators are also reported.
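For orientation, a generic linear wavelet estimator of the d-th density derivative built from a single scaling function phi at resolution level j takes the form below (textbook notation, with phi assumed d times differentiable; the multiscale construction combining several scaling functions, which is the subject of the paper, is not reproduced here):

    \hat f^{(d)}_j(x) \;=\; \sum_{k \in \mathbb{Z}} \hat\alpha_{j,k}\, \phi^{(d)}_{j,k}(x),
    \qquad
    \hat\alpha_{j,k} \;=\; \frac{1}{n}\sum_{i=1}^{n} \phi_{j,k}(X_i),
    \qquad
    \phi_{j,k}(x) \;=\; 2^{j/2}\,\phi\!\left(2^{j}x - k\right).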

4.
ABSTRACT

In this article we study the approximately unbiased multi-level pseudo maximum likelihood (MPML) estimation method for general multi-level modeling with sampling weights. We conduct a simulation study to determine the effect of various factors on the estimation method. The factors included in this study are the scaling method, the size of clusters, the invariance of selection, the informativeness of selection, the intraclass correlation, and the variability of the standardized weights. The scaling method indicates how the weights are normalized on each level. The invariance of selection indicates whether or not the same selection mechanism is applied across clusters. The informativeness of selection indicates how biased the selection is. We summarize our findings and recommend a multi-stage procedure based on the MPML method that can be used in practical applications.
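As an illustration of what "scaling method" means here, the snippet below applies two normalizations of level-1 sampling weights commonly discussed in the MPML literature (the column names and data are hypothetical, and the exact scaling methods compared in the article may differ): scaling the within-cluster weights to sum to the cluster size, and scaling them to sum to the effective cluster size.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    # hypothetical two-level data set: cluster identifier and raw level-1 weight w
    df = pd.DataFrame({"cluster": np.repeat(np.arange(5), 20),
                       "w": rng.uniform(0.5, 2.0, size=100)})

    def scale_weights(d, method):
        g = d.groupby("cluster")["w"]
        if method == "cluster_size":        # scaled weights sum to the cluster size n_j
            factor = g.transform("count") / g.transform("sum")
        elif method == "effective_size":    # scaled weights sum to the effective cluster size
            factor = g.transform("sum") / g.transform(lambda w: (w ** 2).sum())
        else:
            raise ValueError(method)
        return d["w"] * factor

    df["w_size"] = scale_weights(df, "cluster_size")
    df["w_eff"] = scale_weights(df, "effective_size")
    print(df.groupby("cluster")[["w_size", "w_eff"]].sum().round(2))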

5.
Stochastic Models, 2013, 29(2): 205-227
Abstract

Extremal dependence analysis assesses the tendency of large values of the components of a random vector to occur simultaneously. This kind of dependence information can be qualitatively different from what is given by correlation, which averages over the whole body of the joint distribution; moreover, correlation may be completely inappropriate for heavy-tailed data. We study the extremal dependence measure (EDM), a measure of the tendency of large values of the components of a random vector to occur simultaneously, and show consistency of an estimator of the EDM. We also show asymptotic normality of an idealized estimator in a restricted case of multivariate regular variation where the scaling functions do not have to be estimated.

6.
ABSTRACT

A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser’s directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involves three possible decisions to infer the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein’s theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power, compared to the traditional approach.
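One plausible way to code such a five-decision rule is sketched below for a one-sample t-test; the decision labels and the way the three tests are combined are illustrative assumptions, not necessarily the exact rule proposed in the article.

    import numpy as np
    from scipy import stats

    def five_decision(x, mu0=0.0, alpha=0.05):
        # Combine a two-sided test with two one-sided tests into five possible decisions.
        t, p_two = stats.ttest_1samp(x, mu0)
        p_greater = p_two / 2 if t > 0 else 1 - p_two / 2   # one-sided p-values from the same t statistic
        p_less = 1 - p_greater
        if p_two <= alpha:                                  # two-sided rejection: strong directional claim
            return "mu > mu0" if t > 0 else "mu < mu0"
        if p_greater <= alpha:                              # only a one-sided test rejects: weak claim
            return "mu >= mu0"
        if p_less <= alpha:
            return "mu <= mu0"
        return "no decision"

    # example: print(five_decision(np.random.default_rng(4).normal(0.2, 1.0, 50)))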

7.
Abstract

A new symmetric heavy-tailed distribution, namely the gamma mixture of the generalized error distribution, is defined by scaling the generalized error distribution with a gamma distribution, and its probability density function, k-th moments, skewness, and kurtosis are derived. We also give the Fisher information matrix, moment estimators, and maximum likelihood estimators for the parameters of the gamma mixture of the generalized error distribution. In order to evaluate the effectiveness of the point estimators and the stability of the Fisher information matrix, extensive simulation experiments are carried out for three groups of parameters. Additionally, the new distribution is applied to Apple Inc. stock (AAPL) data and compared with the normal distribution, the F-S skewed standardized t distribution, and the generalized error distribution; under the Akaike information criterion (AIC), the new distribution fits the data better. To a certain extent, our results enrich probability distribution theory and develop scale mixture distributions, providing help and a reference for financial data analysis.

8.
Abstract

Goodness-of-fit testing is addressed in the stratified proportional hazards model for survival data. A test statistic based on within-strata cumulative sums of martingale residuals over covariates is proposed and its asymptotic distribution is derived under the null hypothesis of model adequacy. A Monte Carlo procedure is proposed to approximate the critical value of the test. Simulation studies are conducted to examine finite-sample performance of the proposed statistic.
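For reference, in standard counting-process notation (not taken from the paper), the martingale residual of subject i belonging to stratum s(i) in a stratified proportional hazards model is

    \hat M_i(t) \;=\; N_i(t) \;-\; \int_0^{t} Y_i(u)\, e^{\hat\beta^{\top} Z_i}\, d\hat\Lambda_{0,\,s(i)}(u),

and the proposed statistic cumulates these residuals over covariate values within each stratum.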

9.
ABSTRACT

Factor analysis (FA) is the most commonly used pattern recognition methodology in social and health research. A technique that may help to better retrieve true information from FA is the rotation of the information axes. The main goal is to test the reliability of the results derived through FA and to reveal the best rotation method under various scenarios. Based on the results of the simulations, non-orthogonal rotation produced more repeatable results than orthogonal rotation, and than applying no rotation at all.
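A minimal sketch of the kind of comparison described above, using the factor_analyzer package (the data matrix X is a hypothetical placeholder; in a repeatability study one would refit on resampled data and compare the recovered loading patterns):

    import numpy as np
    from factor_analyzer import FactorAnalyzer   # pip install factor_analyzer

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 12))                # placeholder for a respondents-by-items matrix

    for rotation in (None, "varimax", "promax"):  # no rotation / orthogonal / non-orthogonal (oblique)
        fa = FactorAnalyzer(n_factors=3, rotation=rotation)
        fa.fit(X)
        print(rotation, np.round(fa.loadings_[:3], 2))   # compare the retrieved loadings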

10.
The hierarchically orthogonal functional decomposition of any measurable function η of a random vector X = (X1, …, Xp) consists of decomposing η(X) into a sum of functions of increasing dimension, each depending only on a subvector of X. Even when X1, …, Xp are dependent, this decomposition is unique if the components are hierarchically orthogonal, that is, if two components are orthogonal whenever all the variables involved in one of the summands form a subset of the variables involved in the other. Setting Y = η(X), this decomposition leads to the definition of generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X [Chastaing G, Gamboa F, Prieur C. Generalized Hoeffding–Sobol decomposition for dependent variables – application to sensitivity analysis. Electron J Statist. 2012;6:2420–2448]. In this paper, a numerical method is developed to identify the component functions of the decomposition using the hierarchical orthogonality property. Furthermore, the asymptotic properties of the component estimators are studied, and the generalized sensitivity indices of a toy model are estimated numerically. Lastly, the method is applied to a model arising from a real-world problem.
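In this notation, the decomposition and the associated generalized sensitivity indices can be summarized as follows (a condensed sketch of the framework of Chastaing et al. (2012); see the cited paper for the precise conditions on the joint distribution of X):

    \eta(X) \;=\; \sum_{u \subseteq \{1,\dots,p\}} \eta_u(X_u),
    \qquad
    \mathbb{E}\big[\eta_u(X_u)\,\eta_v(X_v)\big] = 0 \ \text{ whenever } u \subsetneq v,

    S_u \;=\; \frac{\operatorname{Cov}\big(Y,\ \eta_u(X_u)\big)}{\operatorname{Var}(Y)},
    \qquad
    \sum_{u \neq \emptyset} S_u \;=\; 1.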

11.
12.
A Gaussian process (GP) can be thought of as an infinite collection of random variables with the property that any subset, say of dimension n, of these variables has a multivariate normal distribution of dimension n, with mean vector β and covariance matrix Σ [O'Hagan, A., 1994, Kendall's Advanced Theory of Statistics, Vol. 2B, Bayesian Inference (John Wiley & Sons, Inc.)]. The elements of the covariance matrix are routinely specified through the multiplication of a common variance by a correlation function. It is important to use a correlation function that provides a valid (positive definite) covariance matrix. Further, it is well known that the smoothness of a GP is directly related to the specification of its correlation function. Also, from a Bayesian point of view, a prior distribution must be assigned to the unknowns of the model. Therefore, when using a GP to model a phenomenon, the researcher faces two challenges: the need to specify a correlation function and a prior distribution for its parameters. In the literature there are many classes of correlation functions that provide a valid covariance structure, and many suggestions for prior distributions on the parameters involved in these functions. We aim to investigate how sensitive GPs are to the (sometimes arbitrary) choices of their correlation functions. For this, we simulated 25 data sets, each of size 64, over the square [0, 5]×[0, 5] with a specific correlation function and fixed values of the GP's parameters. We then fit different correlation structures to these data, with different prior specifications, and check the performance of the adjusted models using different model comparison criteria.
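A minimal sketch of such a sensitivity check with scikit-learn (maximum marginal likelihood rather than the fully Bayesian treatment with priors described above): simulate a data set of size 64 on a grid over [0, 5]×[0, 5] from one correlation function, then refit with several others and compare.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern

    rng = np.random.default_rng(6)
    g = np.linspace(0.0, 5.0, 8)
    X = np.array([[a, b] for a in g for b in g])            # 8 x 8 = 64 design points

    true_kernel = 1.0 * Matern(length_scale=1.0, nu=1.5)    # the "true" correlation structure
    K = true_kernel(X) + 1e-8 * np.eye(len(X))              # jitter keeps the matrix positive definite
    y = rng.multivariate_normal(np.zeros(len(X)), K)

    for name, k in [("RBF", 1.0 * RBF(length_scale=1.0)),
                    ("Matern nu=1.5", 1.0 * Matern(length_scale=1.0, nu=1.5)),
                    ("Matern nu=2.5", 1.0 * Matern(length_scale=1.0, nu=2.5))]:
        gp = GaussianProcessRegressor(kernel=k, alpha=1e-8, normalize_y=True).fit(X, y)
        print(name, round(gp.log_marginal_likelihood_value_, 2))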

13.
The Buckley–James estimator (BJE) [J. Buckley and I. James, Linear regression with censored data, Biometrika 66 (1979), pp. 429–436] has been extended from right-censored (RC) data to interval-censored (IC) data by Rabinowitz et al. [D. Rabinowitz, A. Tsiatis, and J. Aragon, Regression with interval-censored data, Biometrika 82 (1995), pp. 501–513]. The BJE is defined to be a zero-crossing of a modified score function H(b), a point at which H(·) changes its sign. We discuss several approaches for finding a BJE with IC data that are extensions of the existing algorithms for RC data. However, these extensions may not be appropriate for some data; in particular, they are not appropriate for a cancer data set that we are analysing. In this note, we present a feasible iterative algorithm for obtaining a BJE. We apply the method to our data.
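For background, the classical Buckley–James iteration for right-censored data alternates between imputing censored responses from the Kaplan–Meier estimator of the residual distribution and refitting least squares; a rough sketch is given below (ties among residuals are handled naively, the mass beyond the largest residual is simply renormalized, and the iteration may fail to converge). The interval-censored extension discussed in the note is not reproduced here.

    import numpy as np

    def km_jumps(e, delta):
        # Kaplan-Meier jumps of the residual distribution (delta = 1 for uncensored)
        order = np.argsort(e, kind="stable")
        e_s, d_s = e[order], delta[order].astype(float)
        at_risk = len(e) - np.arange(len(e))              # risk-set size at each ordered residual
        surv = np.cumprod(1.0 - d_s / at_risk)
        surv_prev = np.concatenate(([1.0], surv[:-1]))
        return order, e_s, surv_prev - surv               # mass placed on each uncensored residual

    def buckley_james(x, y, delta, n_iter=50, tol=1e-6):
        X = np.column_stack([np.ones(len(y)), x])         # design matrix with intercept
        beta = np.linalg.lstsq(X, y, rcond=None)[0]       # naive initial fit ignoring censoring
        for _ in range(n_iter):
            e = y - X @ beta
            order, e_s, jumps = km_jumps(e, delta)
            y_star = y.astype(float).copy()
            for pos, i in enumerate(order):
                if delta[i] == 0:                          # impute the expected response beyond the censoring point
                    tail = jumps[pos + 1:]
                    if tail.sum() > 0:
                        cond_mean = (tail * e_s[pos + 1:]).sum() / tail.sum()
                        y_star[i] = X[i] @ beta + cond_mean
            beta_new = np.linalg.lstsq(X, y_star, rcond=None)[0]
            if np.max(np.abs(beta_new - beta)) < tol:
                return beta_new
            beta = beta_new
        return beta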

14.
This article proposes a multivariate control chart, the syn-|S| chart, which comprises a standard |S| subchart and a multivariate synthetic sample generalized variance |S| (synthetic |S|) subchart, for detecting shifts in the covariance matrix of a multivariate normally distributed process. A procedure for the optimal design of the syn-|S| chart by minimizing the average extra quadratic loss is provided. The syn-|S| chart has better overall performance compared to the synthetic |S| chart and the standard |S| chart, based on the zero-state and steady-state modes. An example is given to illustrate the operation of the synthetic |S| chart.

15.
ABSTRACT

In this article, we develop a new method, called regenerative randomization, for the transient analysis of continuous time Markov models with absorbing states. The method has the same good properties as standard randomization: numerical stability, well-controlled computation error, and ability to specify the computation error in advance. The method has a benign behavior for large t and is significantly less costly than standard randomization for large enough models and large enough t. For a class of models, class C, including typical failure/repair reliability models with exponential failure and repair time distributions and repair in every state with failed components, stronger theoretical results are available assessing the efficiency of the method in terms of “visible” model characteristics. A large example belonging to that class is used to illustrate the performance of the method and to show that it can indeed be much faster than standard randomization.
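For context, standard randomization (uniformization), which the regenerative variant builds on, computes the transient state probabilities of a CTMC by expanding exp(Qt) as a Poisson mixture of powers of a DTMC kernel. A bare-bones sketch follows; error control is simplified to truncating the Poisson series, and the Poisson weights are computed naively, which can underflow for very large Λt.

    import numpy as np

    def standard_randomization(Q, p0, t, eps=1e-10):
        # Transient distribution p(t) = p0 * exp(Q t) for a CTMC with generator Q
        # (rows sum to 0), via the randomized DTMC P = I + Q / Lam.
        Lam = max(float((-np.diag(Q)).max()), 1e-300)      # uniformization rate >= every exit rate
        P = np.eye(Q.shape[0]) + Q / Lam
        term = np.asarray(p0, dtype=float)                 # p0 @ P^k, updated iteratively
        pois = np.exp(-Lam * t)                            # Poisson(Lam*t) probability at k = 0
        accum, pt, k = pois, pois * term, 0
        while 1.0 - accum > eps:                           # truncate once the neglected mass <= eps
            k += 1
            term = term @ P
            pois *= Lam * t / k
            accum += pois
            pt = pt + pois * term
        return pt

    # example: a two-state failure/repair model with failure rate 0.1 and repair rate 1.0
    # Q = np.array([[-0.1, 0.1], [1.0, -1.0]]); print(standard_randomization(Q, np.array([1.0, 0.0]), 5.0))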

16.
Abstract

We propose a formal definition of transparency in empirical research and apply it to structural estimation in economics. We discuss how some existing practices can be understood as attempts to improve transparency, and we suggest ways to improve current practice, emphasizing approaches that impose a minimal computational burden on the researcher. We illustrate with examples.

17.
ABSTRACT

The identification of the out-of-control variable, or variables, after a multivariate control chart signals has been an appealing subject for many researchers in recent years. In this paper we propose a new method for approaching this problem based on principal components analysis. Theoretical control limits are derived and a detailed investigation of the properties and the limitations of the new method is given. A graphical technique which can be applied in some of these limiting situations is also provided.
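To make the flavour of such PCA-based diagnostics concrete, here is a generic contribution-plot computation (in the spirit of Kourti and MacGregor's decomposition of a T² signal over the retained components); it is shown for illustration only and is not necessarily the method proposed in this paper.

    import numpy as np

    def pca_contributions(X_ref, x_new, n_comp):
        # Per-variable contributions of a new observation to a PCA-based T^2 statistic,
        # computed from an in-control reference sample X_ref.
        mu = X_ref.mean(axis=0)
        lam, P = np.linalg.eigh(np.cov(X_ref - mu, rowvar=False))
        idx = np.argsort(lam)[::-1][:n_comp]              # keep the leading components
        lam, P = lam[idx], P[:, idx]
        dev = x_new - mu
        t_scores = dev @ P
        # contribution of variable j: sum_k (t_k / lam_k) * P[j, k] * dev[j]
        return ((t_scores / lam) * P * dev[:, None]).sum(axis=1)

    # variables with the largest contributions are flagged as candidate out-of-control variables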

18.
Abstract

In this article we propose some extensions and applications of the nonparametric combination of dependent rankings (see Pesarin, F., Lago, A. (2000). Nonparametric combination of dependent rankings with applications to the quality assessment of industrial products. Metron LVIII (1–2):39–52). This methodology is applied to Conjoint Analysis in order to aggregate (ex ante) preferences from a group of individuals. Furthermore, a new global association test (GAT) is introduced in order to test for the association of the global ranking with all attributes of interest. The GAT procedure allows the experimenter to obtain clear indications of the significant attributes by considering the intensity of the optimal weights given by the procedure itself. This may help the experimenter in interpreting the usual analysis involving the normal plot for detecting active effects.

19.
ABSTRACT

In a model of the form Y = h(X1, …, Xd), where the goal is to estimate a parameter of the probability distribution of Y, we define new sensitivity indices which quantify the importance of each variable Xi with respect to this parameter of interest. The aim of this paper is to define goal-oriented sensitivity indices, and we show that Sobol indices are the sensitivity indices associated with a particular characteristic of the distribution of Y. We call the framework we present Goal Oriented Sensitivity Analysis (GOSA).
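To fix ideas (a simplified sketch, not the paper's exact definitions): the first-order Sobol index of X_i targets the variance of Y, and a goal-oriented index is obtained by replacing Y with the feature of the distribution one actually cares about, for instance an exceedance indicator when the parameter of interest is a tail probability:

    S_i \;=\; \frac{\operatorname{Var}\!\big(\mathbb{E}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)},
    \qquad
    S_i^{\psi} \;=\; \frac{\operatorname{Var}\!\big(\mathbb{E}[\,\psi(Y) \mid X_i\,]\big)}{\operatorname{Var}\big(\psi(Y)\big)},
    \quad \text{e.g. } \psi(Y) = \mathbf{1}\{Y > t\}.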

20.
ABSTRACT

In survival analysis, individuals may fail due to multiple causes of failure; this is called the competing risks setting. Parametric models such as the Weibull model are not improper and therefore ignore the presence of multiple failure times. In this study, a novel extension of the Weibull distribution is proposed which is improper and can therefore be incorporated into the competing risks framework. This model coincides with the original Weibull model before a pre-specified time point and takes an exponential form in the tail of the time axis. A Bayesian approach is used for parameter estimation. A simulation study is performed to evaluate the proposed model; it showed identifiability and appropriate convergence of the proposed model. The proposed model and the 3-parameter Gompertz model, another improper parametric distribution, are fitted to an acute lymphoblastic leukemia dataset.
