Similar Articles
20 similar articles found (search time: 156 ms)
1.
In this paper we present a review of population-based simulation for static inference problems. Such methods can be described as generating a collection of random variables {X_n}_{n=1,…,N} in parallel in order to simulate from some target density π (or potentially a sequence of target densities). Population-based simulation is important as many challenging sampling problems in applied statistics cannot be dealt with successfully by conventional Markov chain Monte Carlo (MCMC) methods. We summarize population-based MCMC (Geyer, Computing Science and Statistics: The 23rd Symposium on the Interface, pp. 156–163, 1991; Liang and Wong, J. Am. Stat. Assoc. 96:653–666, 2001) and sequential Monte Carlo samplers (SMC) (Del Moral, Doucet and Jasra, J. Roy. Stat. Soc. Ser. B 68:411–436, 2006a), providing a comparison of the approaches. We give numerical examples from Bayesian mixture modelling (Richardson and Green, J. Roy. Stat. Soc. Ser. B 59:731–792, 1997).
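To make the population idea concrete, here is a minimal sketch of parallel tempering, one of the population-based MCMC schemes surveyed above. It is a generic illustration, not the authors' code: the name `parallel_tempering`, the temperature ladder and the bimodal toy target are all illustrative choices.

```python
import math
import random

def parallel_tempering(log_target, temps, n_iter, step=1.0, seed=0):
    """Population MCMC: one random-walk Metropolis chain per temperature,
    with a proposed state swap between a random adjacent pair each sweep."""
    rng = random.Random(seed)
    states = [0.0 for _ in temps]
    cold_draws = []                      # output: samples from the T=1 chain
    for _ in range(n_iter):
        # Within-chain Metropolis update at each temperature
        for i, t in enumerate(temps):
            prop = states[i] + rng.gauss(0.0, step)
            if math.log(rng.random()) < (log_target(prop) - log_target(states[i])) / t:
                states[i] = prop
        # Swap move between a random adjacent pair of temperatures
        i = rng.randrange(len(temps) - 1)
        log_a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
            log_target(states[i + 1]) - log_target(states[i]))
        if math.log(rng.random()) < log_a:
            states[i], states[i + 1] = states[i + 1], states[i]
        cold_draws.append(states[0])
    return cold_draws

# Toy bimodal target: equal mixture of N(-5, 1) and N(5, 1)
def log_target(x):
    return math.log(max(0.5 * math.exp(-0.5 * (x + 5.0) ** 2)
                        + 0.5 * math.exp(-0.5 * (x - 5.0) ** 2), 1e-300))

draws = parallel_tempering(log_target, temps=[1.0, 4.0, 16.0], n_iter=5000)
```

Running several tempered chains in parallel and swapping adjacent states is what lets the cold chain move between the two well-separated modes, where a single random-walk chain would typically get stuck.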

2.
In view of its ongoing importance for a variety of practical applications, feature selection via ℓ1-regularization methods like the lasso has been subject to extensive theoretical as well as empirical investigations. Despite its popularity, mere ℓ1-regularization has been criticized for being inadequate or ineffective, notably in situations in which additional structural knowledge about the predictors should be taken into account. This has stimulated the development of either systematically different regularization methods or double regularization approaches which combine ℓ1-regularization with a second kind of regularization designed to capture additional problem-specific structure. One instance thereof is the ‘structured elastic net’, a generalization of the proposal in Zou and Hastie (J. R. Stat. Soc. Ser. B 67:301–320, 2005), studied in Slawski et al. (Ann. Appl. Stat. 4(2):1056–1080, 2010) for the class of generalized linear models.
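For readers unfamiliar with the base method, the plain (unstructured) elastic net of Zou and Hastie can be sketched with naive coordinate descent. This toy version, with hypothetical names `soft_threshold` and `elastic_net`, only illustrates the interplay of the ℓ1 and ℓ2 penalties; it is not the structured variant studied by Slawski et al.

```python
def soft_threshold(z, g):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def elastic_net(X, y, lam1, lam2, n_sweeps=200):
    """Naive coordinate descent for
    (1/2n) * ||y - Xb||^2 + lam1 * ||b||_1 + (lam2/2) * ||b||^2."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam1) / (z + lam2)
    return b

# Toy data: y depends only on the first feature (y = 2*x1, no noise)
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [1.0, 2.0]]
y = [2.0 * row[0] for row in X]

b_ols = elastic_net(X, y, lam1=0.0, lam2=0.0)     # reduces to least squares
b_sparse = elastic_net(X, y, lam1=0.5, lam2=0.1)  # l1 part zeroes b[1]
```

With both penalties switched off the update is ordinary least squares; with a moderate ℓ1 penalty the second, redundant coefficient is set exactly to zero.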

3.
This paper proposes a new probabilistic classification algorithm using a Markov random field approach. The joint distribution of class labels is explicitly modelled using the distances between feature vectors. Intuitively, a class label should depend more on class labels which are closer in the feature space, than those which are further away. Our approach builds on previous work by Holmes and Adams (J. R. Stat. Soc. Ser. B 64:295–306, 2002; Biometrika 90:99–112, 2003) and Cucala et al. (J. Am. Stat. Assoc. 104:263–273, 2009). Our work shares many of the advantages of these approaches in providing a probabilistic basis for statistical inference. In comparison to previous work, we present a more efficient computational algorithm to overcome the intractability of the Markov random field model. The results of our algorithm are encouraging in comparison to the k-nearest neighbour algorithm.

4.
The cumulative incidence function provides intuitive summary information about competing risks data. Via a mixture decomposition of this function, Chang and Wang (Statist. Sinica 19:391–408, 2009) study how covariates affect the cumulative incidence probability of a particular failure type at a chosen time point. Without specifying the corresponding failure time distribution, they propose two estimators and derive their large sample properties. The first estimator utilizes the technique of weighting to adjust for the censoring bias, and can be considered an extension of Fine’s method (J R Stat Soc Ser B 61:817–830, 1999). The second uses imputation and extends the idea of Wang (J R Stat Soc Ser B 65:921–935, 2003) from a nonparametric setting to the current regression framework. In this article, when covariates take only discrete values, we extend both approaches of Chang and Wang (Statist. Sinica 19:391–408, 2009) by allowing left truncation. Large sample properties of the proposed estimators are derived, and their finite sample performance is investigated through a simulation study. We also apply our methods to heart transplant survival data.

5.
We introduce a new family of skew-normal distributions that contains the skew-normal distributions introduced by Azzalini (Scand J Stat 12:171–178, 1985), Arellano-Valle et al. (Commun Stat Theory Methods 33(7):1465–1480, 2004), Gupta and Gupta (Test 13(2):501–524, 2008) and Sharafi and Behboodian (Stat Papers 49:769–778, 2008). We denote this distribution by GBSN_n(λ_1, λ_2). We present some properties of GBSN_n(λ_1, λ_2) and derive the moment generating function. Finally, we use two numerical examples to illustrate the practical usefulness of this distribution.
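A minimal sketch of sampling from Azzalini's original skew-normal SN(λ), the simplest member of the families above, using the standard convolution-type stochastic representation; the function name `rskewnorm` is illustrative.

```python
import math
import random

def rskewnorm(lam, n, seed=0):
    """Draw n samples from Azzalini's skew-normal SN(lam) via the
    representation Z = delta*|U0| + sqrt(1 - delta^2)*U1,
    with U0, U1 iid N(0,1) and delta = lam / sqrt(1 + lam^2)."""
    rng = random.Random(seed)
    delta = lam / math.sqrt(1.0 + lam ** 2)
    tail = math.sqrt(1.0 - delta ** 2)
    return [delta * abs(rng.gauss(0.0, 1.0)) + tail * rng.gauss(0.0, 1.0)
            for _ in range(n)]

zs = rskewnorm(5.0, 20000)   # strongly right-skewed sample
# E[Z] = delta * sqrt(2/pi), roughly 0.782 for lam = 5
```

Setting λ = 0 gives δ = 0, so the representation collapses to a plain standard normal, which is the symmetric base case of these families.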

6.
The goal of this paper is to introduce a partially adaptive estimator for the censored regression model based on an error structure described by a mixture of two normal distributions. The model we introduce is easily estimated by maximum likelihood using an EM algorithm adapted from the work of Bartolucci and Scaccia (Comput Stat Data Anal 48:821–834, 2005). A Monte Carlo study is conducted to compare the small sample properties of this estimator to the performance of some common alternative estimators of censored regression models including the usual tobit model, the CLAD estimator of Powell (J Econom 25:303–325, 1984), and the STLS estimator of Powell (Econometrica 54:1435–1460, 1986). In terms of RMSE, our partially adaptive estimator performed well. The partially adaptive estimator is applied to data on wife’s hours worked from Mroz (1987). In this application we find support for the partially adaptive estimator over the usual tobit model.
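The mixture-of-two-normals error structure can be fitted by a generic EM algorithm. The sketch below fits an unconditional two-component normal mixture and is only a simplified stand-in for the censored-regression EM adapted from Bartolucci and Scaccia; all names, starting values and tolerances are illustrative.

```python
import math
import random

def em_two_normals(x, n_iter=100):
    """Generic EM for a two-component normal mixture: the E-step computes
    responsibilities, the M-step updates weights, means and variances."""
    mu1, mu2 = min(x), max(x)                # crude but separated starts
    s1 = s2 = (max(x) - min(x)) / 4.0
    w = 0.5                                  # mixing weight of component 1
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = []
        for xi in x:
            a = w * math.exp(-0.5 * ((xi - mu1) / s1) ** 2) / s1
            b = (1.0 - w) * math.exp(-0.5 * ((xi - mu2) / s2) ** 2) / s2
            r.append(a / (a + b))
        # M-step: weighted moment updates
        n1 = sum(r)
        n2 = len(x) - n1
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1.0 - ri) * xi for ri, xi in zip(r, x)) / n2
        s1 = max(math.sqrt(sum(ri * (xi - mu1) ** 2
                               for ri, xi in zip(r, x)) / n1), 1e-6)
        s2 = max(math.sqrt(sum((1.0 - ri) * (xi - mu2) ** 2
                               for ri, xi in zip(r, x)) / n2), 1e-6)
        w = n1 / len(x)
    return mu1, mu2, s1, s2, w

# Two well-separated components, 200 points each
rng = random.Random(42)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(6.0, 1.0) for _ in range(200)])
mu1, mu2, s1, s2, w = em_two_normals(data)
```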

7.
In this paper, a variance decomposition approach to quantify the effects of endogenous and exogenous variables for nonlinear time series models is developed. The decomposition is taken temporally, with respect to the source of variation. The methodology uses Monte Carlo methods to effect the variance decomposition, using the ANOVA-like procedures proposed in Archer et al. (J. Stat. Comput. Simul. 58:99–120, 1997) and Sobol' (Math. Model. 2:112–118, 1990). The results of this paper can be used in investment problems, biomathematics and control theory, where nonlinear time series with multiple inputs are encountered.
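A pick-freeze Monte Carlo estimator of a first-order Sobol' sensitivity index, the ANOVA-like quantity referred to above, can be sketched as follows. The additive test function is chosen so the true index is known (0.8); the name `sobol_first_order` and all settings are illustrative.

```python
import random

def sobol_first_order(f, n=50000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index
    of X1 for Y = f(X1, X2), with X1, X2 iid Uniform(0, 1):
    S1 = Cov(Y, Y') / Var(Y), where Y' reuses X1 but redraws X2."""
    rng = random.Random(seed)
    ys, cross = [], []
    for _ in range(n):
        x1 = rng.random()
        y = f(x1, rng.random())
        y_frozen = f(x1, rng.random())   # same X1, fresh X2
        ys.append(y)
        cross.append(y * y_frozen)
    m = sum(ys) / n
    var = sum(yi * yi for yi in ys) / n - m * m
    return (sum(cross) / n - m * m) / var

# Additive test function Y = 2*X1 + X2: true S1 = 4 / (4 + 1) = 0.8
s1 = sobol_first_order(lambda x1, x2: 2.0 * x1 + x2)
```

For this additive function the conditional-expectation variance Var(E[Y|X1]) = 4/12 and Var(Y) = 5/12, so the estimator should land near 4/5.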

8.
We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (Commun. Stat., Simul. Comput. 36:45–54, 2007). This new sampler allows for the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we consider priors defined through infinite sequences of independent positive random variables. Two applications are considered: density estimation using mixture models and hazard function estimation. In each case we show how the slice-efficient sampler can be applied to make inference in the models. In the mixture case, two submodels are studied in detail. The first one assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons with alternative “conditional” simulation techniques for mixture models using the standard Dirichlet process prior and our new priors are made. The properties of the new priors are illustrated on a density estimation problem.
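The basic mechanism behind slice sampling can be illustrated with a univariate stepping-out slice sampler in the style of Neal (2003); this is a generic sketch, not the mixture-model sampler of the paper, and all names are illustrative.

```python
import math
import random

def slice_sample(logf, x0, n, w=2.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage
    (in the style of Neal, 2003); logf is the log-density up to a constant."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n):
        log_y = logf(x) + math.log(rng.random())      # auxiliary slice level
        left = x - w * rng.random()
        right = left + w
        while logf(left) > log_y:                     # step out to the left
            left -= w
        while logf(right) > log_y:                    # step out to the right
            right += w
        while True:                                   # sample, shrinking the bracket
            x_new = left + (right - left) * rng.random()
            if logf(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        out.append(x)
    return out

# Sanity target: standard normal (log-density up to a constant)
draws = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000)
```

The auxiliary uniform level under the density is what makes the method rejection-free and self-tuning, which is also why slice variables make conditional samplers for infinite mixtures tractable.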

9.
We develop a Bayesian analysis for the class of Birnbaum–Saunders nonlinear regression models introduced by Lemonte and Cordeiro (Comput Stat Data Anal 53:4441–4452, 2009). This regression model, which is based on the Birnbaum–Saunders distribution (Birnbaum and Saunders in J Appl Probab 6:319–327, 1969a), has been used successfully to model fatigue failure times. We have considered a Bayesian analysis under a normal-gamma prior. Due to the complexity of the model, Markov chain Monte Carlo methods are used to develop a Bayesian procedure for the considered model. We describe tools for model determination, which include the conditional predictive ordinate, the logarithm of the pseudo-marginal likelihood and the pseudo-Bayes factor. Additionally, case-deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback–Leibler divergence. Two empirical applications are considered in order to illustrate the developed procedures.

10.
We deal with the double sampling plans by variables proposed by Bowker and Goode (Sampling Inspection by Variables, McGraw–Hill, New York, 1952) when the standard deviation is unknown. Using the procedure for the calculation of the OC given by Krumbholz and Rohr (Allg. Stat. Arch. 90:233–251, 2006), we present an optimization algorithm that determines the ASN-minimax plan. Among all double plans satisfying the classical two-point condition on the OC, this plan has the smallest maximum ASN.

11.
This paper considers the problem of hypothesis testing in a simple panel data regression model with random individual effects and serially correlated disturbances. Following Baltagi et al. (Econom. J. 11:554–572, 2008), we allow for the possibility of non-stationarity in the regressor and/or the disturbance term. While Baltagi et al. (Econom. J. 11:554–572, 2008) focus on the asymptotic properties and distributions of the standard panel data estimators, this paper focuses on testing of hypotheses in this setting. One important finding is that, unlike the time-series case, one does not necessarily need to rely on the “super-efficient” type AR estimator by Perron and Yabu (J. Econom. 151:56–69, 2009) to make inference in the panel data setting. In fact, we show that the simple t-ratio always converges to the standard normal distribution, regardless of whether the disturbances and/or the regressor are stationary.

12.
Singh et al. (Stat Trans 6(4):515–522, 2003) proposed a modified unrelated question procedure and demonstrated that the modified procedure is capable of producing a more efficient estimator of the population parameter π_A, namely, the proportion of persons in a community bearing a sensitive character A, when π_A < 0.50. The development of Singh et al. (2003) is based on simple random samples with replacement and on the assumption that π_B, namely, the proportion of individuals bearing an unrelated innocuous character B, is known. Due to these limitations, Singh et al.’s (2003) procedure cannot be used in practical surveys where usually the sample units are chosen with varying selection probabilities. In this article, following Singh et al. (2003), we propose an alternative RR procedure assuming that the population units are sampled with unequal selection probabilities and that the value of π_B is unknown. A numerical example comparing the performance of the proposed RR procedure under alternative sampling designs is also reported.
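The baseline unrelated-question design that Singh et al. extend can be sketched as follows, assuming simple random sampling and a known π_B; the estimator simply inverts P(yes) = p·π_A + (1−p)·π_B. The function name and simulation settings are illustrative.

```python
import random

def pi_a_hat(yes_prop, p, pi_b):
    """Unrelated-question randomized response with pi_B known:
    each respondent answers the sensitive question A with probability p
    and the innocuous question B otherwise, so
        P(yes) = p * pi_A + (1 - p) * pi_B,
    which inverts to the moment estimator below."""
    return (yes_prop - (1.0 - p) * pi_b) / p

# Simulated survey: true pi_A = 0.3, pi_B = 0.6, design probability p = 0.7
rng = random.Random(0)
n = 100000
yes = 0
for _ in range(n):
    if rng.random() < 0.7:            # card says: answer the sensitive question
        yes += rng.random() < 0.3
    else:                             # card says: answer the innocuous question
        yes += rng.random() < 0.6
est = pi_a_hat(yes / n, 0.7, 0.6)
```

The randomizing device protects the respondent (a "yes" never reveals which question was answered), yet the moment inversion still recovers π_A at the population level.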

13.
In empirical Bayes inference one is typically interested in sampling from the posterior distribution of a parameter with a hyper-parameter set to its maximum likelihood estimate. This is often problematic, particularly when the likelihood function of the hyper-parameter is not available in closed form and the posterior distribution is intractable. Previous works have dealt with this problem using a multi-step approach based on the EM algorithm and Markov chain Monte Carlo (MCMC). We propose a framework based on recent developments in adaptive MCMC, where this problem is addressed more efficiently using a single Monte Carlo run. We discuss the convergence of the algorithm and its connection with the EM algorithm. We apply our algorithm to the Bayesian Lasso of Park and Casella (J. Am. Stat. Assoc. 103:681–686, 2008) and to the empirical Bayes variable selection of George and Foster (Biometrika 87:731–747, 2000).

14.
This note is on two theorems in a paper by Rainer Dyckerhoff (Allg. Stat. Arch. 88:163–190, 2004). We state a missing condition in Theorem 3. On the other hand, Theorem 2 can be weakened.

15.
In this article we develop a class of stochastic boosting (SB) algorithms, which build upon the work of Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007). They introduce boosting algorithms which correspond to standard boosting (e.g. Bühlmann and Hothorn, Stat. Sci. 22:477–505, 2007) except that the optimization algorithms are randomized; this idea is placed within a Bayesian framework. We show that the inferential procedure in Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007) is incorrect and further develop interpretational, computational and theoretical results which allow one to assess SB’s potential for classification and regression problems. To use SB, sequential Monte Carlo (SMC) methods are applied. As a result, it is found that SB can provide better predictions for classification problems than the corresponding boosting algorithm. A theoretical result is also given, which shows that the predictions of SB are not significantly worse than those of boosting, when the latter provides the best prediction. We also investigate the method on a real case study from machine learning.

16.
On MSE of EBLUP
We consider Best Linear Unbiased Predictors (BLUPs) and Empirical Best Linear Unbiased Predictors (EBLUPs) under the general mixed linear model. The BLUP was proposed by Henderson (Ann Math Stat 21:309–310, 1950). The formula of this BLUP includes unknown elements of the variance–covariance matrix of the random variables. If these elements are replaced by estimators, we obtain the two-stage predictor called the EBLUP, which is model-unbiased (Kackar and Harville, Commun Stat A 10:1249–1261, 1981). Kackar and Harville (J Am Stat Assoc 79:853–862, 1984) give an approximation of the mean square error (MSE) of the predictor and propose an estimator of the MSE. The MSE and estimators of the MSE are also studied by Prasad and Rao (J Am Stat Assoc 85:163–171, 1990), Datta and Lahiri (Stat Sin 10:613–627, 2000) and Das et al. (Ann Stat 32(2):818–840, 2004). In this paper we consider the BLUP proposed by Royall (J Am Stat Assoc 71:657–473, 1976). Ża̧dło (On unbiasedness of some EBLU predictor, Physica-Verlag, Heidelberg, pp 2019–2026, 2004) shows that the BLUP proposed by Royall (1976) may be treated as a generalisation of the BLUP proposed by Henderson (1950), and proves model unbiasedness of the EBLUP based on the formula of Royall's BLUP under some assumptions. We derive the formula of the approximate MSE of this EBLUP and its estimators. We prove that the approximation of the MSE is accurate to terms o(D^{-1}) and that the estimator of the MSE is approximately unbiased, in the sense that its bias is o(D^{-1}) under some assumptions, where D is the number of domains. The proof is based on the results obtained by Datta and Lahiri (Stat Sin 10:613–627, 2000). Using our results we present an EBLUP based on a special case of the general linear model. We also present the formula of its MSE, estimators of its MSE, and their performance in a Monte Carlo simulation study.

17.
The multivariate skew-t distribution (J Multivar Anal 79:93–113, 2001; J R Stat Soc Ser B 65:367–389, 2003; Statistics 37:359–363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated measures, pretest/post-test data, under a multivariate null-intercept measurement error model (J Biopharm Stat 13(4):763–771, 2003) in which the random errors and the unobserved value of the covariate (latent variable) follow Student t and skew-t distributions, respectively. The results and methods are numerically illustrated with an example in the field of dentistry.

18.
A new procedure is proposed to estimate the jump location curve and surface in the two-dimensional (2D) and three-dimensional (3D) nonparametric jump regression models, respectively. In each of the 2D and 3D cases, our estimation procedure is motivated by the fact that, under some regularity conditions, the ridge location of the rotational difference kernel estimate (RDKE; Qiu in Sankhyā Ser. A 59:268–294, 1997, and J. Comput. Graph. Stat. 11:799–822, 2002; Garlipp and Müller in Sankhyā Ser. A 69:55–86, 2007) obtained from the noisy image is asymptotically close to the jump location of the true image. Accordingly, a computational procedure based on the kernel smoothing method is designed to find the ridge location of RDKE, and the result is taken as the jump location estimate. The sequence relationship among the points comprising our jump location estimate is obtained. Our jump location estimate is produced without knowledge of the range or shape of the jump region. Simulation results demonstrate that the proposed estimation procedure can detect the jump location very well, and thus it is a useful alternative for estimating the jump location in each of the 2D and 3D cases.

19.
A fast new algorithm is proposed for numerical computation of (approximate) D-optimal designs. This cocktail algorithm extends the well-known vertex direction method (VDM; Fedorov in Theory of Optimal Experiments, 1972) and the multiplicative algorithm (Silvey et al. in Commun. Stat. Theory Methods 14:1379–1389, 1978), and shares their simplicity and monotonic convergence properties. Numerical examples show that the cocktail algorithm can lead to dramatically improved speed, sometimes by orders of magnitude, relative to either the multiplicative algorithm or the vertex exchange method (a variant of VDM). Key to the improved speed is a new nearest neighbor exchange strategy, which acts locally and complements the global effect of the multiplicative algorithm. Possible extensions to related problems such as nonparametric maximum likelihood estimation are mentioned.
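The multiplicative algorithm alone (one ingredient of the cocktail, not the full method) is easy to sketch for the simple linear model f(x) = (1, x) on a five-point grid, where the D-optimal design is known to put weight 1/2 on each endpoint; the function name and settings are illustrative.

```python
def d_optimal_weights(xs, n_iter=500):
    """Multiplicative algorithm for approximate D-optimal design weights
    in the simple linear model f(x) = (1, x): the update
    w_i <- w_i * d(x_i, w) / p increases the D-criterion monotonically."""
    p = 2                              # number of model parameters
    w = [1.0 / len(xs)] * len(xs)      # start from the uniform design
    for _ in range(n_iter):
        # 2x2 information matrix M(w) = sum_i w_i f(x_i) f(x_i)^T
        a = sum(w)
        b = sum(wi * x for wi, x in zip(w, xs))
        c = sum(wi * x * x for wi, x in zip(w, xs))
        det = a * c - b * b
        # variance function d(x, w) = f(x)^T M(w)^{-1} f(x)
        d = [(c - 2.0 * b * x + a * x * x) / det for x in xs]
        w = [wi * di / p for wi, di in zip(w, d)]
    return w

# On a symmetric grid the D-optimal design puts weight 1/2 on each endpoint
ws = d_optimal_weights([-1.0, -0.5, 0.0, 0.5, 1.0])
```

Because the weighted average of the variance function equals p at every step, the weights stay normalized automatically, and support points where d(x, w) falls below p have their weight driven to zero.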

20.
It is generally assumed that the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution, versus the alternative that data arise from a heteroscedastic normal mixture distribution, has an asymptotic χ² reference distribution with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models, under some regularity conditions. When the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative, simulations show that the χ² reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or a normal mixture with unequal variances.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号