Similar articles
20 similar articles found (search time: 31 ms)
1.
The Self-Healing Umbrella Sampling (SHUS) algorithm is an adaptive biasing algorithm proposed by Marsili et al. (J Phys Chem B 110(29):14011–14013, 2006) to efficiently sample a multimodal probability measure. We show that this method can be seen as a variant of the well-known Wang–Landau algorithm of Wang and Landau (Phys Rev E 64:056101, 2001a; Phys Rev Lett 86(10):2050–2053, 2001b). Adapting results on the convergence of the Wang–Landau algorithm obtained by Fort et al. (Math Comput 84(295):2297–2327, 2014a), we prove the convergence of the SHUS algorithm. We also compare the two methods in terms of efficiency. We finally propose a modification of the SHUS algorithm in order to increase its efficiency, and exhibit some similarities between SHUS and the well-tempered metadynamics method of Barducci et al. (Phys Rev Lett 100:020603, 2008).
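To make the adaptive-biasing idea concrete, here is a minimal sketch of a Wang–Landau-style biased sampler on a one-dimensional double-well potential. The potential, bin grid, proposal step and gain schedule are illustrative assumptions and do not reproduce the SHUS update rule of the paper.

```python
# Hedged sketch: adaptive biasing on a 1-D double-well, Wang-Landau style.
import numpy as np

rng = np.random.default_rng(0)

def V(x):
    # double-well potential with metastable states near x = -1 and x = +1
    return 5.0 * (x**2 - 1.0)**2

bins = np.linspace(-2.0, 2.0, 21)        # bins of the reaction coordinate
log_w = np.zeros(len(bins) - 1)          # running bias (log weights), one per bin
beta, gain, x = 3.0, 0.05, -1.0          # inverse temperature, update gain, initial state

def bin_of(x):
    return int(np.clip(np.digitize(x, bins) - 1, 0, len(log_w) - 1))

for t in range(50_000):
    y = x + rng.normal(scale=0.3)                 # random-walk proposal
    ix, iy = bin_of(x), bin_of(y)
    # Metropolis step for the biased target exp(-beta*V(x) - log_w[bin(x)])
    if np.log(rng.uniform()) < beta * (V(x) - V(y)) + log_w[ix] - log_w[iy]:
        x, ix = y, iy
    log_w[ix] += gain / (t + 1)                   # penalize the bin just visited
                                                  # (a decreasing gain; SHUS and
                                                  #  Wang-Landau use their own schedules)

print("learned bias per bin (flattens the landscape):", np.round(log_w, 2))
```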

2.
In this work we provide a decomposition by sources of the inequality index \(\zeta \) defined by Zenga (Giornale degli Economisti e Annali di Economia 43(5–6):301–326, 1984). The source contributions are obtained with the method proposed in Zenga et al. (Stat Appl X(1):3–31, 2012) and Zenga (Stat Appl XI(2):133–161, 2013), which makes it possible to compare different inequality measures. This method is based on the decomposition of inequality curves. To apply this decomposition to the index \(\zeta \) and its inequality curve, we adapt the method to the “cograduation” table. Moreover, we consider the case of a linear transformation of the sources and analyse the corresponding results.

3.
The computation of penalized quantile regression estimates is often computationally intensive in high dimensions. In this paper we propose a coordinate descent algorithm for computing penalized smooth quantile regression (cdaSQR) with convex and nonconvex penalties. The cdaSQR approach is based on approximating the objective check function, which is not differentiable at zero, by a modified check function that is differentiable at zero. Then, using the majorization–minimization trick of the gcdnet algorithm (Yang and Zou in J Comput Graph Stat 22(2):396–415, 2013), we update each coefficient simply and efficiently. In our implementation, we consider the convex penalty \(\ell _1+\ell _2\) and the nonconvex penalties SCAD (or MCP) \(+ \ell _2\). We establish the convergence of cdaSQR with the \(\ell _1+\ell _2\) penalty. The numerical results show that our implementation is an order of magnitude faster than its competitors. Using simulations, we compare the speed of our algorithm to that of its competitors. Finally, the performance of our algorithm is illustrated on three real data sets from diabetes, leukemia and Bardet–Biedl syndrome gene expression studies.
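The underlying idea — smooth the check function so that each coordinate can be updated in closed form under an elastic-net penalty — is sketched below. The particular smoothing (a "soft pinball" loss), the curvature bound and the tuning constants are assumptions for illustration, not the cdaSQR paper's exact choices.

```python
# Hedged sketch: coordinate descent for an elastic-net-penalized smoothed quantile loss.
import numpy as np

def soft(z, lam):                      # soft-thresholding operator
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def cd_smooth_qr(X, y, tau=0.5, lam1=0.1, lam2=0.1, h=0.1, iters=200):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                       # residuals y - X @ beta (beta starts at 0)
    L = 1.0 / (4.0 * h)                # curvature bound of the smoothed loss
    s = (X**2).mean(axis=0)            # per-coordinate scale
    for _ in range(iters):
        for j in range(p):
            # derivative of the "soft pinball" loss (tau-1) + sigmoid(r/h) w.r.t. r
            dpsi = (tau - 1.0) + 1.0 / (1.0 + np.exp(-r / h))
            grad = -(X[:, j] * dpsi).mean()                   # d loss / d beta_j
            bj_new = soft(L * s[j] * beta[j] - grad, lam1) / (L * s[j] + lam2)
            r += X[:, j] * (beta[j] - bj_new)                 # keep residuals in sync
            beta[j] = bj_new
    return beta

# toy usage: sparse median regression on simulated data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_t(df=3, size=200)
print(np.round(cd_smooth_qr(X, y, tau=0.5), 2))
```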

4.
The rank envelope test (Myllymäki et al. in J R Stat Soc B, doi:10.1111/rssb.12172, 2016) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: (1) a few univariate Monte Carlo tests, (2) a Monte Carlo test with a function as the test statistic, (3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case, and it is accompanied by a p-value and by a graphical interpretation that identifies the subtests and the distances of the test function(s) used that lead to rejection at the prescribed significance level. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include a goodness-of-fit test with several test functions, a goodness-of-fit test for a group of point patterns, a test of dependence of the components in a multi-type point pattern, and a test of the Boolean assumption for random closed sets. A power comparison to classical multiple testing procedures is given.
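A simplified illustration of the global-rank idea (pointwise ranks of the observed and simulated curves, combined into one extreme rank per curve) follows. The toy Gaussian model, the single extreme-rank p-value (instead of the paper's p-interval) and the choice of 999 simulations are illustrative assumptions.

```python
# Hedged sketch: extreme-rank global envelope test with a functional test statistic.
import numpy as np

rng = np.random.default_rng(2)

def test_function(sample, grid):
    # toy functional summary: empirical CDF of the sample evaluated on a grid
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

grid = np.linspace(-3, 3, 50)
obs = rng.normal(loc=0.3, size=100)                    # "observed" data
T = [test_function(obs, grid)]                         # T[0] = observed curve
for _ in range(999):                                   # simulate under H0: N(0,1)
    T.append(test_function(rng.normal(size=100), grid))
T = np.asarray(T)                                      # (s+1) x len(grid)

lo = T.argsort(axis=0).argsort(axis=0) + 1             # pointwise ranks from below
hi = T.shape[0] + 1 - lo                               # pointwise ranks from above
extreme_rank = np.minimum(lo, hi).min(axis=1)          # most extreme pointwise rank per curve
p = (extreme_rank <= extreme_rank[0]).mean()           # rank-based global p-value
print("global envelope p-value (extreme rank):", p)
```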

5.
Wage differences between women and men can be divided into an explained part and an unexplained part. The former encompasses differences in the observable characteristics of the members of the groups, such as age, education or work experience. The latter includes the part of the difference that is not attributable to objective factors and represents an estimate of the level of discrimination. We discuss the original method of Blinder (J Hum Resour 8(4):436–455, 1973) and Oaxaca (Int Econ Rev 14(3):693–709, 1973), the reweighting technique of DiNardo et al. (Econometrica 64(5):1001–1044, 1996) and our approach based on calibration. Using a Swiss dataset from 2012, we compare the estimated explained and unexplained parts of the difference in average wages in the private and public sectors obtained with the three methods. We show that for the private sector, all three methods yield similar results. For the public sector, the reweighting technique estimates a lower value of the unexplained part than the other two methods. The calibration approach and the reweighting technique allow us to estimate the explained and unexplained parts of the wage differences at points other than the mean. Using this, we analyse the assumption that wages are more equitable in the public sector and examine wage differences at different quantiles in both sectors. We show that in the public sector, discrimination occurs quite uniformly in both lower- and higher-paying jobs. In the private sector, on the other hand, discrimination is greater in lower-paying jobs than in higher-paying jobs.
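For orientation, the classical Blinder–Oaxaca two-fold decomposition of a mean (log-)wage gap is sketched below, using the first group's coefficients as the reference. The simulated data, covariate names and reference choice are illustrative assumptions, not the Swiss dataset or the calibration approach of the paper.

```python
# Hedged sketch: two-fold Blinder-Oaxaca decomposition on simulated log wages.
import numpy as np

rng = np.random.default_rng(3)

def design(n, edu_mean, exp_mean):
    edu = rng.normal(edu_mean, 2.0, n)        # years of education
    exp_ = rng.normal(exp_mean, 5.0, n)       # years of experience
    return np.column_stack([np.ones(n), edu, exp_])

Xm = design(1000, 13.0, 16.0)                 # group 1 (e.g. men)
Xf = design(1000, 12.0, 13.0)                 # group 2 (e.g. women)
ym = Xm @ [1.0, 0.08, 0.02] + rng.normal(0, 0.3, 1000)       # log wages
yf = Xf @ [0.9, 0.07, 0.02] + rng.normal(0, 0.3, 1000)

bm, *_ = np.linalg.lstsq(Xm, ym, rcond=None)  # group-specific OLS fits
bf, *_ = np.linalg.lstsq(Xf, yf, rcond=None)

gap = ym.mean() - yf.mean()
explained   = (Xm.mean(0) - Xf.mean(0)) @ bm  # differences in characteristics
unexplained = Xf.mean(0) @ (bm - bf)          # differences in returns ("discrimination")
print(f"gap={gap:.3f}  explained={explained:.3f}  unexplained={unexplained:.3f}")
```

With an intercept in each regression the two pieces add up to the raw gap exactly, which is the identity the decomposition rests on.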

6.
Latent growth curve models as structural equation models are extensively discussed in various research fields (Curran and Muthén in Am. J. Community Psychol. 27:567–595, 1999; Duncan et al. in An introduction to latent variable growth curve modeling. Concepts, issues and applications, 2nd edn., Lawrence Erlbaum, Mahwah, 2006; Muthén and Muthén in Alcohol. Clin. Exp. Res. 24(6):882–891, 2000a; in J. Stud. Alcohol. 61:290–300, 2000b). Recent methodological and statistical extensions focus on the treatment of unobserved heterogeneity in empirical data. Muthén extended the classic structural equation approach by mixture components, i.e. categorical latent classes (Muthén in Marcoulides, G.A., Schumacker, R.E. (eds.), New developments and techniques in structural equation modeling, pp. 1–33, Lawrence Erlbaum, Mahwah, 2001a; in Behaviormetrika 29(1):81–117, 2002; in Kaplan, D. (ed.), The SAGE handbook of quantitative methodology for the social sciences, pp. 345–368, Sage, Thousand Oaks, 2004). The paper discusses applications of growth mixture models with data on delinquent behavior of adolescents from the German panel study Crime in the modern City (CrimoC) (Boers et al. in Eur. J. Criminol. 7:499–520, 2010; Reinecke in Delinquenzverläufe im Jugendalter: Empirische Überprüfung von Wachstums- und Mischverteilungsmodellen, Institut für sozialwissenschaftliche Forschung e.V., Münster, 2006a; in Methodology 2:100–112, 2006b; in van Montfort, K., Oud, J., Satorra, A. (eds.), Longitudinal models in the behavioral and related sciences, pp. 239–266, Lawrence Erlbaum, Mahwah, 2007). Both observed and unobserved heterogeneity are considered with growth mixture models. Special attention is given to the distribution of the outcome variables as counts: Poisson and negative binomial distributions with zero inflation are considered in the proposed growth mixture models. Different model specifications are emphasized with respect to their particular parameterizations.
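For reference, the zero-inflated Poisson outcome model mentioned above has the standard form below (shown without the covariate and growth-factor links that the paper's specification would add):

```latex
\[
  P(Y=0) = \pi + (1-\pi)\,e^{-\lambda}, \qquad
  P(Y=k) = (1-\pi)\,\frac{e^{-\lambda}\lambda^{k}}{k!}, \quad k = 1,2,\dots
\]
```

where \(\pi\) is the zero-inflation probability and \(\lambda\) the Poisson mean; the negative binomial variant replaces the Poisson kernel with an overdispersed count distribution.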

7.
The randomized response technique (RRT) is an important tool that is commonly used to protect a respondent’s privacy and to avoid biased answers in surveys on sensitive issues. In this work, we consider the joint use of the unrelated-question RRT of Greenberg et al. (J Am Stat Assoc 64:520–539, 1969) and the related-question RRT of Warner (J Am Stat Assoc 60:63–69, 1965), addressing the issue of the innocuous question in the unrelated-question RRT. Unlike the existing unrelated-question RRT of Greenberg et al. (1969), the approach can provide more information on the innocuous question by using the related-question RRT of Warner (1965), effectively improving the efficiency of the maximum likelihood estimator of Scheers and Dayton (J Am Stat Assoc 83:969–974, 1988). We can then estimate the prevalence of the sensitive characteristic using logistic regression. For this new design, we propose a transformation method and provide large-sample properties. Motivated by two survey studies, an extramarital relationship study and a cable TV study, we develop a joint conditional likelihood method. As part of this research, we conduct a simulation study of the relative efficiencies of the proposed methods. Furthermore, we use the two survey studies to compare the analysis results under different scenarios.
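For orientation, the point estimators for the two classical designs cited above take the standard textbook forms sketched below; the survey numbers in the usage lines are made up for illustration.

```python
# Standard estimators for the two classical randomized-response designs.
def warner_estimate(lam_hat, p):
    """Warner (1965): respondent answers the sensitive statement w.p. p and its
    complement w.p. 1-p, so P(yes) = p*pi + (1-p)*(1-pi)."""
    assert p != 0.5
    return (lam_hat - (1 - p)) / (2 * p - 1)

def greenberg_estimate(lam_hat, p, pi_innocuous):
    """Greenberg et al. (1969) unrelated-question design with known innocuous
    prevalence: P(yes) = p*pi + (1-p)*pi_innocuous."""
    return (lam_hat - (1 - p) * pi_innocuous) / p

print(warner_estimate(lam_hat=0.42, p=0.7))                        # -> 0.30
print(greenberg_estimate(lam_hat=0.25, p=0.7, pi_innocuous=0.5))   # -> ~0.143
```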

8.
In recent research, Elliott et al. (Econometrica 64:813–836, 1996) have shown that local-to-unity detrending via generalized least squares (GLS) substantially increases the power of the Dickey–Fuller (1979) unit root test. In this paper the relationship between the extent of detrending undertaken, determined by the detrending parameter, and the power of the resulting GLS-based Dickey–Fuller (DF-GLS) test is examined. Using Monte Carlo simulation, it is shown that the values of the detrending parameter suggested by Elliott et al. (1996) on the basis of a limiting power function seldom maximize the power of the DF-GLS test for the finite samples encountered in applied research. This result is found to hold for the DF-GLS test including either an intercept or an intercept and a trend term. An empirical examination of the order of integration of the UK household savings ratio illustrates these findings, with the unit root hypothesis rejected using values of the detrending parameter other than that proposed by Elliott et al. (1996).
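To make the role of the detrending parameter concrete, here is a hedged sketch of the GLS-demeaning step with the local-to-unity constant \(\bar{c}\) exposed as an argument, so its effect on the subsequent Dickey–Fuller regression can be explored. The simulated series, the intercept-only case and \(\bar{c}=-7\) are illustrative assumptions, not the paper's experiment.

```python
# Hedged sketch: local-to-unity GLS demeaning followed by a Dickey-Fuller regression.
import numpy as np

def gls_demean(y, c_bar=-7.0):
    T = len(y)
    a = 1.0 + c_bar / T                                 # local-to-unity detrending parameter
    z = np.ones(T)                                      # deterministic term (intercept only)
    yq = np.concatenate(([y[0]], y[1:] - a * y[:-1]))   # quasi-differenced data
    zq = np.concatenate(([z[0]], z[1:] - a * z[:-1]))
    delta = (zq @ yq) / (zq @ zq)                       # GLS estimate of the mean
    return y - delta * z                                # GLS-demeaned series

def df_stat(yd):
    # Dickey-Fuller regression without deterministics on the detrended series
    dy, ylag = np.diff(yd), yd[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se                                     # compare with DF-GLS critical values

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=200))                     # a unit-root series
print("DF-GLS statistic:", round(df_stat(gls_demean(y, c_bar=-7.0)), 2))
```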

9.
This paper discusses regression analysis of doubly censored failure time data when there may exist a cured subgroup. By doubly censored data, we mean that the failure time of interest denotes the elapsed time between two related events and the observations on both event times can suffer censoring (Sun in The statistical analysis of interval-censored failure time data, Springer, New York, 2006). One typical example of such data is given by an acquired immune deficiency syndrome cohort study. Although many methods have been developed for their analysis (De Gruttola and Lagakos in Biometrics 45:1–12, 1989; Sun et al. in Biometrics 55:909–914, 1999; 60:637–643, 2004; Pan in Biometrics 57:1245–1250, 2001), there does not seem to be an established method for the situation with a cured subgroup. This paper discusses this latter problem and presents a sieve approximation maximum likelihood approach. In addition, the asymptotic properties of the resulting estimators are established, and an extensive simulation study indicates that the method works well in practical situations. An application is also provided.

10.
Simulation-based inference for partially observed stochastic dynamic models is currently receiving much attention because direct computation of the likelihood is not possible in many practical situations. Iterated filtering methodologies enable maximization of the likelihood function using simulation-based sequential Monte Carlo filters. Doucet et al. (2013) developed an approximation for the first and second derivatives of the log likelihood via simulation-based sequential Monte Carlo smoothing and proved that the approximation has some attractive theoretical properties. We investigated an iterated smoothing algorithm that carries out likelihood maximization using these derivative approximations. Further, we developed a new iterated smoothing algorithm, using a modification of these derivative estimates, for which we establish both theoretical results and effective practical performance. On benchmark computational challenges, this method beat the first-order iterated filtering algorithm. The method's performance was comparable to a recently developed iterated filtering algorithm based on an iterated Bayes map. Our iterated smoothing algorithm and its theoretical justification provide new directions for future developments in simulation-based inference for latent variable models such as partially observed Markov process models.

11.
Using a wavelet basis, we establish in this paper upper bounds on the \( L^{p}({\mathbb {R}}^{d}) \) risk of wavelet estimators of regression functions with strongly mixing data for \( 1\le p<\infty \). In contrast to the independent case, these upper bounds have different analytic formulae for \(p\in [1, 2]\) and \(p\in (2, +\infty )\). For \(p=2\), our result reduces to a theorem of Chaubey et al. (J Nonparametr Stat 25:53–71, 2013); and for \(d=1\) and \(p=2\), it becomes the corresponding theorem of Chaubey and Shirazi (Commun Stat Theory Methods 44:885–899, 2015).

12.
Recently, Abbasnejad et al. (Stat Probab Lett 80:1962–1971, 2010) proposed a measure of uncertainty based on the survival function, called the survival entropy of order α. They also proposed a dynamic form of the survival entropy of order α. In this paper, we derive the weighted forms of these measures. The properties of the new measures are also discussed.

13.
In this article, the concept of cumulative residual entropy (CRE) given by Rao et al. (IEEE Trans Inf Theory 50:1220–1228, 2004) is extended to a Tsallis entropy function, together with its dynamic versions, both residual and past. We study some properties and characterization results for these generalized measures. In addition, we provide some characterization results for the first-order statistic based on the Tsallis survival entropy.
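For orientation, the CRE of Rao et al. (2004) and its dynamic (residual) version are recalled below; the Tsallis-type replacement of the logarithm shown in the last line is one common choice in this literature and may differ from the article's exact definition.

```latex
\[
  \mathcal{E}(X) = -\int_{0}^{\infty} \bar F(x)\,\log \bar F(x)\,dx ,
  \qquad
  \mathcal{E}(X;t) = -\int_{t}^{\infty} \frac{\bar F(x)}{\bar F(t)}\,
      \log \frac{\bar F(x)}{\bar F(t)}\,dx ,
\]
\[
  \text{Tsallis-type analogue: replace } -u\log u \text{ by }
  \frac{u-u^{\alpha}}{\alpha-1}, \quad \alpha \neq 1 .
\]
```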

14.
The skew normal distribution of Azzalini (Scand J Stat 12:171–178, 1985) has been found suitable for unimodal densities that exhibit some skewness. In this article, we introduce a flexible extension of the Azzalini (Scand J Stat 12:171–178, 1985) skew normal distribution based on a symmetric component normal distribution (Gui et al. in J Stat Theory Appl 12(1):55–66, 2013). The proposed model can efficiently capture bimodality, skewness, kurtosis and heavy tails. The paper presents various basic properties of this family of distributions and provides two stochastic representations that are useful for obtaining theoretical properties and for simulating from the distribution. Further, maximum likelihood estimation of the parameters is studied numerically by simulation, and the distribution is investigated by carrying out comparative fits to three real datasets.
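The baseline ingredients referenced above — the Azzalini skew-normal density and its classical stochastic representation — are sketched below for orientation; this illustrates the building block only, not the proposed bimodal extension, and the parameter value is arbitrary.

```python
# The Azzalini (1985) skew-normal density and its classical stochastic representation.
import numpy as np
from scipy.stats import norm

def sn_pdf(x, lam):
    """Azzalini skew-normal density: 2*phi(x)*Phi(lam*x)."""
    return 2.0 * norm.pdf(x) * norm.cdf(lam * x)

def sn_rvs(lam, size, rng):
    """Stochastic representation: Z = delta*|U| + sqrt(1-delta^2)*V with U, V ~ N(0,1)."""
    delta = lam / np.sqrt(1.0 + lam**2)
    u, v = rng.normal(size=size), rng.normal(size=size)
    return delta * np.abs(u) + np.sqrt(1.0 - delta**2) * v

rng = np.random.default_rng(5)
z = sn_rvs(lam=4.0, size=50_000, rng=rng)
delta = 4.0 / np.sqrt(17.0)
# sanity check: E[Z] = delta*sqrt(2/pi) for the standard skew normal
print(z.mean(), delta * np.sqrt(2 / np.pi))
```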

15.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain some noise variables. Compared with k-means, FCM and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weight method provides better clustering performance in terms of the error rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of the approaches of Pal et al. [17], Wang et al. [19] and Hung et al. [9].
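The core idea — down-weighting noisy features inside the fuzzy c-means distance — can be sketched as follows. The SVD-based choice of the weights and the Kohonen competitive learning integration from the paper are not reproduced; the weights, toy data and initialization here are illustrative assumptions.

```python
# Hedged sketch: fuzzy c-means with fixed, user-supplied feature weights.
import numpy as np

def weighted_fcm(X, c=2, w=None, m=2.0, iters=100, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    w = np.ones(p) / p if w is None else np.asarray(w, float)
    V = X[rng.choice(n, size=c, replace=False)]               # init centers at data points
    for _ in range(iters):
        # weighted squared distances to the centers
        d2 = ((X[:, None, :] - V[None, :, :])**2 * w).sum(-1) + 1e-12
        # standard FCM membership update with the weighted distance
        U = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))).sum(-1)
        Um = U**m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]              # center update
    return U, V

rng = np.random.default_rng(7)
X = np.vstack([rng.normal([0, 0], 1, (100, 2)), rng.normal([4, 4], 1, (100, 2))])
X = np.column_stack([X, rng.normal(size=200)])                # add one pure-noise feature
U, V = weighted_fcm(X, c=2, w=[0.45, 0.45, 0.10], seed=7)
print(np.round(V, 2))     # centers near (0,0,.) and (4,4,.) in some order
```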

16.
The present study attempts to equip a strategy with a cost-effective computational approach when non-response is present under two-occasion sampling. We apply our computational cost strategy to the non-response setup of Choudhary et al. (J Indian Soc Agric Stat 58(3):331–343, 2004) for fixed precision and evaluate the cost. In addition, we compute the variance for a fixed cost. We discuss the aforementioned procedure for three cases: non-response present on both occasions, on the first occasion only, and on the second occasion only. A numerical illustration is provided to validate the improved cost methodology, in which we also work out the optimum unmatched or matched fraction, whereas Choudhary et al. (2004) do not provide the direct optimal result.

17.
Since the seminal paper by Cook and Weisberg [9], local influence, next to case deletion, has gained popularity as a tool to detect influential subjects and measurements for a variety of statistical models. For the linear mixed model the approach leads to easily interpretable and computationally convenient expressions, not only highlighting influential subjects, but also which aspect of their profile leads to undue influence on the model's fit [17]. Ouwens et al. [24] applied the method to the Poisson-normal generalized linear mixed model (GLMM). Given the model's nonlinear structure, these authors did not derive interpretable components but rather focused on a graphical depiction of influence. In this paper, we consider GLMMs for binary, count, and time-to-event data, with the additional feature of accommodating overdispersion whenever necessary. For each situation, three approaches are considered, based on: (1) purely numerical derivations; (2) using a closed-form expression of the marginal likelihood function; and (3) using an integral representation of this likelihood. Unlike when case deletion is used, this leads to interpretable components, allowing us not only to identify influential subjects, but also to study the cause thereof. The methodology is illustrated in case studies that range over the three data types mentioned.

18.
The aim of this letter is to acknowledge priority on calibration estimation. There are numerous studies on calibration estimation in the literature. The studies on calibration estimation are reviewed, and it is found that an existing calibration estimator is reprocessed in the recent paper published by Nidhi et al. (Commun Stat Theory Methods 46(10):4932–4942, 2017).

19.
In this article, we extend the skew-generalized normal distribution introduced by Arellano-Valle et al. (Commun Stat Theory Methods 33(7):1465–1480, 2004) to a new family, the beta skew-generalized normal (BSGN) distribution. We present some theorems and properties of the BSGN distribution and obtain its moment-generating function.
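For orientation, the skew-generalized normal density of Arellano-Valle et al. (2004) and the usual beta-generated construction are recalled below; the article's exact parameterization of the BSGN family may differ.

```latex
\[
  f_{\mathrm{SGN}}(x;\lambda_1,\lambda_2)
    = 2\,\phi(x)\,\Phi\!\left(\frac{\lambda_1 x}{\sqrt{1+\lambda_2 x^{2}}}\right),
  \qquad \lambda_2 \ge 0,
\]
\[
  f_{\mathrm{BSGN}}(x;a,b)
    = \frac{1}{B(a,b)}\,F_{\mathrm{SGN}}(x)^{a-1}
      \bigl(1-F_{\mathrm{SGN}}(x)\bigr)^{b-1} f_{\mathrm{SGN}}(x),
  \qquad a,b>0 .
\]
```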

20.
Some extensions of the Shannon entropy to the survival function have recently been proposed. Misagh et al. (in Proceedings of the IEEE International Conference on Quality and Reliability (ICQR), pp. 477–480, 2011) introduced the weighted cumulative residual entropy (WCRE), which was studied further by Mirali et al. (Commun Stat Theory Methods, doi:10.1080/03610926.2015.1053932, 2015). In this article, the dynamic version of the WCRE is proposed. Some relationships of this measure with well-known reliability measures and ageing classes are studied, and some characterization results for the exponential and Rayleigh distributions are provided. Also, a nonparametric estimator of the dynamic version of the WCRE is introduced and its asymptotic behavior is investigated.
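As a companion to the nonparametric estimation mentioned above, here is a hedged plug-in sketch for the static WCRE, taking the definition \(-\int x\,\bar F(x)\log \bar F(x)\,dx\) that is standard in this literature; the dynamic version (conditioning on \(X>t\)) is not shown, and the exponential example only serves as a sanity check.

```python
# Hedged sketch: plug-in estimator of the weighted cumulative residual entropy.
import numpy as np

def wcre_plugin(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s = (n - np.arange(1, n)) / n                 # empirical survival on [x_(i), x_(i+1))
    seg = (x[1:]**2 - x[:-1]**2) / 2.0            # integral of x over each segment
    return -(seg * s * np.log(s)).sum()

rng = np.random.default_rng(6)
sample = rng.exponential(scale=1.0, size=200_000)
# For Exp(1): -∫ x e^{-x} (-x) dx = ∫ x^2 e^{-x} dx = 2
print(wcre_plugin(sample))                        # should be close to 2
```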
