Similar Articles
20 similar articles retrieved.
1.
Grubbs’s model (Grubbs, Encycl Stat Sci 3:542–549, 1983) is used for comparing several measuring devices, and it is common to assume that the random terms have a normal (or symmetric) distribution. In this paper, we discuss the extension of this model to the class of scale mixtures of skew-normal distributions. Our results provide a useful generalization of the symmetric Grubbs’s model (Osorio et al., Comput Stat Data Anal 53:1249–1263, 2009) and the asymmetric skew-normal model (Montenegro et al., Stat Pap 51:701–715, 2010). We discuss the EM algorithm for parameter estimation and the local influence method (Cook, J R Stat Soc Ser B 48:133–169, 1986) for assessing the robustness of these parameter estimates under some usual perturbation schemes. The results and methods developed in this paper are illustrated with a numerical example.
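For orientation, the classical Grubbs model for comparing p measuring devices is usually written as below. The notation (y_{ij}, x_j, alpha_i) is illustrative rather than taken from the abstract, and the normality assumptions shown are precisely what the paper relaxes.

```latex
% Illustrative statement of Grubbs' model: y_{ij} is the measurement of item j
% by device i, x_j is the latent true value, alpha_i is the additive bias of
% device i, and e_{ij} is random measurement error.
\[
  y_{ij} = \alpha_i + x_j + e_{ij}, \qquad i = 1,\dots,p, \quad j = 1,\dots,n,
\]
\[
  x_j \sim N(\mu_x, \sigma_x^2), \qquad e_{ij} \sim N(0, \sigma_i^2),
  \quad \text{all mutually independent.}
\]
% The paper above replaces these normal laws by scale mixtures of skew-normal
% distributions and studies EM estimation and local influence in that setting.
```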

2.
In this article, one- and two-sample Bayesian prediction intervals based on progressively Type-II censored data are derived. To illustrate the developed results, the exponential, Pareto, Weibull and Burr Type-XII models are used as examples. Some previous results in the literature, such as Dunsmore (Technometrics 16:455–460, 1974), Nigm and Hamdy (Commun Stat Theory Methods 16:1761–1772, 1987), Nigm (Commun Stat Theory Methods 18:897–911, 1989), Al-Hussaini and Jaheen (Commun Stat Theory Methods 24:1829–1842, 1995), Al-Hussaini (J Stat Plan Inference 79:79–91, 1999), Ali Mousa (J Stat Comput Simul 71:163–181, 2001) and Ali Mousa and Jaheen (Stat Pap 43:587–593, 2002), can be obtained as special cases of our results. Finally, some numerical computations are presented to illustrate the proposed inferential procedures.

3.
Approximate Bayesian Computation (ABC) methods, or likelihood-free methods, have emerged over the past fifteen years as useful tools for performing Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007), for instance, and sequential methods have been proposed, among others, by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012). Recently, sequential ABC methods have appeared as an alternative to ABC-MCMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the parallel tempering algorithm (Geyer 1991). Performance is compared with existing ABC algorithms on simulations and on a real example.
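For readers new to likelihood-free inference, the following sketch shows the most basic ABC rejection sampler rather than the population-based parallel-tempering algorithm of the paper; the Gaussian toy model, the sample-mean summary statistic and the tolerance value are all illustrative assumptions.

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, distance, eps, n_draws=10_000):
    """Basic ABC rejection: keep parameter draws whose simulated data fall within
    eps of the observed data (a toy baseline, not the paper's algorithm)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()              # draw a candidate from the prior
        fake = simulator(theta)              # simulate data given the candidate
        if distance(fake, observed) <= eps:  # accept if the summaries are close enough
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a normal mean with a uniform prior.
rng = np.random.default_rng(0)
obs = rng.normal(loc=2.0, scale=1.0, size=50)
post = abc_rejection(
    observed=obs,
    prior_sampler=lambda: rng.uniform(-10, 10),
    simulator=lambda mu: rng.normal(mu, 1.0, size=50),
    distance=lambda a, b: abs(a.mean() - b.mean()),  # summary statistic: sample mean
    eps=0.1,
)
print(post.mean(), post.std())  # rough approximation to the posterior of the mean
```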

4.
The unique copula of a continuous random pair $(X,Y)$ is said to be radially symmetric if and only if it is also the copula of the pair $(-X,-Y)$. This paper revisits the recently considered issue of testing for radial symmetry. Three rank-based statistics are proposed to this end, which are asymptotically equivalent to, but simpler to compute than, those of Bouzebda and Cherfi (J Stat Plan Inference 142:1262–1271, 2012). Their limiting null distribution and its approximation using the multiplier bootstrap are discussed. The finite-sample properties of the resulting tests are assessed via simulations. The asymptotic distribution of one of the test statistics is also computed under an arbitrary alternative, thereby correcting an error in the recent work of Dehgani et al. (Stat Pap 54:271–286, 2013).

5.
In this article we propose an efficient generalized class of estimators for the finite population variance of the study variable in simple random sampling, using information on an auxiliary variable. Asymptotic expressions for the bias and mean square error of the proposed class of estimators are obtained. The asymptotically optimum estimator within the proposed class is identified, together with its mean square error formula. We show that the proposed class of estimators is more efficient than the usual unbiased estimator, the difference estimator, and the estimators of Das and Tripathi (Sankhya C 40:139–148, 1978), Isaki (J Am Stat Assoc 78:117–123, 1983), Singh et al. (Curr Sci 57:1331–1334, 1988), Upadhyaya and Singh (Vikram Math J 19:14–17, 1999b), Kadilar and Cingi (Appl Math Comput 173(2):1047–1059, 2006a), as well as other estimators and classes of estimators. An empirical study is given in support of the theoretical results.
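The abstract benchmarks the proposed class against classical auxiliary-variable estimators of the population variance; as a point of reference, here is a minimal sketch of the ratio-type estimator attributed to Isaki (1983), assuming the population variance of the auxiliary variable is known. The simulated population and its correlation structure are purely illustrative.

```python
import numpy as np

def ratio_variance_estimator(y_sample, x_sample, Sx2_pop):
    """Ratio-type estimator of the population variance of y: the sample variance
    of y scaled by the ratio of the known population variance of the auxiliary
    variable x to its sample variance."""
    sy2 = np.var(y_sample, ddof=1)   # sample variance of the study variable
    sx2 = np.var(x_sample, ddof=1)   # sample variance of the auxiliary variable
    return sy2 * (Sx2_pop / sx2)

# Illustrative use with a simulated, positively correlated (y, x) population.
rng = np.random.default_rng(1)
N, n = 10_000, 200
x_pop = rng.gamma(shape=4.0, scale=2.0, size=N)
y_pop = 3.0 * x_pop + rng.normal(0.0, 2.0, size=N)   # study variable related to x
Sx2 = np.var(x_pop, ddof=1)                          # assumed known in this setting
idx = rng.choice(N, size=n, replace=False)           # simple random sample without replacement
print(ratio_variance_estimator(y_pop[idx], x_pop[idx], Sx2), np.var(y_pop, ddof=1))
```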

6.
There are few readily-implemented tests for goodness-of-fit for the Cox proportional hazards model with time-varying covariates. Through simulations, we assess the power of tests by Cox (J R Stat Soc B (Methodol) 34(2):187–220, 1972), Grambsch and Therneau (Biometrika 81(3):515–526, 1994), and Lin et al. (Biometrics 62:803–812, 2006). Results show that power is highly variable depending on the time to violation of proportional hazards, the magnitude of the change in hazard ratio, and the direction of the change. Because these characteristics are unknown outside of simulation studies, none of the tests examined is expected to have high power in real applications. While all of these tests are theoretically interesting, they appear to be of limited practical value.

7.
A new discrete distribution depending on two parameters $\alpha >-1$ and $\sigma >0$ is obtained by discretizing the generalized normal distribution proposed in García et al. (Comput Stat Data Anal 54:2021–2034, 2010), which was derived from the normal distribution using the Marshall and Olkin (Biometrika 84(3):641–652, 1997) scheme. The particular case $\alpha =1$ leads to a discrete half-normal distribution which differs from the discrete half-normal distribution previously proposed in the statistical literature. This distribution is unimodal, overdispersed (the sample variance is greater than the sample mean) and has an increasing failure rate. We review its properties and the question of parameter estimation. Expected frequencies were calculated for two examples, one overdispersed and one underdispersed (the sample mean is greater than the sample variance), and the distribution was found to provide a very satisfactory fit.
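The discretization device mentioned above, assigning to each non-negative integer k the probability mass of the interval [k, k+1), can be illustrated in a few lines. The half-normal case below is only a convenient stand-in for the paper's generalized normal distribution, and the truncation point k_max is an arbitrary choice.

```python
import numpy as np
from scipy.stats import halfnorm

def discretize_cdf(cdf, k_max):
    """Discretize a continuous distribution on [0, inf) by the usual device
    P(X = k) = F(k + 1) - F(k), k = 0, 1, 2, ...  (mass of the interval [k, k+1))."""
    k = np.arange(k_max + 1)
    pmf = cdf(k + 1) - cdf(k)
    return k, pmf

# Discrete half-normal with scale sigma (illustrative stand-in for the paper's
# discretized generalized normal; the paper's own construction differs in detail).
sigma = 3.0
k, pmf = discretize_cdf(lambda x: halfnorm.cdf(x, scale=sigma), k_max=30)
mean = np.sum(k * pmf)
var = np.sum((k - mean) ** 2 * pmf)
print(mean, var)  # note: tail mass beyond k_max is ignored in this sketch
```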

8.
In this paper, we derive elementary M-estimates and optimally robust asymptotic linear (AL) estimates for the parameters of an Ornstein–Uhlenbeck process. Simulation and estimation of the process are already well studied, see Iacus (Simulation and inference for stochastic differential equations. Springer, New York, 2008). However, in order to protect against outliers and deviations from the ideal law, the formulation of suitable neighborhood models and a corresponding robustification of the estimators are necessary. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator, given the past. In a first step, we extend the method of M-estimation from Huber (Robust statistics. Wiley, New York, 1981). In a second step, we apply the general theory based on local asymptotic normality, AL estimates, and shrinking neighborhoods due to Kohl et al. (Stat Methods Appl 19:333–354, 2010), Rieder (Robust asymptotic statistics. Springer, New York, 1994), Rieder (2003), and Staab (1984). This leads to optimally robust ICs whose graphs exhibit surprising behavior. Finally, we discuss estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs. To this end, we carry out in our context the one-step construction dating back to LeCam (Asymptotic methods in statistical decision theory. Springer, New York, 1969) and compare it by means of simulations with the MLE and the M-estimator.
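For context, the Ornstein–Uhlenbeck process referred to above has a Gaussian transition density and can therefore be simulated exactly on a time grid, as sketched below; the mean-reverting parametrization dX_t = theta*(mu - X_t) dt + sigma dW_t and the parameter values are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def simulate_ou(x0, theta, mu, sigma, dt, n_steps, rng):
    """Exact simulation of the Ornstein-Uhlenbeck process
    dX_t = theta * (mu - X_t) dt + sigma dW_t
    using its Gaussian transition density on an equidistant grid."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-theta * dt)                              # AR(1)-type autoregression coefficient
    sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * theta))   # conditional standard deviation
    for i in range(n_steps):
        x[i + 1] = mu + (x[i] - mu) * decay + sd * rng.standard_normal()
    return x

rng = np.random.default_rng(42)
path = simulate_ou(x0=0.0, theta=1.5, mu=2.0, sigma=0.5, dt=0.01, n_steps=5_000, rng=rng)
print(path.mean(), path.var())  # roughly mu and sigma**2 / (2 * theta) after burn-in
```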

9.
In this work we prove that, for an exchangeable multivariate normal distribution, the joint distribution of a linear combination of order statistics and a linear combination of their concomitants, together with an auxiliary variable, is skew-normal. We also investigate some special cases, thus extending the results of Olkin and Viana (J Am Stat Assoc 90:1373–1379, 1995), Loperfido (Test 17:370–380, 2008a) and Sheikhi and Jamalizadeh (Stat Pap 52:885–892, 2011).

10.
Four testing procedures are considered for testing the response rate of one-sample correlated binary data with a cluster size of one or two, a setting that often occurs in otolaryngologic and ophthalmologic studies. Although an asymptotic approach is often used for statistical inference, it is criticized for unsatisfactory type I error control in small-sample settings. An alternative to the asymptotic approach is an unconditional approach. The first unconditional approach is based on estimation, also known as the parametric bootstrap (Lee and Young in Stat Probab Lett 71(2):143–153, 2005). The other two unconditional approaches considered in this article are an approach based on maximization (Basu in J Am Stat Assoc 72(358):355–366, 1977) and an approach based on estimation and maximization (Lloyd in Biometrics 64(3):716–723, 2008a). These two unconditional approaches guarantee the test size and are generally more reliable than the asymptotic approach. We compare these four approaches, in conjunction with a test proposed by Lee and Dubin (Stat Med 13(12):1241–1252, 1994) and a likelihood ratio test derived in this article, with regard to type I error rate and power for small to medium sample sizes. An example from an otolaryngologic study is provided to illustrate the various testing procedures. The unconditional approach based on estimation and maximization using the test of Lee and Dubin (Stat Med 13(12):1241–1252, 1994) is preferable due to its power advantage.
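To make the "approach based on estimation" (parametric bootstrap) concrete, the sketch below computes such a p-value for a deliberately simple two-sample binomial comparison in which the pooled success probability plays the role of the nuisance parameter; this toy replaces the paper's correlated, clustered-data setting entirely.

```python
import numpy as np

def estimation_based_pvalue(x1, n1, x2, n2, n_boot=20_000, rng=None):
    """Unconditional p-value based on estimation (parametric bootstrap) for
    H0: p1 = p2 with two independent binomial samples: the nuisance parameter
    (the common success probability) is replaced by its pooled estimate and the
    test statistic is recomputed on data sets simulated under that estimate."""
    rng = rng or np.random.default_rng()
    p_hat = (x1 + x2) / (n1 + n2)                 # estimated nuisance parameter under H0
    t_obs = abs(x1 / n1 - x2 / n2)                # observed statistic: |difference in proportions|
    sim1 = rng.binomial(n1, p_hat, size=n_boot)   # data simulated under the estimated null
    sim2 = rng.binomial(n2, p_hat, size=n_boot)
    t_sim = np.abs(sim1 / n1 - sim2 / n2)
    return np.mean(t_sim >= t_obs)                # proportion of simulated statistics at least as extreme

print(estimation_based_pvalue(x1=18, n1=40, x2=9, n2=38, rng=np.random.default_rng(7)))
```

The maximization-based approach would instead take the supremum of such a p-value over all admissible values of the nuisance parameter, which is what guarantees the test size.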

11.
A critical issue in modeling binary response data is the choice of link function. We introduce a new link based on the Student's t-distribution (t-link) for correlated binary data. The t-link generalizes the common probit-normal link by adding one parameter which controls the heaviness of the tails of the link. We propose an EM algorithm for computing the maximum likelihood estimates in generalized linear mixed t-link models for correlated binary data. In contrast with recent developments (Tan et al. in J Stat Comput Simul 77:929–943, 2007; Meza et al. in Comput Stat Data Anal 53:1350–1360, 2009), this algorithm uses closed-form expressions at the E-step, as opposed to Monte Carlo simulation. The proposed algorithm relies on available formulas for the mean and variance of a truncated multivariate t-distribution. To illustrate the new method, a real data set on respiratory infection in children and a simulation study are presented.
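As a small illustration of what a t-link does (this is not the authors' mixed-model EM algorithm), the snippet below compares the probit link with a heavier-tailed t-link evaluated on the same linear predictor; the degrees-of-freedom value nu = 4 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.stats import norm, t

# Success probability under a probit link versus a t-link for the same linear
# predictor eta = x'beta.  Smaller degrees of freedom give heavier tails, so the
# probability approaches 0 and 1 more slowly as |eta| grows.
eta = np.linspace(-4, 4, 9)
p_probit = norm.cdf(eta)        # probit link: Phi(eta)
p_tlink = t.cdf(eta, df=4)      # t-link: T_nu(eta) with nu = 4 (illustrative)
for e, p1, p2 in zip(eta, p_probit, p_tlink):
    print(f"eta={e:+.1f}  probit={p1:.3f}  t-link={p2:.3f}")
```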

12.
The cross-ratio is an important local measure that characterizes the dependence between bivariate failure times. To estimate the cross-ratio in follow-up studies where delayed entry is present, estimation procedures need to account for left truncation; ignoring left truncation yields biased estimates of the cross-ratio. We extend the method of Hu et al. (Biometrika 98:341–354, 2011) by modifying the risk sets and relevant indicators to handle left-truncated bivariate failure times, which yields a cross-ratio estimate with desirable asymptotic properties that can be established by the same techniques used in Hu et al. (Biometrika 98:341–354, 2011). Numerical studies are conducted.
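For readers unfamiliar with the quantity being estimated, the cross-ratio (Oakes' local dependence measure) is commonly defined from the joint survivor function as follows; the notation is the standard one rather than that of the paper.

```latex
% Cross-ratio of a bivariate failure time (T_1, T_2) with joint survivor S(t_1, t_2):
\[
  \theta(t_1, t_2)
  = \frac{S(t_1, t_2)\,\dfrac{\partial^2 S(t_1, t_2)}{\partial t_1 \partial t_2}}
         {\dfrac{\partial S(t_1, t_2)}{\partial t_1}\,
          \dfrac{\partial S(t_1, t_2)}{\partial t_2}},
\]
% i.e. the ratio of the conditional hazard of T_1 given T_2 = t_2 to the conditional
% hazard of T_1 given T_2 > t_2; values above 1 indicate positive local dependence,
% and theta = 1 corresponds to local independence.
```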

13.
We deal with sampling by variables with two-way protection in the case of a $N(\mu,\sigma^2)$ distributed characteristic with unknown $\sigma$. The LR sampling plan proposed by Lieberman and Resnikoff (JASA 50:457–516, 1955) and the BSK sampling plan proposed by Bruhn-Suhr and Krumbholz (Stat Pap 31:195–207, 1990) are based on the UMVU estimator and the plug-in estimator, respectively. For given $p_1$ (AQL), $p_2$ (RQL) and $\alpha, \beta$ (type I and II errors) we present an algorithm that determines the optimal LR and BSK plans having minimal sample size among all plans satisfying the corresponding two-point condition on the OC. An R (R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, http://www.R-project.org/, 2012) package, 'ExLiebeRes' (Krumbholz and Steuer, ExLiebeRes: calculating exact LR- and BSK-plans, R package version 0.9.9, http://exlieberes.r-forge.r-project.org, 2012), implementing this algorithm is provided to the public.

14.
In this paper, we discuss the extension of some diagnostic procedures to multivariate measurement error models with scale mixtures of skew-normal distributions (Lachos et al., Statistics 44:541–556, 2010c). This class provides a useful generalization of normal (and skew-normal) measurement error models since the random term distributions cover symmetric, asymmetric and heavy-tailed distributions, such as the skew-t, skew-slash and skew-contaminated normal, among others. Inspired by the EM algorithm proposed by Lachos et al. (Statistics 44:541–556, 2010c), we develop a local influence analysis for measurement error models, following Zhu and Lee's (J R Stat Soc B 63:111–126, 2001) approach. This is because the observed-data log-likelihood function associated with the proposed model is somewhat complex, and Cook's well-known approach can be very difficult to apply to obtain local influence measures. Some useful perturbation schemes are also discussed. In addition, a score test for assessing the homogeneity of the skewness parameter vector is presented. Finally, the methodology is exemplified through a real data set, illustrating the usefulness of the proposed approach.

15.
Approximate Bayesian computation (ABC) is a popular approach to addressing inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 96:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J R Soc Interface 6:187–202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.

16.
In this paper, maximum likelihood and Bayesian approaches are used to estimate $P(X<Y)$ based on a set of upper record values from the Kumaraswamy distribution. The existence and uniqueness of the maximum likelihood estimates of the Kumaraswamy distribution parameters are established. Confidence intervals, exact and approximate, as well as Bayesian credible intervals, are constructed. Bayes estimators are developed under symmetric (squared error) and asymmetric (LINEX) loss functions using conjugate and non-informative prior distributions. The approximation forms of Lindley (Trabajos de Estadistica 3:281–288, 1980) and Tierney and Kadane (J Am Stat Assoc 81:82–86, 1986) are used for the Bayesian cases. Monte Carlo simulations are performed to compare the different proposed methods.
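As background on the quantity of interest: when X and Y are independent Kumaraswamy variables that share the first shape parameter a and have second shape parameters b1 and b2, the stress-strength probability reduces to P(X < Y) = b1/(b1 + b2). This common-first-shape assumption is the usual setting for such results but is stated here only as a sanity check, which the sketch below verifies by inverse-CDF simulation; the parameter values are arbitrary.

```python
import numpy as np

def rkumaraswamy(a, b, size, rng):
    """Draw Kumaraswamy(a, b) variates by inverting the CDF F(x) = 1 - (1 - x**a)**b."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

rng = np.random.default_rng(5)
a, b1, b2 = 2.0, 3.0, 1.5          # X ~ Kum(a, b1), Y ~ Kum(a, b2), illustrative values
x = rkumaraswamy(a, b1, 200_000, rng)
y = rkumaraswamy(a, b2, 200_000, rng)
print(np.mean(x < y))              # Monte Carlo estimate of P(X < Y)
print(b1 / (b1 + b2))              # closed form when the first shape parameter is shared
```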

17.
We consider Bayesian parameter inference for partially observed stochastic processes that start from a set $B_0$ and are stopped or killed at the first hitting time of a known set $A$. Such processes occur naturally within the context of a wide variety of applications. The associated posterior distributions are highly complex, and posterior parameter inference requires the use of advanced Markov chain Monte Carlo (MCMC) techniques. Our approach uses a recently introduced simulation methodology, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al. 2010), where sequential Monte Carlo (SMC) (Doucet et al. 2001; Liu 2001) approximations are embedded within MCMC. However, when the parameter of interest is fixed, standard SMC algorithms are not always appropriate for many stopped processes. In Chen et al. (2005) and Del Moral (2004), the authors introduce SMC approximations of multi-level Feynman-Kac formulae, which can lead to more efficient algorithms. This is achieved by devising a sequence of sets from $B_0$ to $A$ and then performing the resampling step only when the samples of the process reach the intermediate sets in the sequence. The choice of the intermediate sets is critical to the performance of such a scheme. In this paper, we demonstrate that multi-level SMC algorithms can be used as a proposal in PMCMC. In addition, we introduce a flexible strategy that adapts the sets for different parameter proposals. Our methodology is illustrated on the coalescent model with migration.

18.
Denecke and Müller (CSDA 55:2724–2738, 2011) presented an estimator for the correlation coefficient based on likelihood depth for the Gaussian copula, and Denecke and Müller (J Stat Plan Inference 142:2501–2517, 2012) proved a theorem about the consistency of general estimators based on data depth using uniform convergence of the depth measure. In this article, the uniform convergence of the depth measure for correlation is shown, so that consistency of the depth-based correlation estimator can be concluded. The uniform convergence is shown with the help of the extension of the Glivenko-Cantelli lemma to Vapnik-Chervonenkis classes.

19.
Among the many tools suited to detect local clusters in group-level data, Kulldorff–Nagarwalla's spatial scan statistic gained wide popularity (Kulldorff and Nagarwalla in Stat Med 14(8):799–810, 1995). The underlying assumptions needed for making statistical inference feasible are quite strong, as counts in spatial units are assumed to be independent Poisson distributed random variables. Unfortunately, outcomes in spatial units are often not independent of each other, and risk estimates of areas that are close to each other will tend to be positively correlated as they share a number of spatially varying characteristics. We therefore introduce a Bayesian model-based algorithm for cluster detection in the presence of spatially autocorrelated relative risks. Our approach has been made possible by the recent development of new numerical methods based on integrated nested Laplace approximation, by which we can directly compute very accurate approximations of posterior marginals within short computational time (Rue et al. in JRSS B 71(2):319–392, 2009). Simulated data and a case study show that the performance of our method is at least comparable to that of Kulldorff–Nagarwalla's statistic.

20.
We study the properties of the so-called log-beta Weibull distribution, defined as the distribution of the logarithm of a beta Weibull random variable (Famoye et al. in J Stat Theory Appl 4:121–136, 2005; Lee et al. in J Mod Appl Stat Methods 6:173–186, 2007). An advantage of the new distribution is that it includes as special sub-models classical distributions reported in the lifetime literature. We obtain formal expressions for the moments, moment generating function, quantile function and mean deviations. We construct a regression model based on the new distribution to predict recurrence of prostate cancer for patients with clinically localized prostate cancer treated by open radical prostatectomy. It can be applied to censored data since it represents a parametric family of models that includes as special sub-models several widely known regression models. The regression model was fitted to a data set of 1,324 eligible prostate cancer patients. We can predict the recurrence-free probability after radical prostatectomy in terms of highly significant clinical and pathological explanatory variables associated with recurrence of the disease. The predicted probabilities of remaining free of cancer progression are calculated under two nested models.
