Similar articles
20 similar articles found (search time: 451 ms)
1.
The cross-ratio is an important local measure that characterizes the dependence between bivariate failure times. To estimate the cross-ratio in follow-up studies where delayed entry is present, estimation procedures need to account for left truncation; ignoring left truncation yields biased estimates of the cross-ratio. We extend the method of Hu et al. (Biometrika 98:341–354, 2011) by modifying the risk sets and relevant indicators to handle left-truncated bivariate failure times. The resulting cross-ratio estimate has desirable asymptotic properties, which can be established by the same techniques used in Hu et al. (2011). Numerical studies are conducted.

2.
Approximate Bayesian computation (ABC) is a popular approach to inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 96:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J. Roy. Soc. Interface 6:187–202, 2009) and require careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
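To fix ideas, the likelihood-free principle underlying all of these methods can be sketched with the simplest ABC variant, plain rejection sampling (this is an illustrative toy, not the adaptive SMC algorithm of the abstract; all function names here are our own):

```python
import numpy as np

def abc_rejection(observed_stat, prior_sample, simulate, stat, eps, n_draws, rng):
    """Basic ABC rejection: keep parameter draws whose simulated summary
    statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        x = simulate(theta, rng)
        if abs(stat(x) - observed_stat) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal with known sd = 1.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(
    observed_stat=data.mean(),
    prior_sample=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda th, r: r.normal(th, 1.0, size=100),
    stat=np.mean,
    eps=0.1,
    n_draws=20000,
    rng=rng,
)
print(len(post), post.mean())
```

The quadratic-versus-linear complexity discussed in the abstract concerns how SMC variants reweight and resample such accepted particles across a sequence of shrinking tolerances eps.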

3.
Approximate Bayesian Computation (ABC) methods, or likelihood-free methods, have emerged over the past fifteen years as useful tools for Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods were developed by Marjoram et al. (2003) and Bortot et al. (2007), for instance, and sequential methods have been proposed by, among others, Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012). Recently, sequential ABC methods have appeared as an alternative to ABC-MCMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the parallel tempering algorithm (Geyer 1991). Performance is compared with existing ABC algorithms on simulations and on a real example.

4.
There are few readily implemented goodness-of-fit tests for the Cox proportional hazards model with time-varying covariates. Through simulations, we assess the power of tests by Cox (J R Stat Soc B 34(2):187–220, 1972), Grambsch and Therneau (Biometrika 81(3):515–526, 1994), and Lin et al. (Biometrics 62:803–812, 2006). Results show that power is highly variable, depending on the time to violation of proportional hazards, the magnitude of the change in hazard ratio, and the direction of the change. Because these characteristics are unknown outside of simulation studies, none of the tests examined is expected to have high power in real applications. While all of these tests are theoretically interesting, they appear to be of limited practical value.

5.
Online (also ‘real-time’ or ‘sequential’) signal extraction from noisy and outlier-interfered data streams is a basic but challenging goal. Fitting a robust Repeated Median (Siegel in Biometrika 69:242–244, 1982) regression line in a moving time window has turned out to be a promising approach (Davies et al. in J. Stat. Plan. Inference 122:65–78, 2004; Gather et al. in Comput. Stat. 21:33–51, 2006; Schettlinger et al. in Biomed. Eng. 51:49–56, 2006). The level of the regression line at the rightmost window position, which equates to the current time point in an online application, is then used for signal extraction. However, the choice of the window width has a large impact on the signal extraction, and it is impossible to predetermine an optimal fixed window width for data streams which exhibit signal changes like level shifts and sudden trend changes. We therefore propose a robust test procedure for the online detection of such signal changes. An algorithm including the test allows for online window width adaption, meaning that the window width is chosen w.r.t. the current data situation at each time point. Comparison studies show that our new procedure outperforms an existing Repeated Median filter with automatic window width selection (Schettlinger et al. in Int. J. Adapt. Control Signal Process. 24:346–362, 2010).
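For illustration, here is a minimal sketch of Siegel's Repeated Median fit and its use for windowed online signal extraction. This uses a plain fixed window; the adaptive width selection and the change-detection test of the abstract are not implemented, and the function names are ours:

```python
import numpy as np

def repeated_median_line(x, y):
    """Siegel's Repeated Median regression: the slope is the median over i
    of the median over j != i of the pairwise slopes (x values must be distinct)."""
    n = len(x)
    inner = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        inner[i] = np.median((y[mask] - y[i]) / (x[mask] - x[i]))
    slope = np.median(inner)
    intercept = np.median(y - slope * x)
    return slope, intercept

def online_signal(y, width):
    """Signal level at each time point = fitted line evaluated at the
    rightmost position of a trailing window of fixed width."""
    sig = np.full(len(y), np.nan)
    for t in range(width - 1, len(y)):
        x = np.arange(width, dtype=float)
        slope, intercept = repeated_median_line(x, y[t - width + 1 : t + 1])
        sig[t] = intercept + slope * (width - 1)
    return sig

# Noisy trend with a single large outlier: the extracted level is barely affected.
rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)
y = 0.5 * t + rng.normal(0.0, 0.1, 60)
y[30] += 10.0
sig = online_signal(y, width=15)
print(sig[-1])
```

The Repeated Median has a breakdown point of roughly 50% per window, which is what makes the moving-window level estimate robust to outlier patches.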

6.
In this work we prove that, for an exchangeable multivariate normal distribution, the joint distribution of a linear combination of order statistics and a linear combination of their concomitants, together with an auxiliary variable, is skew normal. We also investigate some special cases, thus extending the results of Olkin and Viana (J Am Stat Assoc 90:1373–1379, 1995), Loperfido (Test 17:370–380, 2008a) and Sheikhi and Jamalizadeh (Stat Pap 52:885–892, 2011).

7.
We deal with sampling by variables with two-way protection in the case of an N(μ, σ²)-distributed characteristic with unknown σ. The LR sampling plan proposed by Lieberman and Resnikoff (JASA 50:457–516, 1955) and the BSK sampling plan proposed by Bruhn-Suhr and Krumbholz (Stat Papers 31:195–207, 1990) are based on the UMVU and the plug-in estimator, respectively. For given p1 (AQL), p2 (RQL) and α, β (type I and II errors) we present an algorithm to determine the optimal LR and BSK plans having minimal sample size among all plans satisfying the corresponding two-point condition on the OC. An R (R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, http://www.R-project.org/, 2012) package, 'ExLiebeRes' (Krumbholz and Steuer, ExLiebeRes: calculating exact LR- and BSK-plans, R package version 0.9.9, http://exlieberes.r-forge.r-project.org, 2012), implementing that algorithm is provided to the public.

8.
A new discrete distribution depending on two parameters α > −1 and σ > 0 is obtained by discretizing the generalized normal distribution proposed in García et al. (Comput Stat Data Anal 54:2021–2034, 2010), which was derived from the normal distribution using the Marshall and Olkin (Biometrika 84(3):641–652, 1997) scheme. The particular case α = 1 leads to a discrete half-normal distribution which differs from the discrete half-normal distribution proposed previously in the statistical literature. This distribution is unimodal, overdispersed (the sample variance exceeds the sample mean) and has an increasing failure rate. We review its properties and discuss parameter estimation. Expected frequencies were calculated for two examples, one overdispersed and one underdispersed (the sample mean exceeds the sample variance), and the distribution was found to provide a very satisfactory fit.
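A common way to discretize a continuous lifetime distribution is to place on each integer k the probability mass S(k) − S(k+1), where S is the continuous survival function. A sketch for the half-normal case is below; this is one standard discretization scheme chosen for illustration and is not necessarily the exact construction of the cited paper:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def discrete_half_normal_pmf(k, sigma):
    """P(X = k) = S(k) - S(k+1), with S the survival function of the
    continuous half-normal: S(x) = 2 * (1 - Phi(x / sigma)) for x >= 0."""
    S = lambda x: 2.0 * (1.0 - phi(x / sigma))
    return S(k) - S(k + 1)

pmf = [discrete_half_normal_pmf(k, 2.0) for k in range(50)]
mean = sum(k * p for k, p in enumerate(pmf))
var = sum(k * k * p for k, p in enumerate(pmf)) - mean ** 2
print(round(sum(pmf), 6), round(mean, 3), round(var, 3))
```

Because the half-normal density is decreasing on [0, ∞), the resulting probability mass function is unimodal with mode at zero.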

9.
A new methodology for model determination in decomposable graphical Gaussian models (Dawid and Lauritzen, Ann. Stat. 21(3):1272–1317, 1993) is developed. The Bayesian paradigm is used and, for each given graph, a hyper-inverse Wishart prior distribution on the covariance matrix is considered. This prior distribution depends on hyper-parameters. It is well known that the model's posterior distribution is sensitive to the specification of these hyper-parameters, and no completely satisfactory method of choosing them is available. To avoid this problem, we suggest adopting an empirical Bayes strategy, in which the values of the hyper-parameters are determined from the data; typically, the hyper-parameters are fixed at their maximum likelihood estimates. To calculate these maximum likelihood estimates, we propose a Markov chain Monte Carlo version of the Stochastic Approximation EM algorithm. Moreover, we introduce a new sampling scheme in the space of graphs that improves the add-and-delete proposal of Armstrong et al. (Stat. Comput. 19(3):303–316, 2009). We illustrate the efficiency of this new scheme on simulated and real datasets.

10.
In this article, one- and two-sample Bayesian prediction intervals based on progressively Type-II censored data are derived. For the illustration of the developed results, the exponential, Pareto, Weibull and Burr Type-XII models are used as examples. Some of the previous results in the literature such as Dunsmore (Technometrics 16:455–460, 1974), Nigm and Hamdy (Commun Stat Theory Methods 16:1761–1772, 1987), Nigm (Commun Stat Theory Methods 18:897–911, 1989), Al-Hussaini and Jaheen (Commun Stat Theory Methods 24:1829–1842, 1995), Al-Hussaini (J Stat Plan Inference 79:79–91, 1999), Ali Mousa (J Stat Comput Simul 71:163–181, 2001) and Ali Mousa and Jaheen (Stat Pap 43:587–593, 2002) can be achieved as special cases of our results. Finally, some numerical computations are presented for illustrating all the proposed inferential procedures.

11.
We study some mathematical properties of the Marshall–Olkin extended Weibull distribution introduced by Marshall and Olkin (Biometrika 84:641–652, 1997). We provide explicit expressions for the moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. We determine the moments of the order statistics. We also discuss the estimation of the model parameters by maximum likelihood and obtain the observed information matrix. We provide an application to real data which illustrates the usefulness of the model.
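The Marshall–Olkin construction tilts a baseline survival function F̄ by a parameter α > 0: Ḡ(x) = α F̄(x) / (1 − (1 − α) F̄(x)), which reduces to the baseline when α = 1. A short sketch with a Weibull baseline follows; the (scale, shape) parameterisation lam, k is our illustrative choice:

```python
import math

def mo_weibull_sf(x, alpha, lam, k):
    """Marshall-Olkin extended Weibull survival function:
    G_bar(x) = alpha * F_bar(x) / (1 - (1 - alpha) * F_bar(x)),
    with Weibull baseline F_bar(x) = exp(-(x / lam) ** k)."""
    fbar = math.exp(-((x / lam) ** k))
    return alpha * fbar / (1.0 - (1.0 - alpha) * fbar)

def mo_weibull_quantile(u, alpha, lam, k):
    """Quantile function, i.e. inverse of G = 1 - G_bar, in closed form:
    solve G_bar = (1 - u) for F_bar, then invert the Weibull baseline."""
    gbar = 1.0 - u
    fbar = gbar / (alpha + (1.0 - alpha) * gbar)
    return lam * (-math.log(fbar)) ** (1.0 / k)
```

The closed-form quantile function is what makes random variate generation by inversion straightforward for this family.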

12.
Grubbs’s model (Grubbs, Encycl Stat Sci 3:42–549, 1983) is used for comparing several measuring devices, and it is common to assume that the random terms have a normal (or symmetric) distribution. In this paper, we discuss the extension of this model to the class of scale mixtures of skew-normal distributions. Our results provide a useful generalization of the symmetric Grubbs’s model (Osorio et al., Comput Stat Data Anal 53:1249–1263, 2009) and the asymmetric skew-normal model (Montenegro et al., Stat Pap 51:701–715, 2010). We discuss the EM algorithm for parameter estimation and the local influence method (Cook, J Royal Stat Soc Ser B 48:133–169, 1986) for assessing the robustness of these parameter estimates under some usual perturbation schemes. The results and methods developed in this paper are illustrated with a numerical example.

13.
Parametric and permutation testing for multivariate monotonic alternatives
We are first interested in testing the homogeneity of k mean vectors against two-sided restricted alternatives separately in multivariate normal distributions. This problem is a multivariate extension of Bartholomew (Biometrika 46:328–335, 1959b) and extends Sasabuchi et al. (Biometrika 70:465–472, 1983) and Kulatunga and Sasabuchi (Mem. Fac. Sci., Kyushu Univ. Ser. A: Mathematics 38:151–161, 1984) to two-sided ordered hypotheses. We examine the testing problem in two separate cases: covariance matrices known, and covariance matrices unknown but common. For the general case of known covariance matrices the test statistic is obtained using the likelihood ratio method. When the known covariance matrices are common and diagonal, the null distribution of the test statistic is derived and its critical values are computed at different significance levels. A Monte Carlo study is also presented to estimate the power of the test. A test statistic is proposed for the case where the common covariance matrix is unknown. Since it is difficult to compute the exact p-value with the classical method when the covariance matrices are completely unknown, we first present a reformulation of the test statistic based on orthogonal projections onto closed convex cones and then determine upper bounds for its p-values. We also provide a general nonparametric solution based on the permutation approach and the nonparametric combination of dependent tests.

14.
In this article we propose an efficient generalized class of estimators for the finite population variance of the study variable in simple random sampling, using information on an auxiliary variable. Asymptotic expressions for the bias and mean square error of the proposed class of estimators are obtained, and the asymptotically optimum estimator in the class is identified along with its mean square error formula. We show that the proposed class of estimators is more efficient than the usual unbiased and difference estimators and those of Das and Tripathi (Sankhya C 40:139–148, 1978), Isaki (J. Am. Stat. Assoc. 78:117–123, 1983), Singh et al. (Curr. Sci. 57:1331–1334, 1988), Upadhyaya and Singh (Vikram Math. J. 19:14–17, 1999b), Kadilar and Cingi (Appl. Math. Comput. 173(2):1047–1059, 2006a), among other estimators and classes of estimators. In support of the theoretical results, an empirical study is given.
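The basic idea behind such auxiliary-variable estimators can be illustrated with the simplest member of this family, a ratio-type adjustment of the sample variance by a known population variance of the auxiliary variable (in the spirit of Isaki 1983). The data-generating setup below is entirely hypothetical and only illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population with a correlated auxiliary variable x.
N, n = 5000, 100
x = rng.gamma(4.0, 2.0, size=N)
y = 3.0 * x + rng.normal(0.0, 2.0, size=N)
Sx2 = x.var(ddof=1)   # population variance of x, assumed known
Sy2 = y.var(ddof=1)   # target: population variance of the study variable

# Simple random sample without replacement.
idx = rng.choice(N, size=n, replace=False)
sy2 = y[idx].var(ddof=1)
sx2 = x[idx].var(ddof=1)

usual = sy2                  # usual unbiased estimator
ratio = sy2 * Sx2 / sx2      # ratio-type estimator exploiting the auxiliary info
print(Sy2, usual, ratio)
```

When the sample under- or over-estimates the variance of x, the factor Sx2/sx2 corrects the estimate of the variance of y in the same direction, which is what produces the efficiency gain when x and y are strongly related.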

15.
In this paper, we derive elementary M- and optimally robust asymptotic linear (AL) estimates for the parameters of an Ornstein–Uhlenbeck process. Simulation and estimation of the process are already well studied, see Iacus (Simulation and inference for stochastic differential equations. Springer, New York, 2008). However, in order to protect against outliers and deviations from the ideal law, the formulation of suitable neighborhood models and a corresponding robustification of the estimators are necessary. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator given the past. In a first step, we extend the method of M-estimation from Huber (Robust statistics. Wiley, New York, 1981). In a second step, we apply the general theory based on local asymptotic normality, AL estimates, and shrinking neighborhoods due to Kohl et al. (Stat Methods Appl 19:333–354, 2010), Rieder (Robust asymptotic statistics. Springer, New York, 1994), Rieder (2003), and Staab (1984). This leads to optimally robust ICs whose graph exhibits surprising behavior. In the end, we discuss the estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs. We therefore carry out, in our context, the one-step construction dating back to LeCam (Asymptotic methods in statistical decision theory. Springer, New York, 1969) and compare it by means of simulations with the MLE and the M-estimator.

16.
This article deals with a new profile empirical-likelihood inference for a class of frequently used single-index-coefficient regression models (SICRM), which were proposed by Xia and Li (J. Am. Stat. Assoc. 94:1275–1285, 1999a). Applying the empirical likelihood method (Owen in Biometrika 75:237–249, 1988), a new estimated empirical log-likelihood ratio statistic for the index parameter of the SICRM is proposed. To increase the accuracy of the confidence region, a new profile empirical likelihood for each component of the relevant parameter is obtained by using maximum empirical likelihood estimators (MELE) based on a new and simple estimating equation for the parameters in the SICRM. Hence, the empirical likelihood confidence interval for each component is investigated. Furthermore, corrected empirical likelihoods for functional components are also considered. The resulting statistics are shown to be asymptotically standard chi-squared distributed. Simulation studies are undertaken to assess the finite sample performance of our method. A study of real data is also reported.

17.
Denecke and Müller (CSDA 55:2724–2738, 2011) presented an estimator for the correlation coefficient based on likelihood depth for the Gaussian copula, and Denecke and Müller (J Stat Plan Inference 142:2501–2517, 2012) proved a theorem about the consistency of general estimators based on data depth using uniform convergence of the depth measure. In this article, the uniform convergence of the depth measure for correlation is shown, so that consistency of the depth-based correlation estimator can be concluded. The uniform convergence is shown with the help of the extension of the Glivenko–Cantelli lemma via Vapnik–Chervonenkis classes.

18.
A complete convergence result is obtained for weighted sums of identically distributed ρ*-mixing random variables with E|X_1|^α log(1 + |X_1|) < ∞ for some 0 < α ≤ 2. This result partially extends the result of Sung (Stat Papers 52:447–454, 2011) for negatively associated random variables to ρ*-mixing random variables. It also settles the open problem posed by Zhou et al. (J Inequal Appl, 2011, doi:10.1155/2011/157816).

19.
Let {X_n, n ≥ 1} be a sequence of pairwise negatively quadrant dependent (NQD) random variables. In this study, we prove almost sure limit theorems for weighted sums of such random variables. From these results, we obtain a version of the Glivenko–Cantelli lemma for pairwise NQD random variables under some weak conditions. Moreover, a simulation study is conducted to compare the convergence rates with those of Azarnoosh (Pak J Statist 19(1):15–23, 2003) and Li et al. (Bull Inst Math 1:281–305, 2006).

20.
Four testing procedures are considered for testing the response rate of one-sample correlated binary data with a cluster size of one or two, a setting that often occurs in otolaryngologic and ophthalmologic studies. Although an asymptotic approach is often used for statistical inference, it is criticized for unsatisfactory type I error control in small-sample settings. An alternative is an unconditional approach. The first unconditional approach is based on estimation, also known as the parametric bootstrap (Lee and Young in Stat Probab Lett 71(2):143–153, 2005). The other two unconditional approaches considered in this article are an approach based on maximization (Basu in J Am Stat Assoc 72(358):355–366, 1977) and an approach based on estimation and maximization (Lloyd in Biometrics 64(3):716–723, 2008a). These two unconditional approaches guarantee the test size and are generally more reliable than the asymptotic approach. We compare the four approaches, in conjunction with a test proposed by Lee and Dubin (Stat Med 13(12):1241–1252, 1994) and a likelihood ratio test derived in this article, with regard to type I error rate and power for small to medium sample sizes. An example from an otolaryngologic study illustrates the various testing procedures. The unconditional approach based on estimation and maximization using the test of Lee and Dubin (1994) is preferable due to its power advantage.
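As a toy illustration of the parametric-bootstrap ("estimation") idea in its simplest i.i.d. binomial form, one can simulate the test statistic under the null to obtain a Monte Carlo p-value. This is not the clustered-data tests compared in the abstract, and the function names are our own:

```python
import numpy as np

def bootstrap_pvalue(successes, n, p0, n_boot, rng):
    """Monte Carlo p-value for H0: p = p0 against p > p0, using the
    binomial success count as the test statistic: simulate the count
    under the null and see how often it is at least as extreme."""
    sims = rng.binomial(n, p0, size=n_boot)
    return float(np.mean(sims >= successes))

rng = np.random.default_rng(0)
p = bootstrap_pvalue(successes=18, n=40, p0=0.3, n_boot=100000, rng=rng)
print(round(p, 4))
```

In the actual estimation approach a nuisance parameter would be replaced by its estimate before simulating; the maximization approaches instead take the worst case over the nuisance parameter, which is what guarantees the test size.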


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号