Similar Literature
20 matching records found
1.
In this paper, we derive elementary M-estimates and optimally robust asymptotic linear (AL) estimates for the parameters of an Ornstein–Uhlenbeck process. Simulation and estimation of the process are already well studied, see Iacus (Simulation and inference for stochastic differential equations. Springer, New York, 2008). However, to protect against outliers and deviations from the ideal law, suitable neighborhood models must be formulated and the estimators robustified accordingly. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator given the past. In a first step, we extend the method of M-estimation from Huber (Robust statistics. Wiley, New York, 1981). In a second step, we apply the general theory based on local asymptotic normality, AL estimates, and shrinking neighborhoods due to Kohl et al. (Stat Methods Appl 19:333–354, 2010), Rieder (Robust asymptotic statistics. Springer, New York, 1994), Rieder (2003), and Staab (1984). This leads to optimally robust ICs whose graphs exhibit surprising behavior. Finally, we discuss estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs. To this end we carry out, in our context, the one-step construction dating back to LeCam (Asymptotic methods in statistical decision theory. Springer, New York, 1969) and compare it by means of simulations with the MLE and the M-estimator.
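A minimal illustrative sketch, not the paper's optimally robust AL estimator: the Ornstein–Uhlenbeck process is simulated via its exact AR(1) discretization, and the autoregression coefficient is Huber M-estimated by iteratively reweighted least squares. All parameter values are assumptions for the demonstration.

```python
# Sketch: exact AR(1) discretization of an OU process, then a plain Huber
# M-estimate of the autoregression coefficient a = exp(-theta * dt).
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 1.0, 0.0, 0.5, 0.1, 1000

# X_{t+1} = mu + a*(X_t - mu) + eps,  eps ~ N(0, sigma^2 (1 - a^2) / (2 theta))
a_true = np.exp(-theta * dt)
s_eps = sigma * np.sqrt((1 - a_true**2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + a_true * (x[t] - mu) + s_eps * rng.normal()
x[rng.choice(n, 10, replace=False)] += 3.0   # inject a few outliers

def huber_ar1(x, k=1.345, iters=50):
    """Huber M-estimate of the AR(1) slope (mean assumed known, zero here)."""
    y, z = x[1:], x[:-1]
    a = np.sum(y * z) / np.sum(z * z)            # least-squares start
    for _ in range(iters):
        r = y - a * z
        s = np.median(np.abs(r)) / 0.6745         # robust residual scale
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        a = np.sum(w * y * z) / np.sum(w * z * z)  # reweighted LS step
    return a

a_hat = huber_ar1(x - mu)
print(f"theta_hat = {-np.log(a_hat) / dt:.3f}  (true {theta})")
```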

2.
In this article, one- and two-sample Bayesian prediction intervals based on progressively Type-II censored data are derived. To illustrate the developed results, the exponential, Pareto, Weibull and Burr Type-XII models are used as examples. Several earlier results in the literature, such as Dunsmore (Technometrics 16:455–460, 1974), Nigm and Hamdy (Commun Stat Theory Methods 16:1761–1772, 1987), Nigm (Commun Stat Theory Methods 18:897–911, 1989), Al-Hussaini and Jaheen (Commun Stat Theory Methods 24:1829–1842, 1995), Al-Hussaini (J Stat Plan Inference 79:79–91, 1999), Ali Mousa (J Stat Comput Simul 71:163–181, 2001) and Ali Mousa and Jaheen (Stat Pap 43:587–593, 2002), can be obtained as special cases of our results. Finally, some numerical computations are presented to illustrate all the proposed inferential procedures.
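A minimal sketch of the one-sample exponential case under assumed Gamma(a, b) prior hyperparameters: with progressive Type-II censoring the posterior of the rate is Gamma(a + m, b + T), T the total time on test, and the posterior predictive survival of a new observation inverts in closed form.

```python
# Sketch: equal-tail Bayesian prediction interval for a future exponential
# observation, given a progressively Type-II censored sample x with removal
# counts R.  Predictive survival: P(Y > y | data) = ((b+T)/(b+T+y))^(a+m).
import numpy as np

def exp_prediction_interval(x, R, a=1.0, b=1.0, level=0.95):
    x, R = np.asarray(x, float), np.asarray(R, int)
    m = len(x)
    T = np.sum((1 + R) * x)          # total time on test
    shape, scale = a + m, b + T
    g = (1 - level) / 2
    # invert the predictive survival function at 1 - g and g
    lower = scale * ((1 - g) ** (-1 / shape) - 1)
    upper = scale * (g ** (-1 / shape) - 1)
    return lower, upper

# toy censored sample: 5 observed failures, R_i units removed at each failure
x = [0.3, 0.7, 1.1, 1.8, 2.5]
R = [2, 0, 1, 0, 3]
print(exp_prediction_interval(x, R))
```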

3.
Approximate Bayesian Computation (ABC) methods, or likelihood-free methods, have emerged over the past fifteen years as useful tools for performing Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007), for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012); such sequential ABC methods have appeared as alternatives to ABC-MCMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the parallel tempering algorithm (Geyer 1991). Its performance is compared with existing ABC algorithms on simulations and on a real example.
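For orientation, a sketch of the basic ABC rejection step that all of these algorithms refine; the paper's population-based MCMC moves and tempering ladder on the tolerance are not reproduced, and the toy model and tolerance are assumptions.

```python
# Sketch: ABC rejection sampling for the mean of a normal model, matching
# summary statistics within a tolerance eps.
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=50)            # "observed" data
s_obs = np.array([y_obs.mean(), y_obs.std()])    # summary statistics

def abc_rejection(n_keep=500, eps=0.2):
    accepted = []
    while len(accepted) < n_keep:
        theta = rng.uniform(-5, 5)               # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=50)  # simulate from the model
        s_sim = np.array([y_sim.mean(), y_sim.std()])
        if np.linalg.norm(s_sim - s_obs) < eps:  # keep if summaries close
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(f"ABC posterior mean ~ {post.mean():.2f}")
```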

4.
In this paper, maximum likelihood and Bayesian approaches are used to estimate \(P(X<Y)\) based on a set of upper record values from the Kumaraswamy distribution. The existence and uniqueness of the maximum likelihood estimates of the Kumaraswamy distribution parameters are established. Confidence intervals, exact and approximate, as well as Bayesian credible intervals, are constructed. Bayes estimators are developed under symmetric (squared error) and asymmetric (LINEX) loss functions using conjugate and noninformative prior distributions. The approximations of Lindley (Trabajos de Estadistica 3:281–288, 1980) and Tierney and Kadane (J Am Stat Assoc 81:82–86, 1986) are used for the Bayesian cases. Monte Carlo simulations are performed to compare the different proposed methods.
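A quick Monte Carlo sketch of the target quantity, not the paper's record-value estimators: Kumaraswamy variables are drawn by inverse-CDF sampling, and when X ~ Kum(a, b1) and Y ~ Kum(a, b2) share the first shape parameter, P(X < Y) = b1/(b1 + b2) in closed form, which gives a convenient check.

```python
# Sketch: simulate P(X < Y) for two Kumaraswamy variables and compare with
# the closed form available when the first shape parameter is shared.
import numpy as np

rng = np.random.default_rng(2)

def rkumaraswamy(n, a, b):
    # CDF: F(x) = 1 - (1 - x^a)^b, so the quantile function is:
    u = rng.uniform(size=n)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

a, b1, b2, n = 2.0, 3.0, 1.5, 200_000
x = rkumaraswamy(n, a, b1)
y = rkumaraswamy(n, a, b2)
print(f"Monte Carlo: {np.mean(x < y):.4f},  closed form: {b1 / (b1 + b2):.4f}")
```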

5.
In this work we prove that, for an exchangeable multivariate normal distribution, the joint distribution of a linear combination of order statistics and a linear combination of their concomitants, together with an auxiliary variable, is skew normal. We also investigate some special cases, thus extending the results of Olkin and Viana (J Am Stat Assoc 90:1373–1379, 1995), Loperfido (Test 17:370–380, 2008a) and Sheikhi and Jamalizadeh (Stat Pap 52:885–892, 2011).

6.
A new discrete distribution depending on two parameters $\alpha > -1$ and $\sigma > 0$ is obtained by discretizing the generalized normal distribution proposed in García et al. (Comput Stat Data Anal 54:2021–2034, 2010), which was derived from the normal distribution by the Marshall and Olkin (Biometrika 84(3):641–652, 1997) scheme. The particular case $\alpha = 1$ leads to a discrete half-normal distribution which differs from the discrete half-normal distributions previously proposed in the statistical literature. This distribution is unimodal, overdispersed (the variance exceeds the mean) and has an increasing failure rate. We review its properties and address parameter estimation. Expected frequencies were calculated for two examples, one overdispersed and one underdispersed (the mean exceeds the variance), and the distribution was found to provide a very satisfactory fit.
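A minimal sketch of the discretization scheme itself, P(Y = k) = S(k) - S(k+1) with S the parent survival function; the plain half-normal stands in for the $\alpha = 1$ special case, and the mean-versus-variance check illustrates the dispersion behaviour numerically.

```python
# Sketch: discretize a continuous nonnegative distribution via its survival
# function and check mean vs. variance (variance > mean = overdispersion).
import numpy as np
from scipy.stats import halfnorm

def discretize(survival, kmax):
    k = np.arange(kmax + 1)
    p = survival(k) - survival(k + 1)
    return k, p / p.sum()              # renormalize for the tail cutoff

k, p = discretize(lambda t: halfnorm.sf(t, scale=3.0), kmax=50)
mean = np.sum(k * p)
var = np.sum(k**2 * p) - mean**2
print(f"mean = {mean:.3f}, variance = {var:.3f}")
```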

7.
Four testing procedures are considered for testing the response rate of one-sample correlated binary data with a cluster size of one or two, which often occurs in otolaryngologic and ophthalmologic studies. Although an asymptotic approach is often used for statistical inference, it is criticized for unsatisfactory type I error control in small-sample settings. An alternative is an unconditional approach. The first unconditional approach is based on estimation, also known as parametric bootstrap (Lee and Young in Stat Probab Lett 71(2):143–153, 2005). The other two unconditional approaches considered in this article are an approach based on maximization (Basu in J Am Stat Assoc 72(358):355–366, 1977) and an approach based on estimation and maximization (Lloyd in Biometrics 64(3):716–723, 2008a). These two unconditional approaches guarantee the test size and are generally more reliable than the asymptotic approach. We compare these four approaches, in conjunction with a test proposed by Lee and Dubin (Stat Med 13(12):1241–1252, 1994) and a likelihood ratio test derived in this article, with regard to type I error rate and power for sample sizes from small to medium. An example from an otolaryngologic study illustrates the various testing procedures. The unconditional approach based on estimation and maximization using the test of Lee and Dubin (Stat Med 13(12):1241–1252, 1994) is preferable due to its power advantage.
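A minimal sketch of the "estimation" (parametric bootstrap) idea in a deliberately simplified setting: the article's correlated-binary statistics are replaced here by a plain binomial score, and the null value and sample size are assumptions.

```python
# Sketch: parametric bootstrap p-value. Simulate B datasets under H0 and
# take the fraction of simulated statistics at least as extreme as observed.
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_pvalue(x_obs, n, p0, B=10_000):
    def stat(x):                               # score statistic under H0
        return (x - n * p0) / np.sqrt(n * p0 * (1 - p0))
    t_obs = stat(x_obs)
    t_sim = stat(rng.binomial(n, p0, size=B))  # resample under H0
    return np.mean(np.abs(t_sim) >= np.abs(t_obs))

print(bootstrap_pvalue(x_obs=32, n=40, p0=0.6))
```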

8.
There are few readily implemented goodness-of-fit tests for the Cox proportional hazards model with time-varying covariates. Through simulations, we assess the power of tests by Cox (J R Stat Soc B (Methodol) 34(2):187–220, 1972), Grambsch and Therneau (Biometrika 81(3):515–526, 1994), and Lin et al. (Biometrics 62:803–812, 2006). Results show that power varies greatly with the time to violation of proportional hazards, the magnitude of the change in hazard ratio, and the direction of the change. Because these characteristics are unknown outside of simulation studies, none of the tests examined can be expected to have high power in real applications. While all of these tests are theoretically interesting, they appear to be of limited practical value.

9.
We deal with sampling by variables with two-way protection in the case of a $N(\mu,\sigma^2)$-distributed characteristic with unknown $\sigma$. The LR sampling plan proposed by Lieberman and Resnikoff (JASA 50:457–516, 1955) and the BSK sampling plan proposed by Bruhn-Suhr and Krumbholz (Stat Pap 31:195–207, 1990) are based on the UMVU and the plug-in estimator, respectively. For given $p_1$ (AQL), $p_2$ (RQL) and $\alpha, \beta$ (type I and II errors) we present an algorithm to determine the optimal LR and BSK plans having minimal sample size among all plans satisfying the corresponding two-point condition on the OC. An R (R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org/, 2012) package, 'ExLiebeRes' (Krumbholz and Steuer, ExLiebeRes: calculating exact LR- and BSK-plans, R package version 0.9.9, http://exlieberes.r-forge.r-project.org, 2012), implementing that algorithm is provided to the public.
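An illustration of the two-point OC condition using the simple plug-in acceptance rule "accept iff $(U - \bar{x})/s \ge k$", whose acceptance probability is a noncentral-t tail; this is a crude grid search, not the paper's exact LR/BSK optimization, and the quality levels are assumptions.

```python
# Sketch: OC of a single variables plan with unknown sigma. For fraction
# nonconforming p, P(accept) = P(T >= k*sqrt(n)) with T noncentral t,
# df = n - 1, noncentrality sqrt(n) * z_{1-p}.
import numpy as np
from scipy.stats import nct, norm

def oc(p, n, k):
    return nct.sf(k * np.sqrt(n), n - 1, np.sqrt(n) * norm.ppf(1 - p))

p1, p2, alpha, beta = 0.01, 0.05, 0.05, 0.10
for n in range(5, 200):
    # scan k for a value meeting both points of the two-point condition
    for k in np.linspace(norm.ppf(1 - p2), norm.ppf(1 - p1) + 2, 400):
        if oc(p1, n, k) >= 1 - alpha and oc(p2, n, k) <= beta:
            print(f"n = {n}, k = {k:.3f}")
            break
    else:
        continue
    break
```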

10.
Grubbs’s model (Grubbs, Encycl Stat Sci 3:42–549, 1983) is used for comparing several measuring devices, and it is common to assume that the random terms have a normal (or symmetric) distribution. In this paper, we discuss the extension of this model to the class of scale mixtures of skew-normal distributions. Our results provide a useful generalization of the symmetric Grubbs’s model (Osorio et al., Comput Stat Data Anal 53:1249–1263, 2009) and the asymmetric skew-normal model (Montenegro et al., Stat Pap 51:701–715, 2010). We discuss the EM algorithm for parameter estimation and the local influence method (Cook, J R Stat Soc Ser B 48:133–169, 1986) for assessing the robustness of the parameter estimates under some usual perturbation schemes. The results and methods developed in this paper are illustrated with a numerical example.

11.
The unique copula of a continuous random pair \((X,Y)\) is said to be radially symmetric if and only if it is also the copula of the pair \((-X,-Y)\). This paper revisits the recently considered issue of testing for radial symmetry. To this end, three rank-based statistics are proposed that are asymptotically equivalent to, but simpler to compute than, those of Bouzebda and Cherfi (J Stat Plan Inference 142:1262–1271, 2012). Their limiting null distribution and its approximation using the multiplier bootstrap are discussed. The finite-sample properties of the resulting tests are assessed via simulations. The asymptotic distribution of one of the test statistics is also computed under an arbitrary alternative, thereby correcting an error in the recent work of Dehgani et al. (Stat Pap 54:271–286, 2013).
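A naive Cramér–von-Mises-type measure of radial asymmetry as an illustration: the empirical copula C_n(u, v) is compared with its survival counterpart u + v - 1 + C_n(1-u, 1-v) on a grid. The paper's rank statistics and multiplier-bootstrap p-values are not reproduced; the Clayton example is chosen because it is radially asymmetric.

```python
# Sketch: grid-based discrepancy between the empirical copula and its
# survival copula; near zero under radial symmetry.
import numpy as np

def empirical_copula(x, y, u, v):
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1        # ranks 1..n
    ry = np.argsort(np.argsort(y)) + 1
    return np.mean((rx[None, :] <= u[:, None] * n) &
                   (ry[None, :] <= v[:, None] * n), axis=1)

def radial_asymmetry(x, y, m=20):
    g = np.linspace(0.05, 0.95, m)
    uu, vv = np.meshgrid(g, g)
    u, v = uu.ravel(), vv.ravel()
    c = empirical_copula(x, y, u, v)
    c_surv = u + v - 1 + empirical_copula(x, y, 1 - u, 1 - v)
    return np.mean((c - c_surv) ** 2)

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)            # Gaussian: radially symmetric
print(radial_asymmetry(x, y))
th = 2.0                                       # Clayton copula: asymmetric
V = rng.gamma(1 / th, size=500)
u1 = (1 + rng.exponential(size=500) / V) ** (-1 / th)
u2 = (1 + rng.exponential(size=500) / V) ** (-1 / th)
print(radial_asymmetry(u1, u2))               # noticeably larger
```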

12.
Denecke and Müller (Comput Stat Data Anal 55:2724–2738, 2011) presented an estimator of the correlation coefficient based on likelihood depth for the Gaussian copula, and Denecke and Müller (J Stat Plan Inference 142:2501–2517, 2012) proved a theorem on the consistency of general estimators based on data depth using uniform convergence of the depth measure. In this article, the uniform convergence of the depth measure for correlation is shown, so that consistency of the depth-based correlation estimator can be concluded. The uniform convergence is established with the help of the extension of the Glivenko–Cantelli lemma by Vapnik–Chervonenkis classes.

13.
The overall Type I error rate computed in the traditional way may be inflated if many hypotheses are compared simultaneously. The family-wise error rate (FWER) and the false discovery rate (FDR) are two commonly used error rates for measuring Type I error in the multiple-hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and can control the desired FWER/FDR under certain scenarios. Nevertheless, these procedures become too conservative when only some of the hypotheses are null. Benjamini and Hochberg (J Educ Behav Stat 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that uses information about the number of true null hypotheses ($m_0$) to overcome this problem. Since $m_0$ is unknown, estimators of $m_0$ are needed. Benjamini and Hochberg (J Educ Behav Stat 25:60–83, 2000) suggested a graphical approach to constructing an estimator of $m_0$, which is shown to overestimate $m_0$ (see Hwang in J Stat Comput Simul 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of $m_0$. Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of the resulting adaptive procedures is evaluated under various simulation settings.
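A minimal sketch of an adaptive BH procedure: estimate $m_0$ and run Benjamini–Hochberg at level $q \cdot m / \hat{m}_0$. Storey's tail-proportion estimator stands in for the graphical estimators discussed in the paper, which are not reproduced here; the simulated p-value mixture is an assumption.

```python
# Sketch: adaptive Benjamini-Hochberg with an estimated number of true nulls.
import numpy as np

def storey_m0(p, lam=0.5):
    # p-values above lam come mostly from true nulls, uniform on (lam, 1)
    return min(len(p), np.sum(p > lam) / (1 - lam))

def adaptive_bh(p, q=0.05):
    p = np.asarray(p)
    m = len(p)
    q_adj = min(1.0, q * m / max(storey_m0(p), 1.0))
    order = np.argsort(p)
    thresh = q_adj * np.arange(1, m + 1) / m       # BH step-up thresholds
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(5)
p = np.concatenate([rng.uniform(size=80),           # true nulls
                    rng.beta(0.2, 5.0, size=20)])   # non-nulls
print(adaptive_bh(p).sum(), "rejections")
```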

14.
Online (also ‘real-time’ or ‘sequential’) signal extraction from noisy and outlier-contaminated data streams is a basic but challenging goal. Fitting a robust Repeated Median (Siegel in Biometrika 69:242–244, 1982) regression line in a moving time window has turned out to be a promising approach (Davies et al. in J Stat Plan Inference 122:65–78, 2004; Gather et al. in Comput Stat 21:33–51, 2006; Schettlinger et al. in Biomed Eng 51:49–56, 2006). The level of the regression line at the rightmost window position, which corresponds to the current time point in an online application, is then used for signal extraction. However, the choice of the window width has a large impact on the extracted signal, and it is impossible to predetermine an optimal fixed window width for data streams which exhibit signal changes such as level shifts and sudden trend changes. We therefore propose a robust test procedure for the online detection of such signal changes. An algorithm including the test allows for online window width adaptation, meaning that the window width is chosen with respect to the current data situation at each time point. Comparison studies show that our new procedure outperforms an existing Repeated Median filter with automatic window width selection (Schettlinger et al. in Int J Adapt Control Signal Process 24:346–362, 2010).
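A minimal fixed-width sketch of the underlying Repeated Median filter; the paper's test-based window-width adaptation is not shown, and the window width and test signal are assumptions.

```python
# Sketch: Repeated Median regression in a moving window. RM slope is
# median_i median_{j != i} (y_j - y_i)/(j - i); the extracted signal is the
# fitted level at the rightmost (current) time point of each window.
import numpy as np

def repeated_median_level(y):
    n = len(y)
    t = np.arange(n)
    inner = np.empty(n)
    for i in range(n):
        j = np.delete(t, i)
        inner[i] = np.median((y[j] - y[i]) / (j - i))
    slope = np.median(inner)
    intercept = np.median(y - slope * t)
    return intercept + slope * (n - 1)      # level at the current time

def rm_filter(y, width=21):
    out = np.full(len(y), np.nan)
    for t in range(width - 1, len(y)):
        out[t] = repeated_median_level(y[t - width + 1:t + 1])
    return out

rng = np.random.default_rng(6)
signal = np.concatenate([np.zeros(100), np.linspace(0, 5, 100)])
y = signal + rng.normal(0, 0.3, 200)
y[rng.choice(200, 10, replace=False)] += 6.0    # spiky outliers
print(np.nanmax(np.abs(rm_filter(y) - signal)))  # worst-case tracking error
```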

15.
Doubly truncated survival data arise when event times are observed only if they fall within subject-specific time intervals. Existing iterative estimation procedures for doubly truncated data are computationally intensive (Turnbull 38:290–295, 1976; Efron and Petrosian 94:824–825, 1999; Shen 62:835–853, 2010a). These procedures assume that the event time is independent of the truncation times in the sample space that conforms to their requisite ordering; this type of independence is referred to as quasi-independence. In this paper we identify and consider two special cases of quasi-independence: complete quasi-independence and complete truncation dependence. For the case of complete quasi-independence, we derive the nonparametric maximum likelihood estimator in closed form. For the case of complete truncation dependence, we derive a closed-form nonparametric estimator that requires some external information, and a semi-parametric maximum likelihood estimator that achieves improved efficiency relative to the standard nonparametric maximum likelihood estimator in the absence of external information. We demonstrate the consistency and potentially improved efficiency of the estimators in simulation studies, and illustrate their use in application to studies of AIDS incubation and Parkinson's disease age of onset.

16.
Let $\{X_n, n \ge 1\}$ be a sequence of pairwise negatively quadrant dependent (NQD) random variables. In this study, we prove almost sure limit theorems for weighted sums of such random variables. From these results, we obtain a version of the Glivenko–Cantelli lemma for pairwise NQD random variables under some weak conditions. Moreover, a simulation study is carried out to compare the convergence rates with those of Azarnoosh (Pak J Statist 19(1):15–23, 2003) and Li et al. (Bull Inst Math 1:281–305, 2006).

17.
This paper considers testing for cross-sectional dependence in a panel factor model. Based on the model considered by Bai (Econometrica 71:135–171, 2003), we investigate the use of a simple $F$ test for cross-sectional dependence when the factor may be known or unknown. The limiting distributions of these $F$ test statistics are derived when the cross-sectional dimension and the time-series dimension are both large. The main contribution of this paper is to propose a wild bootstrap $F$ test which is shown to be consistent and which performs well in Monte Carlo simulations, especially when the factor is unknown.
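A minimal sketch of the wild bootstrap mechanics for an $F$ statistic in a plain linear model (H0: the last coefficient is zero), using Rademacher multipliers on the restricted residuals; the panel-factor setting of the paper, where the factor may have to be estimated, is not shown.

```python
# Sketch: wild bootstrap p-value for a nested-model F test.
import numpy as np

rng = np.random.default_rng(7)

def f_stat(y, X, X0):
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r
    r0, r1 = rss(X0), rss(X)
    df1, df2 = X.shape[1] - X0.shape[1], len(y) - X.shape[1]
    return (r0 - r1) / df1 / (r1 / df2)

def wild_bootstrap_pvalue(y, X, X0, B=999):
    f_obs = f_stat(y, X, X0)
    beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    fit0, res0 = X0 @ beta0, y - X0 @ beta0
    f_sim = np.empty(B)
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=len(y))    # Rademacher weights
        f_sim[b] = f_stat(fit0 + res0 * v, X, X0)   # resample under H0
    return (1 + np.sum(f_sim >= f_obs)) / (B + 1)

n = 100
X0 = np.column_stack([np.ones(n), rng.normal(size=n)])
X = np.column_stack([X0, rng.normal(size=n)])
y = X0 @ np.array([1.0, 2.0]) + rng.standard_t(3, size=n)  # heavy-tailed noise
print(wild_bootstrap_pvalue(y, X, X0))
```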

18.
Azzalini (Scand J Stat 12:171–178, 1985) provided a methodology for introducing skewness into a normal distribution. Using the same method, the skew logistic distribution is easily obtained by introducing skewness into the logistic distribution. For the skew logistic distribution, the likelihood equations do not provide explicit solutions for the location and scale parameters. We present a simple method of deriving explicit estimators by appropriately approximating the likelihood equations. We examine numerically the bias and variance of these estimators and show that they are as efficient as the maximum likelihood estimators (MLEs). The coverage probabilities of the pivotal quantities (for the location and scale parameters) based on asymptotic normality are shown to be unsatisfactory, especially when the effective sample size is small. To improve the coverage probabilities and to construct confidence intervals, we suggest the use of simulated percentage points. Finally, we present a numerical example to illustrate the methods of inference developed here.
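A sketch of the numerical MLE benchmark the explicit estimators are compared against, for the Azzalini-type skew logistic density $f(x) = (2/\sigma)\,g(z)\,G(\lambda z)$, $z = (x-\mu)/\sigma$, with $g, G$ the standard logistic pdf/cdf; the paper's explicit approximate estimators (its main point) are not reproduced, and the skewness $\lambda$ is assumed known.

```python
# Sketch: numerical ML estimation of location and scale for the skew
# logistic distribution, with a selection sampler for test data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import logistic

def neg_loglik(params, x, lam):
    m, log_s = params
    s = np.exp(log_s)                      # keep the scale positive
    z = (x - m) / s
    return -np.sum(np.log(2) + logistic.logpdf(z)
                   + logistic.logcdf(lam * z) - log_s)

def skew_logistic_sample(n, m, s, lam, rng):
    # Azzalini-type selection: keep Z ~ logistic with probability G(lam*Z),
    # which yields density 2 g(z) G(lam z) among the accepted draws
    z = logistic.rvs(size=4 * n, random_state=rng)
    keep = rng.uniform(size=4 * n) < logistic.cdf(lam * z)
    return m + s * z[keep][:n]

rng = np.random.default_rng(8)
x = skew_logistic_sample(500, m=1.0, s=2.0, lam=3.0, rng=rng)
res = minimize(neg_loglik, x0=[0.0, 0.0], args=(x, 3.0), method="Nelder-Mead")
print("mu_hat, sigma_hat:", res.x[0], np.exp(res.x[1]))
```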

19.
This article considers the problem of estimating the population mean on the current (second) occasion using multi-auxiliary information in successive sampling over two occasions. A general class of estimators is proposed for estimating the population mean on the current occasion, and expressions for the bias and mean square error of these estimators are obtained up to the first degree of approximation. The minimum variance bound estimator in the proposed class is discussed, and many popular estimators are shown to belong to this class. The optimum replacement policy is also discussed. Finally, the superiority of the proposed class of estimators over the multivariate version of the chain-type ratio estimator envisaged by Singh (Stat Transition 7:21–26, 2005) is established empirically.

20.
This article develops a new profile empirical likelihood inference for a class of frequently used single-index-coefficient regression models (SICRM), proposed by Xia and Li (J Am Stat Assoc 94:1275–1285, 1999a). Applying the empirical likelihood method (Owen in Biometrika 75:237–249, 1988), a new estimated empirical log-likelihood ratio statistic for the index parameter of the SICRM is proposed. To increase the accuracy of the confidence region, a new profile empirical likelihood for each component of the relevant parameter is obtained by using maximum empirical likelihood estimators (MELE) based on a new and simple estimating equation for the parameters in the SICRM. Hence, the empirical likelihood confidence interval for each component is investigated. Furthermore, corrected empirical likelihoods for functional components are also considered. The resulting statistics are shown to be asymptotically standard chi-squared distributed. Simulation studies are undertaken to assess the finite-sample performance of our method. A study of real data is also reported.
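For context, a minimal sketch of Owen's empirical likelihood for a scalar mean, the building block such profile EL procedures apply to their estimating equations: solve the Lagrange-multiplier equation, then compare $-2\log\mathrm{ELR}$ with $\chi^2_1$. The exponential test data are an assumption.

```python
# Sketch: empirical likelihood ratio test for a mean via the Lagrange
# multiplier equation sum_i (x_i - mu) / (1 + lam*(x_i - mu)) = 0.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull
    # lam must keep all weights positive: 1 + lam*z_i > 0 for every i
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(9)
x = rng.exponential(size=80)               # true mean is 1
stat = neg2_log_elr(x, mu=1.0)
print(f"-2 log ELR = {stat:.3f}, p = {chi2.sf(stat, df=1):.3f}")
```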
