Similar Documents
 20 similar documents found (search time: 15 ms)
1.
In this work we prove that, for an exchangeable multivariate normal distribution, the joint distribution of a linear combination of order statistics and a linear combination of their concomitants, together with an auxiliary variable, is skew normal. We also investigate some special cases, thus extending the results of Olkin and Viana (J Am Stat Assoc 90:1373–1379, 1995), Loperfido (Test 17:370–380, 2008a) and Sheikhi and Jamalizadeh (Stat Pap 52:885–892, 2011).
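For readers unfamiliar with the skew-normal family appearing in this result, here is a minimal sketch of the standard univariate density (the paper's multivariate, exchangeable setting is not reproduced; the shape parameter value 3 is purely illustrative):

```python
import math

def skew_normal_pdf(x, alpha=0.0):
    """Density of the standard skew-normal law: 2*phi(x)*Phi(alpha*x);
    alpha = 0 recovers the standard normal density."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * Phi

# crude Riemann-sum check that the density integrates to one for alpha = 3
h = 0.001
total = h * sum(skew_normal_pdf(i * h, alpha=3.0) for i in range(-8000, 8001))
```

Setting alpha = 0 recovers the standard normal density, which is the sense in which the family extends the normal law.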

2.
In this article, one- and two-sample Bayesian prediction intervals based on progressively Type-II censored data are derived. To illustrate the developed results, the exponential, Pareto, Weibull and Burr Type-XII models are used as examples. Some previous results in the literature, such as Dunsmore (Technometrics 16:455–460, 1974), Nigm and Hamdy (Commun Stat Theory Methods 16:1761–1772, 1987), Nigm (Commun Stat Theory Methods 18:897–911, 1989), Al-Hussaini and Jaheen (Commun Stat Theory Methods 24:1829–1842, 1995), Al-Hussaini (J Stat Plan Inference 79:79–91, 1999), Ali Mousa (J Stat Comput Simul 71:163–181, 2001) and Ali Mousa and Jaheen (Stat Pap 43:587–593, 2002), can be obtained as special cases of our results. Finally, some numerical computations are presented to illustrate the proposed inferential procedures.

3.
Grubbs’s model (Grubbs, Encycl Stat Sci 3:542–549, 1983) is used for comparing several measuring devices, and it is common to assume that the random terms have a normal (or symmetric) distribution. In this paper, we extend this model to the class of scale mixtures of skew-normal distributions. Our results provide a useful generalization of the symmetric Grubbs’s model (Osorio et al., Comput Stat Data Anal 53:1249–1263, 2009) and the asymmetric skew-normal model (Montenegro et al., Stat Pap 51:701–715, 2010). We discuss the EM algorithm for parameter estimation and the local influence method (Cook, J R Stat Soc Ser B 48:133–169, 1986) for assessing the robustness of the parameter estimates under some usual perturbation schemes. The results and methods developed in this paper are illustrated with a numerical example.

4.
The unique copula of a continuous random pair \((X,Y)\) is said to be radially symmetric if and only if it is also the copula of the pair \((-X,-Y)\). This paper revisits the recently considered issue of testing for radial symmetry. Three rank-based statistics are proposed to this end, which are asymptotically equivalent to, but simpler to compute than, those of Bouzebda and Cherfi (J Stat Plan Inference 142:1262–1271, 2012). Their limiting null distribution and its approximation using the multiplier bootstrap are discussed. The finite-sample properties of the resulting tests are assessed via simulations. The asymptotic distribution of one of the test statistics is also computed under an arbitrary alternative, thereby correcting an error in the recent work of Dehgani et al. (Stat Pap 54:271–286, 2013).
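A crude numerical illustration of the radial-symmetry concept: a Kolmogorov-type distance between the empirical copula and its survival counterpart on a coarse grid. This is only a screening statistic for intuition, not one of the rank-based statistics proposed in the paper:

```python
import random

def ranks(v):
    """Ranks 1..n of a tie-free sample."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for k, i in enumerate(order):
        r[i] = k + 1
    return r

def emp_copula(u, v, pts):
    # proportion of pseudo-observations with both coordinates <= (u, v)
    return sum(1 for a, b in pts if a <= u and b <= v) / len(pts)

def radial_asymmetry_stat(xs, ys):
    """Max grid distance between empirical copula and empirical survival
    copula; small values are consistent with radial symmetry."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    pts = [(rx[i] / (n + 1), ry[i] / (n + 1)) for i in range(n)]
    refl = [(1 - a, 1 - b) for a, b in pts]  # survival copula of the sample
    grid = [(i / 5, j / 5) for i in range(1, 5) for j in range(1, 5)]
    return max(abs(emp_copula(u, v, pts) - emp_copula(u, v, refl))
               for u, v in grid)

rng = random.Random(0)
xs = [rng.random() for _ in range(200)]
ys = [rng.random() for _ in range(200)]
stat = radial_asymmetry_stat(xs, ys)  # small: independence is radially symmetric
```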

5.
In this paper, we derive elementary M- and optimally robust asymptotic linear (AL) estimates for the parameters of an Ornstein–Uhlenbeck process. Simulation and estimation of the process are already well studied, see Iacus (Simulation and inference for stochastic differential equations. Springer, New York, 2008). However, in order to protect against outliers and deviations from the ideal law, the formulation of suitable neighborhood models and a corresponding robustification of the estimators are necessary. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator given the past. In a first step, we extend the method of M-estimation from Huber (Robust statistics. Wiley, New York, 1981). In a second step, we apply the general theory based on local asymptotic normality, AL estimates, and shrinking neighborhoods due to Kohl et al. (Stat Methods Appl 19:333–354, 2010), Rieder (Robust asymptotic statistics. Springer, New York, 1994), Rieder (2003), and Staab (1984). This leads to optimally robust ICs whose graphs exhibit surprising behavior. Finally, we discuss estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs. To this end, we carry out in our context the one-step construction dating back to LeCam (Asymptotic methods in statistical decision theory. Springer, New York, 1969) and compare it by means of simulations with the MLE and the M-estimator.
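For context, the Ornstein–Uhlenbeck process itself can be simulated exactly via its Gaussian transition law; the parameter values below are illustrative, and the robust estimation machinery of the paper is not sketched here:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Exact discretization of dX = theta*(mu - X) dt + sigma dW:
    X_{t+dt} = mu + e^{-theta*dt} (X_t - mu) + N(0, sigma^2 (1-e^{-2 theta dt})/(2 theta))."""
    a = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))
    x = [x0]
    for _ in range(n):
        x.append(mu + a * (x[-1] - mu) + sd * rng.gauss(0.0, 1.0))
    return x

rng = random.Random(7)
path = simulate_ou(theta=2.0, mu=1.0, sigma=0.5, x0=1.0, dt=0.01, n=50000, rng=rng)
m = sum(path) / len(path)                           # long-run mean -> mu
v = sum((xi - m) ** 2 for xi in path) / len(path)   # -> sigma^2/(2*theta) = 0.0625
```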

6.
Four testing procedures are considered for testing the response rate of one-sample correlated binary data with a cluster size of one or two, which often occurs in otolaryngologic and ophthalmologic studies. Although an asymptotic approach is often used for statistical inference, it is criticized for unsatisfactory type I error control in small sample settings. An alternative is an unconditional approach. The first unconditional approach is based on estimation, also known as the parametric bootstrap (Lee and Young in Stat Probab Lett 71(2):143–153, 2005). The other two unconditional approaches considered in this article are an approach based on maximization (Basu in J Am Stat Assoc 72(358):355–366, 1977) and an approach based on estimation and maximization (Lloyd in Biometrics 64(3):716–723, 2008a). These two unconditional approaches guarantee the test size and are generally more reliable than the asymptotic approach. We compare these four approaches in conjunction with a test proposed by Lee and Dubin (Stat Med 13(12):1241–1252, 1994) and a likelihood ratio test derived in this article, with regard to type I error rate and power for small to medium sample sizes. An example from an otolaryngologic study illustrates the various testing procedures. The unconditional approach based on estimation and maximization using the test in Lee and Dubin (Stat Med 13(12):1241–1252, 1994) is preferable because of its power advantage.
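A minimal sketch of the estimation-based (parametric bootstrap) idea for a single binomial proportion; the clustered-data tests of the article are more involved, and all values below are illustrative:

```python
import random

def bootstrap_pvalue(x, n, p0, B, rng):
    """Parametric-bootstrap p-value for H0: p = p0 versus p > p0 in one
    binomial sample, using the observed count itself as the test statistic."""
    hits = 0
    for _ in range(B):
        sim = sum(1 for _ in range(n) if rng.random() < p0)
        if sim >= x:
            hits += 1
    return hits / B

rng = random.Random(1)
pv = bootstrap_pvalue(x=14, n=20, p0=0.5, B=5000, rng=rng)
```

For these values the exact binomial tail probability is about 0.058, and the bootstrap estimate lands close to it.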

7.
Widely used tools within the area of Statistical Process Control are control charts of various designs. Control charts are applied to keep process parameters (e.g., mean \(\mu\), standard deviation \(\sigma\) or percent defective \(p\)) under surveillance so that a certain level of process quality can be assured. Well-established schemes such as exponentially weighted moving average (EWMA) charts, cumulative sum charts or the classical Shewhart charts are frequently treated in theory and practice. Since Shewhart introduced a \(p\) chart (for attribute data), the question of monitoring the percent defective itself has rarely been analyzed, although several extensions using more advanced schemes (e.g., EWMA) have been made to monitor the effects of parameter deteriorations. Here, performance comparisons are presented between a newly designed EWMA \(p\) control chart for application to continuous types of data, \(p=f(\mu ,\sigma )\), and popular EWMA designs (\(\bar{X}\), \(\bar{X}\)-\(S^2\)). Isolines of the average run length are introduced for each scheme, taking changes in both mean and standard deviation into account. Adequate extensions of the classical EWMA designs are used to make these specific comparisons feasible. The results presented are computed using numerical methods.
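For reference, the basic EWMA recursion with time-varying control limits can be sketched as follows (standard textbook form, not the proposed EWMA \(p\) chart; the values of `lam` and `L` are illustrative):

```python
import math

def ewma_chart(data, mu0, sigma0, lam=0.2, L=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0, with
    time-varying limits mu0 +/- L*sigma0*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    z, out = mu0, []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        half = L * sigma0 * math.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
        out.append((z, mu0 - half, mu0 + half, z < mu0 - half or z > mu0 + half))
    return out

# in-control observations stay inside; a sustained 3-sigma shift signals quickly
res = ewma_chart([0.0, 0.1, -0.2, 0.05, 3.0, 3.0, 3.0, 3.0], mu0=0.0, sigma0=1.0)
```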

8.
In this paper, we discuss the extension of some diagnostic procedures to multivariate measurement error models with scale mixtures of skew-normal distributions (Lachos et al., Statistics 44:541–556, 2010c). This class provides a useful generalization of normal (and skew-normal) measurement error models, since the random term distributions cover symmetric, asymmetric and heavy-tailed distributions, such as the skew-t, skew-slash and skew-contaminated normal, among others. Inspired by the EM algorithm proposed by Lachos et al. (Statistics 44:541–556, 2010c), we develop a local influence analysis for measurement error models following Zhu and Lee's (J R Stat Soc B 63:111–126, 2001) approach, because the observed-data log-likelihood function associated with the proposed model is somewhat complex and Cook's well-known approach can be very difficult to apply to obtain local influence measures. Some useful perturbation schemes are also discussed. In addition, a score test for assessing the homogeneity of the skewness parameter vector is presented. Finally, the methodology is exemplified through a real data set, illustrating its usefulness.

9.
The exponential COM-Poisson distribution
The Conway–Maxwell–Poisson (COM-Poisson) distribution, an extension of the Poisson distribution, is a popular model for analyzing count data. We introduce a new three-parameter distribution, the so-called exponential COM-Poisson (ECOMP) distribution, which contains as sub-models the exponential-geometric and exponential-Poisson distributions proposed by Adamidis and Loukas (Stat Probab Lett 39:35–42, 1998) and Kuş (Comput Stat Data Anal 51:4497–4509, 2007), respectively. The new density function can be expressed as a mixture of exponential density functions. Expansions for the moments, the moment generating function and some statistical measures are provided. The density function of the order statistics can also be expressed as a mixture of exponential densities, and we derive two formulae for the moments of order statistics. The elements of the observed information matrix are provided. Two applications illustrate the usefulness of the new distribution for analyzing positive data.
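The COM-Poisson building block can be sketched as follows. The normalizing constant is an infinite series, so the sketch truncates it at `terms` summands (an assumption that is adequate for the parameter values shown) and works on the log scale to avoid overflow:

```python
import math

def com_poisson_pmf(y, lam, nu, terms=150):
    """COM-Poisson pmf P(Y=y) = lam^y / (y!)^nu / Z(lam, nu); nu = 1 recovers
    the Poisson law.  Uses log weights and log-sum-exp for stability."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(terms)]
    mx = max(logw)
    Z = sum(math.exp(w - mx) for w in logw)  # normalizer up to the factor e^mx
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - mx) / Z
```

With nu = 1 and lam = 2 the pmf at zero equals the Poisson value e^(-2).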

10.
In this article we propose an efficient generalized class of estimators for the finite population variance of the study variable in simple random sampling, using information on an auxiliary variable. Asymptotic expressions for the bias and mean square error of the proposed class are obtained, and the asymptotically optimum estimator in the class is identified along with its mean square error formula. We show that the proposed class of estimators is more efficient than the usual unbiased and difference estimators as well as those of Das and Tripathi (Sankhya C 40:139–148, 1978), Isaki (J Am Stat Assoc 78:117–123, 1983), Singh et al. (Curr Sci 57:1331–1334, 1988), Upadhyaya and Singh (Vikram Math J 19:14–17, 1999b), Kadilar and Cingi (Appl Math Comput 173(2):1047–1059, 2006a) and other estimators/classes of estimators. An empirical study is given in support of the theoretical results.
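A minimal sketch of one classical member of the comparison, an Isaki-type ratio estimator of the population variance that exploits a known auxiliary variance (this is not the proposed generalized class itself):

```python
import statistics

def ratio_variance_estimator(y_sample, x_sample, Sx2):
    """Ratio-type estimator of the population variance of y:
    s_y^2 * (S_x^2 / s_x^2), where S_x^2 is the known population variance of
    the auxiliary variable x and lowercase quantities are sample variances."""
    sy2 = statistics.variance(y_sample)
    sx2 = statistics.variance(x_sample)
    return sy2 * Sx2 / sx2

# when the sample auxiliary variance happens to equal S_x^2, the adjustment
# factor is one and the estimator reduces to the usual sample variance of y
est = ratio_variance_estimator([1, 2, 3, 4], [2, 4, 6, 8],
                               statistics.variance([2, 4, 6, 8]))
```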

11.
Skew-symmetric distributions of various types have attracted considerable attention in the literature. In this article, we introduce another, more general class of skew distributions, specifically related to the Laplace distribution. This new class contains some previously known skew distributions. We investigate various characteristics of members of this class, such as their moments (generalizing a result of Umbach, Stat Probab Lett 76:507–512, 2006), limiting behavior, moment generating function and unimodality, and reveal their natural occurrence as the distributions of certain order statistics. In addition, we generalize a result of Aryal and Rao (Nonlinear Anal 63:639–646, 2005) in connection with the truncated skew-Laplace distribution and study certain stochastic orderings. Some illustrative examples are also provided.

12.
Online (also ‘real-time’ or ‘sequential’) signal extraction from noisy and outlier-contaminated data streams is a basic but challenging goal. Fitting a robust Repeated Median (Siegel in Biometrika 69:242–244, 1982) regression line in a moving time window has turned out to be a promising approach (Davies et al. in J Stat Plan Inference 122:65–78, 2004; Gather et al. in Comput Stat 21:33–51, 2006; Schettlinger et al. in Biomed Eng 51:49–56, 2006). The level of the regression line at the rightmost window position, which equates to the current time point in an online application, is then used for signal extraction. However, the choice of the window width has a large impact on the signal extraction, and it is impossible to predetermine an optimal fixed window width for data streams which exhibit signal changes such as level shifts and sudden trend changes. We therefore propose a robust test procedure for the online detection of such signal changes. An algorithm including the test allows for online window width adaptation, meaning that the window width is chosen with respect to the current data situation at each time point. Comparison studies show that our new procedure outperforms an existing Repeated Median filter with automatic window width selection (Schettlinger et al. in Int J Adapt Control Signal Process 24:346–362, 2010).
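Siegel's Repeated Median fit, evaluated at the rightmost window position as described above, can be sketched as follows (a direct O(n^2) implementation on one window; the data are illustrative):

```python
import statistics

def repeated_median_level(times, values):
    """Repeated Median regression (Siegel 1982) on one window:
    slope = med_i med_{j != i} (y_j - y_i)/(t_j - t_i),
    level = med_i (y_i - slope * t_i) + slope * t_last,
    i.e. the fitted line evaluated at the rightmost time point."""
    n = len(times)
    slope = statistics.median(
        statistics.median((values[j] - values[i]) / (times[j] - times[i])
                          for j in range(n) if j != i)
        for i in range(n))
    intercept = statistics.median(values[i] - slope * times[i] for i in range(n))
    return intercept + slope * times[-1]

# a line y = 1 + 2t with one gross outlier: the fit ignores the outlier
level = repeated_median_level([0, 1, 2, 3, 4], [1, 3, 5, 100, 9])
```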

13.
The control chart is an important statistical technique used to monitor the quality of a process. Shewhart control charts detect larger disturbances in the process parameters, whereas cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) charts are meant for smaller and moderate changes. In this study, we enhance mixed EWMA–CUSUM control charts with varying fast initial response (FIR) features and with a runs rule of two out of three successive points falling above the upper control limit, and we investigate their run-length properties. The proposed control charting schemes are compared with existing counterparts, including the classical CUSUM, classical EWMA, FIR CUSUM, FIR EWMA, mixed EWMA–CUSUM, 2/3 modified EWMA and 2/3 CUSUM schemes. A case study using a real data set is presented for practical considerations.
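As background, the one-sided CUSUM recursion with a fast-initial-response head start can be sketched as follows (classical FIR CUSUM only, not the proposed mixed EWMA–CUSUM scheme; all constants are illustrative):

```python
def cusum(data, mu0, k, h, headstart=0.0):
    """One-sided upper CUSUM C_t = max(0, C_{t-1} + (x_t - mu0 - k)) with
    decision limit h; a nonzero `headstart` C_0 gives the FIR feature."""
    c, alarms = headstart, []
    for x in data:
        c = max(0.0, c + (x - mu0 - k))
        alarms.append(c > h)
    return alarms

# a sustained shift of size 2: the FIR chart signals one observation earlier
fir = cusum([2.0] * 5, mu0=0.0, k=0.5, h=4.0, headstart=2.0)
plain = cusum([2.0] * 5, mu0=0.0, k=0.5, h=4.0)
```

With the head start, the sustained shift is flagged one observation earlier than with a zero start, which is exactly the FIR effect.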

14.
There are few readily implemented goodness-of-fit tests for the Cox proportional hazards model with time-varying covariates. Through simulations, we assess the power of the tests of Cox (J R Stat Soc B 34(2):187–220, 1972), Grambsch and Therneau (Biometrika 81(3):515–526, 1994) and Lin et al. (Biometrics 62:803–812, 2006). Results show that power varies greatly depending on the time at which proportional hazards is violated, the magnitude of the change in hazard ratio and the direction of the change. Because these characteristics are unknown outside of simulation studies, none of the tests examined can be expected to have high power in real applications. While all of these tests are theoretically interesting, they appear to be of limited practical value.

15.
Approximate Bayesian Computation (ABC) methods, or likelihood-free methods, have emerged over the past fifteen years as useful tools for performing Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods were developed by Marjoram et al. (2003) and Bortot et al. (2007), for instance, and sequential methods have been proposed by, among others, Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012). More recently, sequential ABC methods have appeared as an alternative to ABC-PMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper, a new algorithm combining population-based MCMC methods with ABC requirements is proposed, drawing an analogy with the parallel tempering algorithm (Geyer 1991). Its performance is compared with existing ABC algorithms on simulated data and on a real example.
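The simplest member of the ABC family, plain rejection sampling, can be sketched as follows (the population-based MCMC algorithm of the paper is not reproduced; the normal-mean toy model and all constants are illustrative assumptions):

```python
import random
import statistics

def abc_rejection(target, prior_draw, simulate, eps, n_keep, rng):
    """Plain ABC rejection: draw theta from the prior, simulate a summary
    statistic, and keep theta whenever it lands within eps of the target."""
    kept = []
    while len(kept) < n_keep:
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - target) < eps:
            kept.append(theta)
    return kept

rng = random.Random(3)
post = abc_rejection(
    target=2.0,                                  # hypothetical observed sample mean
    prior_draw=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda th, r: sum(r.gauss(th, 1.0) for _ in range(50)) / 50.0,
    eps=0.2, n_keep=300, rng=rng)
post_mean = statistics.mean(post)                # concentrates near the target
```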

16.
I review some key ideas and models in survival analysis, with emphasis on modeling the effects of covariates on survival times. I focus on the proportional hazards model of Cox (J R Stat Soc B 34:187–220, 1972), its extensions and alternatives, including the accelerated life model. I also briefly describe some models for competing risks data, multiple and repeated event-time data, and multivariate survival data.
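The centerpiece of the Cox model is the partial likelihood; a minimal evaluation for a single covariate without tied event times can be sketched as follows (an illustrative helper, not code from the review):

```python
import math

def cox_partial_loglik(beta, times, events, z):
    """Cox partial log-likelihood for one covariate z, assuming no ties:
    sum over events of [beta*z_i - log(sum_{j in risk set} exp(beta*z_j))],
    where the risk set at t_i contains all subjects with time >= t_i."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if events[i]:
            risk = order[k:]  # everyone still at risk at time t_i
            ll += beta * z[i] - math.log(sum(math.exp(beta * z[j]) for j in risk))
    return ll
```

At beta = 0 the log-likelihood reduces to minus the sum of the logs of the risk-set sizes at the event times, which gives an easy sanity check.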

17.
Denecke and Müller (CSDA 55:2724–2738, 2011) presented an estimator for the correlation coefficient based on likelihood depth for the Gaussian copula, and Denecke and Müller (J Stat Plan Inference 142:2501–2517, 2012) proved a theorem about the consistency of general estimators based on data depth using uniform convergence of the depth measure. In this article, the uniform convergence of the depth measure for correlation is shown, so that consistency of the depth-based correlation estimator can be concluded. The uniform convergence is established with the help of the extension of the Glivenko–Cantelli lemma by Vapnik–Červonenkis classes.
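As a point of comparison with the depth-based estimator, the Gaussian-copula correlation can also be recovered from Kendall's tau via rho = sin(pi*tau/2); a sketch for tie-free data (a classical moment-type alternative, not the likelihood-depth estimator of the papers cited):

```python
import math

def kendall_tau(x, y):
    """Sample Kendall tau for tie-free data: (concordant - discordant) pairs
    divided by the total number of pairs."""
    n = len(x)
    conc = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
               for i in range(n) for j in range(i + 1, n))
    return conc / (n * (n - 1) / 2)

def gaussian_copula_rho(x, y):
    """Invert the Gaussian-copula relation tau = (2/pi)*arcsin(rho)."""
    return math.sin(math.pi * kendall_tau(x, y) / 2.0)
```

Because tau is rank-based, any strictly monotone transform of a coordinate leaves the estimate unchanged, which is why it targets the copula parameter.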

18.
A new discrete distribution depending on two parameters $\alpha >-1$ and $\sigma >0$ is obtained by discretizing the generalized normal distribution proposed in García et al. (Comput Stat Data Anal 54:2021–2034, 2010), which was derived from the normal distribution using the Marshall and Olkin (Biometrika 84(3):641–652, 1997) scheme. The particular case $\alpha =1$ leads to the discrete half-normal distribution, which differs from the discrete half-normal distribution proposed previously in the statistical literature. This distribution is unimodal, overdispersed (the responses show a variance greater than the sample mean) and has an increasing failure rate. We review its properties and the question of parameter estimation. Expected frequencies were calculated for an overdispersed and an underdispersed (the responses show a mean greater than the variance) example, and the distribution was found to provide a very satisfactory fit.
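The generic discretization scheme P(Y = k) = S(k) − S(k + 1), applied here to the half-normal survival function, illustrates how such discrete analogues arise (a sketch of the general device, not the paper's two-parameter family):

```python
import math

def discretize_survival(survival, kmax):
    """Discretize a continuous nonnegative law via P(Y = k) = S(k) - S(k+1),
    where S is its survival function; returns the pmf on {0, ..., kmax-1}."""
    return [survival(k) - survival(k + 1) for k in range(kmax)]

# survival function of |N(0,1)|: P(|Z| > t) = erfc(t / sqrt(2))
half_normal_surv = lambda t: math.erfc(t / math.sqrt(2.0))
pmf = discretize_survival(half_normal_surv, 50)
```

By construction the pmf telescopes to S(0) − S(kmax), so it sums to essentially one here, and P(Y = 0) equals P(|Z| <= 1), about 0.683.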

19.
The overall Type I error computed in the traditional way may be inflated when many hypotheses are compared simultaneously. The family-wise error rate (FWER) and the false discovery rate (FDR) are among the most commonly used error rates for measuring Type I error in the multiple hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and can control the desired FWER/FDR under certain scenarios. Nevertheless, these procedures become too conservative when only some of the hypotheses are true nulls. Benjamini and Hochberg (J Educ Behav Stat 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that adapts to the number of true null hypotheses (\(m_0\)) to overcome this problem. Since \(m_0\) is unknown, estimators of \(m_0\) are needed. Benjamini and Hochberg (J Educ Behav Stat 25:60–83, 2000) suggested a graphical approach to construct an estimator of \(m_0\), which is known to overestimate \(m_0\) (see Hwang in J Stat Comput Simul 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of \(m_0\). Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of the resulting adaptive procedures is evaluated under various simulation settings.
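An adaptive step-up procedure with a simple plug-in for \(m_0\) can be sketched as follows (the Storey-type plug-in shown is a common textbook choice, not the graphical estimator of Benjamini and Hochberg or the new estimators of this paper):

```python
def bh_adaptive(pvals, q=0.05, lam=0.5):
    """Benjamini-Hochberg step-up run at level q*k/m0_hat, with the simple
    Storey-type plug-in m0_hat = (#{p > lam} + 1)/(1 - lam), capped at m.
    Returns the sorted indices of the rejected hypotheses."""
    m = len(pvals)
    m0_hat = min(m, (sum(1 for p in pvals if p > lam) + 1) / (1.0 - lam))
    order = sorted(range(m), key=lambda i: pvals[i])
    thresh = 0
    for k, i in enumerate(order, start=1):
        if pvals[i] <= q * k / m0_hat:   # step-up: keep the largest such k
            thresh = k
    return sorted(order[:thresh])
```

Plugging in a smaller \(m_0\) estimate raises every threshold, which is how adaptivity recovers power when many hypotheses are non-null.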

20.
This article deals with a new profile empirical-likelihood inference for a class of frequently used single-index-coefficient regression models (SICRM), which were proposed by Xia and Li (J Am Stat Assoc 94:1275–1285, 1999a). Applying the empirical likelihood method (Owen in Biometrika 75:237–249, 1988), a new estimated empirical log-likelihood ratio statistic for the index parameter of the SICRM is proposed. To increase the accuracy of the confidence region, a new profile empirical likelihood for each component of the relevant parameter is obtained by using maximum empirical likelihood estimators (MELE) based on a new and simple estimating equation for the parameters in the SICRM. Hence, the empirical likelihood confidence interval for each component is investigated. Furthermore, corrected empirical likelihoods for functional components are also considered. The resulting statistics are shown to be asymptotically standard chi-squared distributed. Simulation studies are undertaken to assess the finite-sample performance of our method. An analysis of real data is also reported.
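The empirical-likelihood machinery underlying such inference can be illustrated in its simplest form, the log-likelihood ratio for a univariate mean (an Owen-type sketch, not the profile statistic for the SICRM index parameter):

```python
import math

def el_logratio_mean(x, mu, tol=1e-10):
    """Empirical log-likelihood ratio statistic for the mean:
    -2 log R(mu) = 2 * sum_i log(1 + t*(x_i - mu)), where t solves the
    monotone score equation g(t) = sum_i (x_i - mu)/(1 + t*(x_i - mu)) = 0,
    found here by bisection.  Returns +inf if mu is outside the data range."""
    d = [xi - mu for xi in x]
    if not (min(d) < 0.0 < max(d)):
        return float("inf")
    lo = -1.0 / max(d) + 1e-12   # keep every 1 + t*d_i strictly positive
    hi = -1.0 / min(d) - 1e-12
    def g(t):
        return sum(di / (1.0 + t * di) for di in d)
    while hi - lo > tol:         # g is strictly decreasing on (lo, hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + t * di) for di in d)
```

The statistic vanishes at the sample mean and grows as mu moves away from it, which is what makes the chi-squared calibration of confidence regions possible.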
