Similar Documents (20 results)
1.
A generalized Holm's procedure is proposed which can reject several null hypotheses at each sequential step while still strongly controlling the family-wise error rate, regardless of the dependence among the individual test statistics. The new procedure is more powerful than Holm's procedure when the number of rejections m, with m > 0, is prespecified before the test.
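For context, here is a minimal sketch of the classical Holm step-down procedure that the paper generalizes; the generalized variant, which can reject several hypotheses per step, is not reproduced. The function name and example p-values are illustrative only.

```python
import numpy as np

def holm_step_down(pvals, alpha=0.05):
    """Classical Holm step-down: with sorted p-values p_(1) <= ... <= p_(m),
    reject H_(i) while p_(i) <= alpha / (m - i + 1), stopping at the first
    failure. This is the one-rejection-per-step baseline, not the paper's
    generalized procedure."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for i, idx in enumerate(order):      # i = 0, 1, ..., m-1
        if p[idx] <= alpha / (m - i):
            reject[idx] = True
        else:
            break                        # step-down stops at first failure
    return reject

print(holm_step_down([0.001, 0.01, 0.03, 0.20]))  # [ True  True False False]
```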

2.
Relative risk frailty models are used extensively in analyzing clustered and/or recurrent time-to-event data. In this paper, Laplace's approximation for integrals is applied to the marginal distributions of data arising from parametric relative risk frailty models. Under regularity conditions, the approximate maximum likelihood estimators (MLEs) are consistent, with a rate of convergence that depends on both the number of subjects and the number of members per subject. We compare the approximate MLEs against alternative estimators in a limited simulation and demonstrate the utility of Laplace's approximation by analyzing U.S. patient waiting times to deceased-donor kidney transplant.
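As a rough illustration of the underlying tool (not the paper's frailty-model derivation), here is a one-dimensional Laplace approximation sketch; the integrand and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_approx(logf, bounds):
    """Laplace approximation to int exp(logf(u)) du: expand logf to second
    order around its mode m, giving exp(logf(m)) * sqrt(2*pi / -logf''(m))."""
    m = minimize_scalar(lambda u: -logf(u), bounds=bounds, method="bounded").x
    h = 1e-4
    d2 = (logf(m + h) - 2.0 * logf(m) + logf(m - h)) / h**2  # numerical logf''
    return np.exp(logf(m)) * np.sqrt(2.0 * np.pi / -d2)

# Gamma-type integrand of the kind that appears when a frailty is
# integrated out; the exact value is Gamma(6) / 2**6 = 1.875.
print(laplace_approx(lambda u: 5.0 * np.log(u) - 2.0 * u, bounds=(1e-6, 50.0)))
```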

3.
The integer-valued autoregressive (INAR) model has been widely used in diverse fields. Since identifying the underlying distribution of a time-series model is a crucial step for further inference, we consider a goodness-of-fit test of the Poisson assumption in first-order INAR models. As the test statistic we employ Fisher's dispersion test, chosen for its simplicity, and derive its null limiting distribution. As illustrations, a simulation study and real data analyses are conducted for counts of coal mining disasters, a monthly crime data set from New South Wales, and the annual numbers of worldwide earthquakes.
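For reference, a sketch of the classical i.i.d. version of Fisher's dispersion test follows; the paper's contribution is the null limiting distribution under the INAR(1) model, which is not reproduced here.

```python
import numpy as np
from scipy import stats

def fisher_dispersion_test(x):
    """Classical Fisher index-of-dispersion test for i.i.d. Poisson data:
    D = (n - 1) * s^2 / xbar is approximately chi-square with n - 1 df
    under the Poisson null (variance equals mean)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    D = (n - 1) * x.var(ddof=1) / x.mean()
    pval = stats.chi2.sf(D, df=n - 1)
    return D, pval

rng = np.random.default_rng(0)
print(fisher_dispersion_test(rng.poisson(3.0, size=200)))
```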

4.
5.
A common method of estimating the dependence parameters in multivariate copula models is the maximum likelihood principle, termed Inference From Marginals (IFM); see Joe (1997) [13]. To avoid possible misspecification of the marginal distributions, some authors suggest rank-based procedures for estimating the dependence parameters of a multivariate copula model. A standard approach is maximization of the pseudolikelihood, as discussed in Genest et al. (1995) [9] and Shih and Louis (1995) [23]. Alternative estimators based on the inversion of two multivariate extensions of Kendall's tau, due to Kendall and Babington Smith (1940) [14] and Joe (1990) [12], were used in Genest et al. (2011) [10]. In the literature, dependence has been considered over the whole data space. However, it may be preferable to divide the data into two distinct sets, below and above a threshold, and to evaluate the dependence parameters separately on each set; the two sets may then carry different dependence parameters, which can shed additional light. For example, in drought analysis, precipitation and minimum temperature may be modeled using copulas, and one may find that the dependence between precipitation and minimum temperature is strong when both are below a certain threshold. In this paper, after introducing a trimmed Kendall's tau for such a threshold, we consider modeling dependence using it as a measure. The asymptotic distribution of the trimmed Kendall's tau is investigated, and a test of the null hypothesis of equality between Kendall's tau and the trimmed Kendall's tau is constructed. This testing procedure can be used to test the hypothesis that the data are dependent below a threshold value and independent above it. Explicit forms of the asymptotic distribution of the trimmed Kendall's tau and of the test statistic are also derived for some special families of copulas. Finally, the results of a simulation study and an illustrative example are provided.
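A minimal sketch of one plausible reading of a trimmed Kendall's tau follows — tau computed only on the sub-sample below the thresholds. The thresholds tx and ty and the trimming rule are illustrative assumptions; the paper's exact definition may differ.

```python
import numpy as np
from scipy.stats import kendalltau

def trimmed_kendall_tau(x, y, tx, ty):
    """Kendall's tau computed only on pairs with x < tx and y < ty --
    one plausible formalization of a threshold-trimmed tau."""
    x, y = np.asarray(x), np.asarray(y)
    keep = (x < tx) & (y < ty)
    tau, _ = kendalltau(x[keep], y[keep])
    return tau

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(size=500)           # a dependent pair
print(trimmed_kendall_tau(x, y, 0.0, 0.0))   # tau on the lower region only
```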

6.
7.
This paper proposes an adaptive quasi-maximum likelihood estimator (QMLE) for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the data is unspecified or heavy-tailed, we construct an adaptive QMLE that uses a data-driven scale parameter η_f to account for the discrepancy between the wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal, and it is more efficient when the innovation errors are heavy-tailed. Finally, simulation studies and an application demonstrate its advantage.
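For orientation, a plain Gaussian QMLE sketch for a GARCH(1,1) follows; the paper's adaptive rescaling via η_f is not reproduced, and the starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(returns):
    """Gaussian QMLE for GARCH(1,1): sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}.
    A plain (non-adaptive) sketch; the paper's adaptive estimator further
    rescales the innovation density by a scale parameter eta_f."""
    r = np.asarray(returns, dtype=float)

    def nll(theta):
        w, a, b = theta
        if w <= 0 or a < 0 or b < 0 or a + b >= 1:
            return np.inf                     # enforce stationarity region
        s2 = np.empty_like(r)
        s2[0] = r.var()                       # common initialization choice
        for t in range(1, len(r)):
            s2[t] = w + a * r[t - 1] ** 2 + b * s2[t - 1]
        return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

    res = minimize(nll, x0=[0.1 * r.var(), 0.05, 0.90], method="Nelder-Mead")
    return res.x  # (omega, alpha, beta)
```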

8.
Baker (2008) introduced a new method for constructing multivariate distributions with given marginals based on order statistics. In this paper, we provide a test of independence for a pair of absolutely continuous random variables (X, Y) jointly distributed according to Baker's bivariate distributions. Our purpose is to test the hypothesis that X and Y are independent versus the alternative that X and Y are positively (negatively) quadrant dependent. The asymptotic distribution of the proposed test statistic is investigated. Also, the powers of the proposed test and of the class of distribution-free tests proposed by Kochar and Gupta (1987) are compared empirically via a simulation study.

9.
10.
Collateralized debt obligations (CDOs) were a contributing cause of the 2008 financial crisis in the USA. We use the call function for pricing CDOs. In this paper, we first give uniform and non-uniform bounds on the normal approximation for the call function without a correction term, assuming that the third and fourth moments of the random variables exist, respectively. Second, we present uniform and non-uniform bounds on the normal approximation for the call function with a correction term, under the assumption that the sixth moments of the random variables are finite. Our techniques are Stein's method and the zero-bias transformation.
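As a quick numerical illustration of the quantity the paper's bounds control — the gap between E(W - K)+ for a standardized sum W and its normal limit — here is a Monte Carlo sketch; the sample size, summand distribution, and strike K are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def normal_call(K):
    """E(Z - K)^+ for Z ~ N(0,1), in closed form: phi(K) - K * (1 - Phi(K))."""
    return norm.pdf(K) - K * norm.sf(K)

rng = np.random.default_rng(2)
n, K = 50, 0.5
W = rng.exponential(size=(100_000, n)).sum(axis=1)
W = (W - n) / np.sqrt(n)                     # standardize: mean 0, variance 1
# Monte Carlo value of the call function vs. its normal approximation:
print(np.maximum(W - K, 0).mean(), normal_call(K))
```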

11.
We reconsider the derivation of Blest's (2003) skewness-adjusted version of the classical moment-based coefficient of kurtosis and propose an adaptation of it which generally eliminates the effects of asymmetry a little more successfully. Lower bounds are provided for the two skewness-adjusted kurtosis moment measures as functions of the classical coefficient of skewness. The results of a Monte Carlo experiment designed to investigate the sampling properties of numerous moment-based estimators of the two skewness-adjusted kurtosis measures are used to identify the estimators with the lowest mean squared error for small to medium-sized samples drawn from distributions with varying levels of asymmetry and tail weight.
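For reference, the classical moment coefficients that the adjusted measures build on are computed below; the Blest-type skewness adjustment itself is not reproduced.

```python
import numpy as np

def moment_coefficients(x):
    """Classical moment-based coefficients: skewness sqrt(b1) = m3 / m2^(3/2)
    and kurtosis b2 = m4 / m2^2, from central sample moments m2, m3, m4.
    Skewness-adjusted kurtosis measures modify b2 by a function of b1."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2, m3, m4 = (d ** 2).mean(), (d ** 3).mean(), (d ** 4).mean()
    return m3 / m2 ** 1.5, m4 / m2 ** 2

rng = np.random.default_rng(3)
print(moment_coefficients(rng.gamma(2.0, size=10_000)))  # right-skewed sample
```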

12.
A relevant problem in statistics is drawing conclusions about the shape of the distribution of an experiment from which a sample is drawn. We consider this problem when the available information from the experimental performance cannot be perceived exactly, but rather may be assimilated with fuzzy information (as defined by L.A. Zadeh, and by H. Tanaka, T. Okuda and K. Asai). If the hypothetical distribution is completely specified, the extension of the chi-square goodness-of-fit test on the basis of concepts from fuzzy set theory does not entail difficulties. Nevertheless, if the hypothetical distribution involves unknown parameters, the extension of the chi-square goodness-of-fit test requires the estimation of those parameters from the fuzzy data. The aim of the present paper is to prove that, under certain natural assumptions, the minimum inaccuracy principle of estimation from fuzzy observations (which we suggested in a previous paper as an operative extension of the maximum likelihood principle) supplies a suitable method for this requirement.

13.
We analyze left-truncated and right-censored (LTRC) data using Aalen's linear models. The integrated square error (ISE) is used to select an optimal bandwidth for the weighted least-squares estimator. We also consider a semiparametric approach for the case when the distribution of the left-truncation variable is parameterized. A simulation study is conducted to investigate the performance of the proposed estimators. The approaches are illustrated with the Stanford heart transplant data.

14.
Grubbs's model (Grubbs, Encycl Stat Sci 3:42–549, 1983) is used for comparing several measuring devices, and it is common to assume that the random terms have a normal (or symmetric) distribution. In this paper, we discuss the extension of this model to the class of scale mixtures of skew-normal distributions. Our results provide a useful generalization of the symmetric Grubbs's model (Osorio et al., Comput Stat Data Anal, 53:1249–1263, 2009) and the asymmetric skew-normal model (Montenegro et al., Stat Pap 51:701–715, 2010). We discuss the EM algorithm for parameter estimation and the local influence method (Cook, J Royal Stat Soc Ser B, 48:133–169, 1986) for assessing the robustness of these parameter estimates under some usual perturbation schemes. The results and methods developed in this paper are illustrated with a numerical example.

15.
Technical services staff, along with programmers, supervisors, and frontline librarians, participate in all sorts of systems. Whether they recognize it or not, they are used to interacting with the world through the lens of the systems they work with. In this presentation from the North Carolina Serials Conference, Andreas Orphanides looks at some of the challenges of interacting with the world in terms of systems, discusses the human costs of failing to recognize the limitations of systems, and provides a framework for thinking about systems to help ensure that our systems respect the humanity of their human participants.

16.
In this paper, we review results that have been derived on record values for some well-known probability density functions and, based on m records from Kumaraswamy's distribution, obtain estimators for the two parameters and for the future sth record value. These estimates are derived using the maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are treated as random variables, and estimators for the parameters and for the future sth record value, given m observed past record values, are obtained under the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with real and computer-generated data.
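As a baseline illustration of the likelihood machinery (for a plain i.i.d. sample, not the paper's record-value setting), here is a Kumaraswamy MLE sketch; the sample generation and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kumaraswamy_mle(x):
    """MLE for Kumaraswamy(a, b) with density f(x) = a*b*x^(a-1)*(1-x^a)^(b-1)
    on (0, 1), from an i.i.d. sample. The paper instead estimates (a, b)
    from m record values; this is only the plain-sample sketch."""
    x = np.asarray(x, dtype=float)

    def nll(theta):
        a, b = theta
        if a <= 0 or b <= 0:
            return np.inf
        return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                       + (b - 1) * np.log1p(-x ** a))

    return minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x

rng = np.random.default_rng(4)
a, b = 2.0, 3.0
u = rng.uniform(size=2_000)
x = (1 - (1 - u) ** (1 / b)) ** (1 / a)   # inverse-CDF sampling from F(x) = 1-(1-x^a)^b
print(kumaraswamy_mle(x))
```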

17.
A class of estimators invariant with respect to the selection of a base population is developed for estimating the hazard rates in multiple populations. The class generalizes the estimators of Begun and Reid (J. Amer. Statist. Assoc. 78 (1983) 337) and includes the estimator of Mantel and Haenszel (J. Natl. Cancer Inst. 22 (1959) 719) as a special case. The estimators have explicit forms, and it is shown that their asymptotic covariance matrices are smaller than those of the Begun–Reid estimators when the number of populations is greater than two. A Monte Carlo simulation indicates that the estimators are slightly more efficient than the Cox partial likelihood estimator (Biometrika 62 (2) (1975) 269) for small and medium sample sizes. An example is presented to illustrate the estimators.

18.
The purpose of this paper is to develop a new linear regression model for count data, namely the generalized Poisson–Lindley (GPL) linear model. The GPL linear model is obtained by applying the generalized linear model framework to the GPL distribution. The model parameters are estimated by maximum likelihood. We use the GPL linear model to fit two real data sets and compare it with the Poisson, negative binomial (NB), and Poisson-weighted exponential (P-WE) models for count data. We find that the GPL linear model can fit over-dispersed count data and that it attains the highest log-likelihood and the smallest AIC and BIC values. Consequently, the GPL linear regression model is a valuable alternative to the Poisson, NB, and P-WE models.
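The GPL model is not available in standard libraries, so the sketch below only illustrates the paper's style of comparison — fitting two of the baseline competitors (Poisson and NB) to over-dispersed counts and ranking them by AIC. The simulated data and statsmodels choices are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simulate over-dispersed counts from a log-linear mean.
rng = np.random.default_rng(5)
n = 500
X = sm.add_constant(rng.normal(size=n))
mu = np.exp(0.5 + 0.8 * X[:, 1])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # mean mu, extra-Poisson variance

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print("Poisson AIC:", pois.aic, " NB AIC:", nb.aic)  # NB should win here
```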

19.
The authors state new general results for computing the limits of Blaker's exact confidence interval for the usual one-parameter discrete distributions. Specific results for implementing an accurate and fast algorithm are made explicit for the binomial, negative binomial, Poisson, and hypergeometric models.
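For the binomial case, a brute-force grid-search sketch of Blaker's interval follows — a simple reference implementation, not the accurate and fast algorithm the paper develops; the grid resolution is an illustrative assumption.

```python
import numpy as np
from scipy.stats import binom

def blaker_pvalue(x, n, p):
    """Blaker acceptability at p: the smaller tail probability of x plus the
    largest opposite-tail probability that does not exceed it."""
    k = np.arange(n + 1)
    lower, upper = binom.cdf(k, n, p), binom.sf(k - 1, n, p)  # P(X<=k), P(X>=k)
    p1, p2 = lower[x], upper[x]
    opp = upper[upper <= p1] if p1 <= p2 else lower[lower <= p2]
    return min(p1, p2) + (opp.max() if opp.size else 0.0)

def blaker_ci(x, n, alpha=0.05, grid=2000):
    """Blaker interval = {p : acceptability(p) > alpha}, found by grid search."""
    ps = np.linspace(1e-6, 1 - 1e-6, grid)
    ok = ps[[blaker_pvalue(x, n, p) > alpha for p in ps]]
    return ok[0], ok[-1]

print(blaker_ci(7, 20))
```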

20.
Repeated neuropsychological measurements, such as mini-mental state examination (MMSE) scores, are frequently used in Alzheimer's disease (AD) research to study change in the cognitive function of AD patients. A question of interest among dementia researchers is whether some AD patients exhibit transient "plateaus" of cognitive function in the course of the disease. We consider a statistical approach to this question based on irregularly spaced repeated MMSE scores. We propose an algorithm that formalizes the measurement of an apparent cognitive plateau, together with a procedure that evaluates the evidence for plateaus by applying the algorithm both to the observed data and to data sets simulated from a linear mixed model. We apply these methods to repeated MMSE data from the Michigan Alzheimer's Disease Research Center, finding a high rate of apparent plateaus and also a high rate of false discovery. Simulation studies are also conducted to assess the performance of the algorithm. In general, the false discovery rate of the algorithm is high unless the rate of decline is large compared with the measurement error of the cognitive test. It is argued that these results are not a problem of the specific algorithm chosen but reflect a lack of information concerning the presence of plateaus in the data.
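To make the idea concrete, here is one plausible (hypothetical) formalization of an apparent plateau for irregularly spaced scores — the paper's actual algorithm and its mixed-model null simulation are more involved. The slope tolerance and minimum span are illustrative parameters.

```python
import numpy as np

def apparent_plateau(times, scores, slope_tol=0.5, min_span=1.0):
    """Flag an apparent plateau when the per-year score change between
    successive visits stays within slope_tol for at least min_span years.
    A hypothetical formalization for illustration only."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(scores, dtype=float)
    slopes = np.diff(y) / np.diff(t)          # change per year between visits
    flat = np.abs(slopes) <= slope_tol
    span, best = 0.0, 0.0
    for i, ok in enumerate(flat):
        span = span + (t[i + 1] - t[i]) if ok else 0.0
        best = max(best, span)
    return best >= min_span

# Visits at irregular times; scores hold steady for ~1.5 years mid-course.
print(apparent_plateau([0, 0.5, 1.2, 2.0, 3.1], [28, 27, 27, 27, 23]))  # True
```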
