Found 20 similar documents; search took 31 ms.
1.
Communications in Statistics: Theory and Methods, 2013, 42(4): 785–786
ABSTRACT The concept of generalized order statistics was introduced by Kamps (1995) to unify several concepts used in statistics, such as order statistics, record values, and sequential order statistics. Estimates of the parameters of the Burr type XII distribution are obtained based on generalized order statistics. The maximum likelihood and Bayes methods of estimation are used for this purpose. The Bayes estimates are derived by using the approximation of Lindley (1980). Estimation based on upper records from the Burr model is obtained and compared by means of a Monte Carlo simulation study. Our results specialize to those of AL-Hussaini and Jaheen (1992), which are based on ordinary order statistics.
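For orientation, the Burr XII likelihood can be sketched in the simplest (ordinary complete-sample) case. The sketch below is a minimal illustration, not the authors' generalized-order-statistics or Bayes/Lindley machinery: it uses the fact that, for fixed shape c, the conditional MLE of k has the closed form n / Σ log(1 + x_i^c), and grid-searches over c.

```python
import math, random

def burr12_loglik(x, c, k):
    # Burr XII log-likelihood: f(t) = c*k*t^(c-1)*(1+t^c)^(-(k+1)), t > 0
    return sum(math.log(c) + math.log(k) + (c - 1) * math.log(t)
               - (k + 1) * math.log1p(t ** c) for t in x)

def burr12_mle(x, c_grid=None):
    # Profile likelihood: for fixed c, the MLE of k is n / sum(log(1 + t^c)).
    n = len(x)
    if c_grid is None:
        c_grid = [0.1 * i for i in range(1, 101)]   # crude grid over c
    best = None
    for c in c_grid:
        k = n / sum(math.log1p(t ** c) for t in x)
        ll = burr12_loglik(x, c, k)
        if best is None or ll > best[0]:
            best = (ll, c, k)
    return best[1], best[2]

# sanity check on simulated Burr XII data via the inverse CDF:
# F(t) = 1 - (1 + t^c)^(-k)  =>  t = ((1-u)^(-1/k) - 1)^(1/c)
random.seed(1)
c_true, k_true = 2.0, 3.0
sample = [((1 - random.random()) ** (-1 / k_true) - 1) ** (1 / c_true)
          for _ in range(2000)]
c_hat, k_hat = burr12_mle(sample)
```

The grid step bounds the accuracy of c_hat; a proper implementation would refine the grid or use a one-dimensional optimizer on the profile likelihood.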
2.
Communications in Statistics: Theory and Methods, 2013, 42(12): 2321–2338
Abstract Bhattacharyya and Soejoeti (Bhattacharyya, G. K., Soejoeti, Z. A. (1989). Tampered failure rate model for step-stress accelerated life test. Commun. Statist.—Theory Meth. 18(5):1627–1643.) proposed the TFR model for step-stress accelerated life tests. Under the TFR model, this article proves that the maximum likelihood estimate of the shape parameter is unique for the Weibull distribution in a multiple step-stress accelerated life test, and investigates the accuracy of the maximum likelihood estimate using Monte Carlo simulation.
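The uniqueness result concerns the Weibull shape MLE. In the ordinary complete-sample Weibull case (a simpler setting than the step-stress TFR model itself), the shape MLE solves a strictly monotone profile score equation, so its root is unique and bisection finds it reliably; a minimal sketch:

```python
import math, random

def weibull_shape_mle(x, lo=1e-3, hi=50.0, tol=1e-10):
    # Profile score for the shape b of a complete Weibull sample:
    #   g(b) = sum(x^b * ln x)/sum(x^b) - 1/b - mean(ln x)
    # g is strictly increasing in b, so the root (the MLE) is unique.
    n = len(x)
    mean_log = sum(math.log(t) for t in x) / n
    def g(b):
        s1 = sum(t ** b * math.log(t) for t in x)
        s0 = sum(t ** b for t in x)
        return s1 / s0 - 1.0 / b - mean_log
    while hi - lo > tol:                 # bisection on the monotone score
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    scale = (sum(t ** b for t in x) / n) ** (1.0 / b)   # closed-form given b
    return b, scale

# simulate Weibull(shape=1.5, scale=2.0) via the inverse CDF
random.seed(2)
data = [2.0 * (-math.log(1 - random.random())) ** (1 / 1.5) for _ in range(3000)]
shape_hat, scale_hat = weibull_shape_mle(data)
```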
3.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1605–1616
ABSTRACT We consider an extended growth curve model with k hierarchical within-individuals design matrices. The model includes the one whose mean structure consists of polynomial growth curves with k different degrees. First we propose certain simple estimators of the mean and covariance parameters which are closely related to the MLEs. Using these estimators we construct simultaneous confidence regions for each or all of the k growth curves, extending Fujikoshi [2]. A numerical example with k = 3 is also given.
4.
Communications in Statistics: Theory and Methods, 2013, 42(10): 1951–1980
Abstract The heteroskedasticity-consistent covariance matrix estimator proposed by White [White, H. A. (1980). Heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817–838], also known as HC0, is commonly used in practical applications and is implemented in a number of statistical software packages. Cribari-Neto et al. [Cribari-Neto, F., Ferrari, S. L. P., Cordeiro, G. M. (2000). Improved heteroscedasticity-consistent covariance matrix estimators. Biometrika 87:907–918] have developed a bias-adjustment scheme that delivers bias-corrected White estimators. There are several variants of the original White estimator that are also commonly used by practitioners. These include the HC1, HC2, and HC3 estimators, which have proven to have superior small-sample behavior relative to White's estimator. This paper defines a general bias-correction mechanism that can be applied not only to White's estimator, but also to variants of this estimator such as HC1, HC2, and HC3. Numerical evidence on the usefulness of the proposed corrections is also presented. Overall, the results favor the sequence of improved HC2 estimators.
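The HC0–HC3 variants mentioned here have standard closed forms: a sandwich (X'X)^{-1} X' diag(w_i) X (X'X)^{-1}, where w_i is the squared residual, optionally rescaled by n/(n-k) (HC1), 1/(1-h_i) (HC2), or 1/(1-h_i)^2 (HC3), with h_i the hat-matrix leverages. A minimal numpy sketch of these estimators (not the paper's bias-correction scheme):

```python
import numpy as np

def hc_cov(X, resid, kind="HC0"):
    # Sandwich covariance (X'X)^{-1} X' diag(w_i) X (X'X)^{-1}, where w_i is a
    # transformed squared residual and h_i are the hat-matrix diagonals.
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages h_i
    e2 = resid ** 2
    if kind == "HC0":
        w = e2
    elif kind == "HC1":
        w = e2 * n / (n - k)
    elif kind == "HC2":
        w = e2 / (1 - h)
    elif kind == "HC3":
        w = e2 / (1 - h) ** 2
    else:
        raise ValueError(kind)
    meat = X.T @ (X * w[:, None])
    return XtX_inv @ meat @ XtX_inv

# simulated regression with heteroskedastic errors
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 3, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, x)          # error variance grows with x
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
V0 = hc_cov(X, resid, "HC0")
V3 = hc_cov(X, resid, "HC3")
```

Since HC3 inflates each residual weight relative to HC0, its variance estimates are uniformly at least as large, which is the source of its better small-sample coverage.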
5.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1789–1799
Abstract In a recent article, Hsueh et al. (Hsueh, H.-M., Liu, J.-P., Chen, J. J. (2001). Unconditional exact tests for equivalence or noninferiority for paired binary endpoints. Biometrics 57:478–483.) considered unconditional exact tests for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.
6.
Yan Fan, Journal of Applied Statistics, 2016, 43(14): 2595–2607
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore remarkably important because of its bearing on the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's tests [26] and Shi's non-degenerate tests [21], suffer from the variance estimation and from departures of the likelihood ratios from normality. To circumvent these dilemmas, we propose in this paper an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.
7.
Abstract A generalization of Chauvenet's test (see Bol'shev, L. N. 1969. On tests for rejecting outlying observations. Trudy In-ta prikladnoi Mat. Tblissi Gosudart. univ. 2:159–177. (In Russian); Voinov, V. G., Nikulin, M. N. 1996. Unbiased Estimators and Their Applications. Vol. 2. Kluwer Academic Publishers.) suited to the problem of detecting r outliers in a univariate data set is proposed. In the exponential case, Chauvenet's test can be used. Various modifications of this test were considered by Bol'shev, by Ibrakimov and Khalfina (Ibrakimov, I. A., Khalfina. 1978. Some asymptotic results concerning the Chauvenet test. Teor. Veroyatnost. i Primenen. 23(3):593–597.), and by Greenwood and Nikulin (Greenwood, P. E., Nikulin, M. S. 1996. A Guide to Chi-Squared Testing. New York: John Wiley and Sons, Inc.), depending on the estimation method used: MLE or MVUE. Although procedures for testing one outlier in the exponential model have been investigated by a number of authors, including Chikkagoudar and Kunchur (Chikkagoudar, M. S., Kunchur, S. H. 1983. Distribution of test statistics for multiple outliers in exponential samples. Comm. Stat. Theory and Meth. 12:2127–2142.), Lewis and Fieller (Lewis, T., Fieller, N. R. J. 1979. A recursive algorithm for null distribution for outliers: I. Gamma samples. Technometrics 21:371–376.), Likes (Likes, J. 1966. Distribution of Dixon's statistics in the case of an exponential population. Metrika 11:46–54.), and Kabe (Kabe, D. G. 1970. Testing outliers from an exponential population. Metrika 15:15–18.), only two types of statistics for testing multiple outliers exist. The first is Dixon's, while the second is based on the ratio of the sum of the observations suspected to be outliers to the sum of all observations of the sample. In fact, most of these authors have considered the general case of the gamma model, and the results for the exponential model are given as a special case.
The object of the present communication is to focus on alternative models, namely slippage alternatives (see Barnett, V., Lewis, T. 1978. Outliers in Statistical Data. New York: John Wiley and Sons, Inc.), in exponential samples. We propose a statistic different from the well-known Dixon statistic Dr to test for multiple outliers. The distribution of the test based on this new statistic under slippage alternatives is obtained, and hence tables of critical values are given for various n (the sample size) and r (the number of outliers). The power of the new test is also calculated and compared with the power of the test based on Dixon's statistic (Chikkagoudar, M. S., Kunchur, S. H. 1983. Distribution of test statistics for multiple outliers in exponential samples. Comm. Stat. Theory and Meth. 12:2127–2142.). The power of the test based on the new statistic is greater than that of the test based on Dixon's statistic.
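The second type of statistic described above, the ratio of the sum of the r suspected outliers to the total sum, is straightforward to compute, and its null distribution under i.i.d. exponential sampling can be approximated by Monte Carlo. A hedged sketch (the authors' new statistic and their exact critical-value tables are not reproduced here):

```python
import random

def upper_sum_ratio(sample, r):
    # Ratio of the sum of the r largest observations to the total sum.
    s = sorted(sample, reverse=True)
    return sum(s[:r]) / sum(s)

def mc_critical_value(n, r, alpha=0.05, reps=20000, seed=0):
    # Approximate the (1 - alpha) quantile of the statistic's null
    # distribution under i.i.d. unit-exponential data, by simulation.
    rng = random.Random(seed)
    stats = sorted(upper_sum_ratio([rng.expovariate(1.0) for _ in range(n)], r)
                   for _ in range(reps))
    return stats[int((1 - alpha) * reps)]

cv = mc_critical_value(n=20, r=2, alpha=0.05)
```

One would reject the no-outlier hypothesis when the observed ratio exceeds the simulated critical value; the statistic is scale-invariant, so the unit-rate choice in the simulation is without loss of generality.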
8.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the generalized moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
9.
Trias Wahyuni Rakhmawati, Geert Molenberghs, Geert Verbeke, Christel Faes, Journal of Applied Statistics, 2017, 44(4): 620–641
Since the seminal paper by Cook and Weisberg [9], local influence, next to case deletion, has gained popularity as a tool to detect influential subjects and measurements for a variety of statistical models. For the linear mixed model the approach leads to easily interpretable and computationally convenient expressions, not only highlighting influential subjects, but also showing which aspect of their profile leads to undue influence on the model's fit [17]. Ouwens et al. [24] applied the method to the Poisson-normal generalized linear mixed model (GLMM). Given the model's nonlinear structure, these authors did not derive interpretable components but rather focused on a graphical depiction of influence. In this paper, we consider GLMMs for binary, count, and time-to-event data, with the additional feature of accommodating overdispersion whenever necessary. For each situation, three approaches are considered, based on: (1) purely numerical derivations; (2) using a closed-form expression of the marginal likelihood function; and (3) using an integral representation of this likelihood. Unlike when case deletion is used, this leads to interpretable components, allowing us not only to identify influential subjects, but also to study the cause thereof. The methodology is illustrated in case studies that range over the three data types mentioned.
10.
Communications in Statistics: Theory and Methods, 2013, 42(9): 1515–1529
ABSTRACT This paper develops corrected score tests for heteroskedastic t regression models, thus generalizing results by Cordeiro, Ferrari, and Paula [1] and Cribari-Neto and Ferrari [2] for normal regression models and by Ferrari and Arellano-Valle [3] for homoskedastic t regression models. We present, in matrix notation, Bartlett-type correction formulae to improve score tests in this class of models. The corrected score statistics have a chi-squared distribution to order n^{-1}, where n is the sample size. We apply our main result to a few special models and present simulation results comparing the performance of the usual score tests and their corrected versions.
11.
In this paper, a new survival cure rate model is introduced, considering the Yule–Simon distribution [12] to model the number of concurrent causes. We study some properties of this distribution and of the model arising when the distribution of the competing causes is the Weibull model. We call this distribution the Weibull–Yule–Simon distribution. Maximum likelihood estimation is conducted for the model parameters. A small-scale simulation study indicates satisfactory parameter recovery by the estimation approach. Results are applied to a real data set (melanoma), illustrating that the proposed model can outperform traditional alternative models in terms of model fit.
12.
Liciana V. A. Silveira, Enrico A. Colosimo, José Raimundo de S. Passos, Communications in Statistics: Theory and Methods, 2013, 42(15): 2659–2666
It is common to have experiments in which it is not possible to observe the exact lifetimes but only the intervals where they occur. This sort of data presents a high number of ties and is called grouped or interval-censored survival data. Regression methods for grouped data are available in the statistical literature. The regression structure models the probability of a subject's survival past a visit time conditional on his survival at the previous visit. Two approaches are presented: assuming that lifetimes come from (1) a continuous proportional hazards model or (2) a logistic model. However, there may be situations in which neither model is adequate for a particular data set. This article proposes the generalized log-normal model as an alternative model for discrete survival data. This model was introduced by Chen (1995) and is extended in this article to grouped survival data. A real example related to Chagas disease illustrates the proposed model.
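The regression structure described, the probability of surviving past a visit conditional on survival at the previous visit, reduces in the covariate-free case to the classical life-table estimator. A minimal sketch with hypothetical grouped counts (not the paper's generalized log-normal model):

```python
def life_table(intervals_deaths_at_risk):
    # intervals_deaths_at_risk: list of (deaths d_j, at-risk n_j) per visit
    # interval. The conditional probability of surviving past visit j given
    # survival to visit j-1 is estimated by 1 - d_j/n_j; cumulative survival
    # is the running product of these conditional probabilities.
    surv, out = 1.0, []
    for d, n in intervals_deaths_at_risk:
        cond = 1.0 - d / n
        surv *= cond
        out.append((cond, surv))
    return out

# hypothetical grouped data: (deaths, number at risk) over four visit intervals
est = life_table([(5, 100), (8, 90), (4, 70), (3, 50)])
```

Regression versions of this idea model each conditional probability through a link function (e.g. complementary log-log for the proportional hazards case, logit for the logistic case) with covariates.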
13.
I. Ardoino, E. M. Biganzoli, C. Bajdik, P. J. Lisboa, P. Boracchi, F. Ambrogi, Journal of Applied Statistics, 2012, 39(7): 1409–1421
In cancer research, study of the hazard function provides useful insights into disease dynamics, as it describes the way in which the (conditional) probability of death changes with time. The widely used Cox proportional hazards model employs a stepwise nonparametric estimator for the baseline hazard function, and therefore has limited utility for this purpose. The use of parametric models and/or other approaches that enable direct estimation of the hazard function is often invoked. A recent work by Cox et al. [6] has stimulated the use of the flexible parametric model based on the generalized gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows estimation of different hazard shapes in a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. The flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
14.
Siti Haslinda Mohd Din, Marek Molas, Jolanda Luime, Emmanuel Lesaffre, Journal of Applied Statistics, 2014, 41(8): 1627–1644
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for when BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
15.
This paper presents a new variable weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain some noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weight method provides better clustering performance based on the error rate criterion. Furthermore, the complexity of the proposed SVD approach is less than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
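The FCM building block referred to above has well-known alternating updates: memberships from inverse squared distances raised to 1/(m-1), and centers as membership-weighted means. The sketch below is plain unweighted FCM, not the proposed WFKCL or its SVD variable weighting:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    # Standard fuzzy c-means: alternate membership and center updates.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # centers: membership-weighted means of the data
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distances of every point to every center
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + 1e-12
        # membership update: u_jk proportional to d_jk^(-2/(m-1))
        U = d2 ** (-1.0 / (m - 1))
        U /= U.sum(axis=0)
    return centers, U

# two well-separated synthetic clusters around (0,0) and (3,3)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
```

A fixed iteration count is used for brevity; production code would stop when the membership matrix changes by less than a tolerance.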
16.
Communications in Statistics: Theory and Methods, 2013, 42(7): 1675–1685
ABSTRACT In this article we consider estimating the bivariate survival function from observations where one of the components is subject to left truncation and right censoring and the other is subject to right censoring only. Two types of nonparametric estimators are proposed. One is in the form of an inverse-probability-weighted average (Satten and Datta, 2001) and the other is a generalization of Dabrowska's (1988) estimator. The two are then compared based on their empirical performance.
17.
To deal with the multicollinearity problem, biased estimators with two biasing parameters have recently attracted much research interest. The aim of this article is to compare one of the latest proposals, given by Yang and Chang (2010), with the Liu-type estimator (Liu 2003) and the k-d class estimator (Sakallioglu and Kaciranlar 2008) under the matrix mean squared error criterion. As well as giving these comparisons theoretically, we support the results with extended simulation studies and a real data example, which show the advantages of the proposal of Yang and Chang (2010) over the other proposals as the level of multicollinearity increases.
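For orientation, the classic one-parameter Liu (1993) estimator, a predecessor of the two-parameter Liu-type estimators compared in this abstract, is (X'X + I)^{-1}(X'y + d b_ols); at d = 1 it reduces to OLS, and for d < 1 it shrinks the OLS coefficients. A minimal sketch:

```python
import numpy as np

def liu_estimator(X, y, d):
    # Classic Liu (1993) estimator: (X'X + I)^{-1} (X'y + d * b_ols).
    p = X.shape[1]
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * b_ols)

# two nearly collinear regressors to mimic severe multicollinearity
rng = np.random.default_rng(3)
n = 100
z = rng.normal(size=n)
X = np.column_stack([z, z + rng.normal(scale=0.01, size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
b_liu = liu_estimator(X, y, d=0.5)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

In the eigenbasis of X'X each OLS component is multiplied by (lambda + d)/(lambda + 1), which is strictly less than 1 for d < 1, so the estimator trades a small bias for a large variance reduction when eigenvalues are tiny.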
18.
Analysis of discrete lifetime data under middle-censoring and in the presence of covariates
S. Rao Jammalamadaka, Journal of Applied Statistics, 2015, 42(4): 905–913
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring. In this paper, we consider discrete lifetime data that follow a geometric distribution and are subject to middle censoring. Two major innovations in this paper, compared to the earlier work of Davarzani and Parsian [3], include (i) an extension and generalization to the case where covariates are present along with the data and (ii) an alternate approach and proofs which exploit the simple relationship between the geometric and the exponential distributions, so that the theory is more in line with the work of Iyer et al. [6]. It is also demonstrated that this kind of discretization of lifetimes gives results that are close to those for the original data involving exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [1] is included.
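The geometric middle-censoring likelihood can be sketched directly: an exact observation contributes p(1-p)^{t-1}, while a middle-censored one contributes the probability that the lifetime falls in the unobserved interval. The interval convention below (strict inequalities, support {1, 2, ...}) and the covariate-free setup are simplifying assumptions for illustration, not the paper's covariate model:

```python
import math, random

def loglik(p, exact, intervals):
    # Geometric on {1,2,...}: P(T=t) = p(1-p)^(t-1), S(t) = P(T>t) = (1-p)^t.
    # A middle-censored point with interval (l, r), meaning l < T < r,
    # contributes P(l < T < r) = (1-p)^l - (1-p)^(r-1).
    q = 1.0 - p
    n_e = len(exact)
    ll = n_e * math.log(p) + (sum(exact) - n_e) * math.log(q)
    ll += sum(math.log(q ** l - q ** (r - 1)) for l, r in intervals)
    return ll

def mle(exact, intervals, grid=2000):
    # crude grid maximization over p in (0, 1)
    best = max((loglik(i / grid, exact, intervals), i / grid)
               for i in range(1, grid))
    return best[1]

# simulate geometric(p=0.3) lifetimes and middle-censor about 20% of them
random.seed(4)
p_true = 0.3
data = [1 + int(math.log(1 - random.random()) / math.log(1 - p_true))
        for _ in range(1000)]
exact, intervals = [], []
for t in data:
    if random.random() < 0.2:
        intervals.append((max(t - 2, 0), t + 2))   # T known only to lie inside
    else:
        exact.append(t)
p_hat = mle(exact, intervals)
```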
19.
In this article, a generalized Lévy model is proposed and its parameters are estimated in high-frequency data settings. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, making them preferable to those in Chen et al. (2010). The estimators proposed here also have fast convergence rates and are simple to implement.
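In the simplest special case (Brownian motion with drift, i.e. no jumps), natural high-frequency estimators are the mean increment per unit time for the drift and the realized variance for the volatility; the volatility estimator is in-fill consistent, while the drift requires a long time horizon. A hedged sketch, not the paper's generalized Lévy estimators:

```python
import math, random

def drift_vol_estimates(prices, dt):
    # Estimators from log-price increments sampled every dt units of time:
    #   drift ≈ (sum of increments) / T,  vol^2 ≈ (realized variance) / T,
    # where T = dt * (number of increments) is the total observation window.
    incs = [b - a for a, b in zip(prices, prices[1:])]
    T = dt * len(incs)
    drift = sum(incs) / T
    vol2 = sum(d * d for d in incs) / T
    return drift, math.sqrt(vol2)

# simulate Brownian motion with drift 0.1 and volatility 0.2 at 1-second
# ticks (time measured in trading years: 252 days of 6.5 hours)
random.seed(5)
dt, n = 1.0 / (252 * 6.5 * 3600), 100000
mu, sigma = 0.1, 0.2
x, path = 0.0, [0.0]
for _ in range(n):
    x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    path.append(x)
drift_hat, vol_hat = drift_vol_estimates(path, dt)
```

On this short window the volatility estimate is already very accurate, whereas the drift estimate is dominated by noise; this asymmetry is exactly why separating the two asymptotics matters in high-frequency settings.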
20.
This article proposes an asymptotic expansion for the Studentized linear discriminant function using two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality. The result derived by Anderson (1973) plays an important role in deciding the cut-off point that controls the probabilities of misclassification. This article extends Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulation are also presented.
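The complete-data linear discriminant function whose cut-off behavior the paper studies is the classical W statistic: W(x) = (x - (m1 + m2)/2)' S^{-1} (m1 - m2), classifying x to population 1 when W exceeds the cut-off (zero below). A minimal complete-data sketch; the two-step monotone missing-data extension is not shown:

```python
import numpy as np

def w_statistic(x, mean1, mean2, pooled_cov_inv):
    # Linear discriminant function: classify to population 1 when W(x) > 0.
    return (x - 0.5 * (mean1 + mean2)) @ pooled_cov_inv @ (mean1 - mean2)

# two bivariate normal training samples with a Mahalanobis distance of 2
rng = np.random.default_rng(6)
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
S1 = rng.multivariate_normal(mu1, np.eye(2), 200)
S2 = rng.multivariate_normal(mu2, np.eye(2), 200)
m1, m2 = S1.mean(axis=0), S2.mean(axis=0)
pooled = ((S1 - m1).T @ (S1 - m1) + (S2 - m2).T @ (S2 - m2)) / (len(S1) + len(S2) - 2)
Sinv = np.linalg.inv(pooled)
# apparent misclassification rates on the training samples
err1 = np.mean([w_statistic(x, m1, m2, Sinv) <= 0 for x in S1])
err2 = np.mean([w_statistic(x, m1, m2, Sinv) > 0 for x in S2])
```

With a Mahalanobis distance of 2 between the populations, the optimal misclassification probability is about Phi(-1), roughly 0.16, which the empirical rates should approximate.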