Found 20 similar documents; search time: 31 ms
1.
Siti Haslinda Mohd Din Marek Molas Jolanda Luime Emmanuel Lesaffre 《Journal of applied statistics》2014,41(8):1627-1644
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
2.
This paper presents a new variable weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with the k-means, FCM and KCL algorithms with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weight method provides better clustering performance based on the error rate criterion. Furthermore, the complexity of the proposed SVD approach is less than that of Pal et al. [17], Wang et al. [19] and Hung et al. [9].
3.
Trias Wahyuni Rakhmawati Geert Molenberghs Geert Verbeke Christel Faes 《Journal of applied statistics》2017,44(4):620-641
Since the seminal paper by Cook and Weisberg [9], local influence, next to case deletion, has gained popularity as a tool to detect influential subjects and measurements for a variety of statistical models. For the linear mixed model the approach leads to easily interpretable and computationally convenient expressions, highlighting not only influential subjects, but also which aspect of their profile leads to undue influence on the model's fit [17]. Ouwens et al. [24] applied the method to the Poisson-normal generalized linear mixed model (GLMM). Given the model's nonlinear structure, these authors did not derive interpretable components but rather focused on a graphical depiction of influence. In this paper, we consider GLMMs for binary, count, and time-to-event data, with the additional feature of accommodating overdispersion whenever necessary. For each situation, three approaches are considered, based on: (1) purely numerical derivations; (2) a closed-form expression of the marginal likelihood function; and (3) an integral representation of this likelihood. Unlike with case deletion, this leads to interpretable components, allowing us not only to identify influential subjects, but also to study the cause of their influence. The methodology is illustrated in case studies spanning the three data types mentioned.
4.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the Generalized Moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
5.
Guangyu Mao 《Econometric Reviews》2018,37(5):491-506
This article is concerned with sphericity tests for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics that are similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
6.
Analysis of discrete lifetime data under middle-censoring and in the presence of covariates
S. Rao Jammalamadaka 《Journal of applied statistics》2015,42(4):905-913
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring as special cases. In this paper, we consider discrete lifetime data that follow a geometric distribution subject to middle censoring. Two major innovations of this paper, compared to the earlier work of Davarzani and Parsian [3], are (i) an extension and generalization to the case where covariates are present along with the data and (ii) an alternate approach and proofs which exploit the simple relationship between the geometric and exponential distributions, so that the theory is more in line with the work of Iyer et al. [6]. It is also demonstrated that this kind of discretization of lifetimes gives results close to those for the original data involving exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox [1] is included.
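The middle-censored geometric likelihood lends itself to a short numerical sketch. The following is a minimal illustration with hypothetical data and no covariates, assuming the pmf P(T = t) = p(1-p)^(t-1) on {1, 2, ...}, so that a lifetime falling strictly inside its interval (l, r) contributes P(l < T < r) = (1-p)^l - (1-p)^(r-1) to the likelihood; it is not the paper's estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: exactly observed lifetimes, plus middle-censoring
# intervals (l, r) for observations hidden inside their random interval.
exact = np.array([1, 2, 2, 4, 7, 3])
intervals = [(2, 6), (1, 5)]

def neg_log_lik(p):
    q = 1.0 - p
    # Exact observations: log P(T = t) = log p + (t - 1) log q
    ll = np.sum(np.log(p) + (exact - 1) * np.log(q))
    # Censored observations: log P(l < T < r) = log(q**l - q**(r - 1))
    for l, r in intervals:
        ll += np.log(q**l - q**(r - 1))
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
p_hat = res.x  # maximum likelihood estimate of the geometric parameter
```

The censored terms use only the interval endpoints, which is what makes middle censoring tractable for the geometric distribution.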
7.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedures of Bai (1997) based on the asymptotic distribution under a shrinking shift framework, Elliott and Müller (2007) based on inverting a test locally invariant to the magnitude of break, Eo and Morley (2015) based on inverting a likelihood ratio test, and various bootstrap procedures. On the basis of achieving an exact coverage rate that is closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes with a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and dealing with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
8.
Buffered Autoregressive Models With Conditional Heteroscedasticity: An Application to Exchange Rates
This article introduces a new model, the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, an extension of the BAR model in Li et al. (2015), can capture the buffering phenomenon of a time series in both the conditional mean and the conditional variance, and thus provides a new way to study the nonlinearity of time series. Compared with the existing AR-GARCH and threshold AR-GARCH models, an application to several exchange rates highlights the importance of the BAR-GARCH model.
9.
Abhik Ghosh 《Journal of applied statistics》2015,42(9):2056-2072
The density power divergence (DPD) measure, defined in terms of a single tuning parameter α, has proved to be a popular tool in the area of robust estimation [1]. Recently, Ghosh and Basu [5] rigorously established the asymptotic properties of the minimum density power divergence estimators (MDPDEs) in the case of independent non-homogeneous observations. In this paper, we present an extensive numerical study of the performance of the method for linear regression, the most common setup with non-homogeneous data. In addition, we extend the existing methods for selecting the optimal robustness tuning parameter from the case of independent and identically distributed (i.i.d.) data to the case of non-homogeneous observations; proper selection of the tuning parameter is critical to the appropriateness of the resulting analysis. This selection is explored in the context of the linear regression problem through an extensive numerical study involving real and simulated data.
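For reference, the DPD between the true density g and a model density f is usually written as follows (a sketch in the standard notation of Basu et al.; the paper's exact notation may differ):

```latex
d_\alpha(g, f) = \int \left\{ f^{1+\alpha}(x)
  - \left(1 + \tfrac{1}{\alpha}\right) g(x)\, f^{\alpha}(x)
  + \tfrac{1}{\alpha}\, g^{1+\alpha}(x) \right\} dx, \qquad \alpha > 0,
```

which approaches the Kullback-Leibler divergence as α → 0 and equals the squared L2 distance at α = 1; larger α trades efficiency for robustness, which is why the choice of the tuning parameter matters.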
10.
Firoozeh Rivaz 《Journal of applied statistics》2016,43(7):1335-1348
This paper deals with the problem of adding air pollution monitoring stations in the city of Tehran for efficient spatial prediction. As the data are multivariate and skewed, we introduce two multivariate skew models by developing the univariate skew Gaussian random field proposed by Zareifard and Jafari Khaledi [21]. These models provide extensions of the linear model of coregionalization to non-Gaussian data. In the Bayesian framework, the optimal network design is found based on the maximum entropy criterion. A Markov chain Monte Carlo algorithm is developed to implement posterior inference. Finally, the applicability of the two proposed models is demonstrated by analyzing an air pollution data set.
11.
We propose a Bayesian approach for inference in a dynamic disequilibrium model. To circumvent the difficulties raised by the Maddala and Nelson (1974) specification in the dynamic case, we analyze a dynamic extended version of the disequilibrium model of Ginsburgh et al. (1980). We develop a Gibbs sampler based on the simulation of the missing observations. The feasibility of the approach is illustrated by an empirical analysis of the Polish credit market, for which we conduct a specification search using the posterior deviance criterion of Spiegelhalter et al. (2002).
12.
Yan Fan 《Journal of applied statistics》2016,43(14):2595-2607
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore remarkably important for the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's tests [26] and Shi's non-degenerate tests [21], suffer from variance estimation issues and from departures of the likelihood ratios from normality. To circumvent these dilemmas, we propose in this paper empirical likelihood ratio (ELR) tests for model selection. Following Shi [21], a bias correction method is proposed for the ELR tests to enhance their performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR tests.
13.
Coppi et al. [7] applied Yang and Wu's [20] idea to propose a possibilistic k-means (PkM) clustering algorithm for LR-type fuzzy numbers. The memberships in the objective function of PkM no longer need to satisfy the fuzzy k-means constraint that the memberships of a data point across classes sum to one. However, the clustering performance of PkM depends on the initialization and the weighting exponent. In this paper, we propose a robust clustering method based on a self-updating procedure. The proposed algorithm not only solves the initialization problem but also obtains a good clustering result. Several numerical examples demonstrate the effectiveness and accuracy of the proposed clustering method, especially its robustness to initial values and noise. Finally, three real fuzzy data sets are used to illustrate the superiority of the proposed algorithm.
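For context, the classical possibilistic objective of Krishnapuram and Keller, which the PkM formulation adapts to LR-type fuzzy numbers, replaces the sum-to-one constraint with a penalty on small memberships (a sketch in standard notation; d(x_j, v_i) would be replaced by the paper's distance between LR-type fuzzy numbers):

```latex
J_m(U, V) = \sum_{i=1}^{k} \sum_{j=1}^{n} u_{ij}^{m}\, d^{2}(x_j, v_i)
  + \sum_{i=1}^{k} \eta_i \sum_{j=1}^{n} \left(1 - u_{ij}\right)^{m},
```

where m > 1 is the weighting exponent and η_i > 0 calibrates the penalty for cluster i; because each u_ij is optimized independently, the result is sensitive to initialization, which is the issue the self-updating procedure addresses.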
14.
In this paper we propose a new lifetime model for multivariate survival data in the presence of surviving fractions and examine some of its properties. Its genesis lies in situations in which there are m types of unobservable competing causes, where each cause is related to a time of occurrence of an event of interest. Our model is a multivariate extension of the univariate survival cure rate model proposed by Rodrigues et al. [37]. The inferential approach exploits maximum likelihood tools. We perform a simulation study to verify the asymptotic properties of the maximum likelihood estimators; the simulation study also focuses on the size and power of the likelihood ratio test. The methodology is illustrated on a real customer churn data set.
15.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the false discovery rate (FDR) via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: the introduction of a fudge factor into the test statistic aims at deflating large values of the test statistic caused by small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and use simulation studies to investigate the power and the control of the FDR of the considered methods.
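The role of the fudge factor can be sketched in a few lines: it enters the denominator of the SAM statistic, so genes with tiny standard errors no longer yield inflated statistics. This is a minimal illustration with hypothetical simulated data; the fudge factor s0 is fixed rather than chosen by one of the methods the article compares.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical expression matrix: 100 genes, two groups of 5 arrays each.
x = rng.normal(size=(100, 5))
y = rng.normal(loc=0.5, size=(100, 5))

def sam_statistic(x, y, s0):
    """SAM-style moderated t: d_i = (mean difference) / (s_i + s0),
    where the fudge factor s0 deflates statistics of low-variance genes."""
    nx, ny = x.shape[1], y.shape[1]
    diff = y.mean(axis=1) - x.mean(axis=1)
    pooled = ((nx - 1) * x.var(axis=1, ddof=1)
              + (ny - 1) * y.var(axis=1, ddof=1)) / (nx + ny - 2)
    s = np.sqrt(pooled * (1 / nx + 1 / ny))  # per-gene standard error s_i
    return diff / (s + s0)

d_plain = sam_statistic(x, y, s0=0.0)  # ordinary t-type statistic
d_fudge = sam_statistic(x, y, s0=0.2)  # shrunk toward zero, most for small s_i
```

Since s0 only enlarges the denominator, every |d_i| shrinks, with the largest relative shrinkage for genes whose s_i is smallest.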
16.
Artūras Juodis 《Econometric Reviews》2018,37(6):650-693
This article considers estimation of Panel Vector Autoregressive models of order 1 (PVAR(1)), with a focus on fixed-T consistent estimation methods in first differences (FD) with additional strictly exogenous regressors. Additional results for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010) are provided. Furthermore, we simplify the analysis of Binder et al. (2005) by providing additional analytical results, and extend the original model to account for possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in the three-wave panel the log-likelihood function of the unrestricted Transformed Maximum Likelihood (TML) estimator might violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.
17.
Wen-Liang Hung Shou-Jen Chang-Chien Miin-Shen Yang 《Journal of applied statistics》2015,42(10):2220-2232
This paper proposes an intuitive clustering algorithm capable of automatically self-organizing data groups based on the original data structure. Comparisons between the proposed algorithm and the EM [1] and spherical k-means [7] algorithms are given. The numerical results show the effectiveness of the proposed algorithm, using the correct classification rate and the adjusted Rand index as evaluation criteria [5,6]. In 1995, Mayor and Queloz announced the detection of the first extrasolar planet (exoplanet) around a Sun-like star. Since then, observational efforts of astronomers have led to the detection of more than 1000 exoplanets. These discoveries may provide important information for understanding the formation and evolution of planetary systems. The proposed clustering algorithm is therefore used to study the data gathered on exoplanets. Two main implications are also suggested: (1) there are three major clusters, which correspond to exoplanets in the regimes of disc, ongoing tidal and tidal interactions, respectively, and (2) stellar metallicity does not play a key role in exoplanet migration.
18.
I. Ardoino E. M. Biganzoli C. Bajdik P. J. Lisboa P. Boracchi F. Ambrogi 《Journal of applied statistics》2012,39(7):1409-1421
In cancer research, study of the hazard function provides useful insights into disease dynamics, as it describes the way in which the (conditional) probability of death changes with time. The widely used Cox proportional hazards model employs a stepwise nonparametric estimator of the baseline hazard function, and therefore has limited utility here. The use of parametric models and/or other approaches that enable direct estimation of the hazard function is often invoked. Recent work by Cox et al. [6] has stimulated the use of a flexible parametric model based on the generalized gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows estimation of different hazard shapes within a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. A flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
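A quick illustration of that flexibility: using SciPy's generalized gamma family (shape parameters a and c; a = 1 reduces to a Weibull with shape c, which may differ from the parameterization used in the paper), the hazard h(t) = f(t)/S(t) can be made increasing or decreasing purely by varying the shapes.

```python
import numpy as np
from scipy.stats import gengamma

t = np.linspace(0.1, 5.0, 50)

def hazard(a, c):
    """Hazard function h(t) = f(t) / S(t) of the generalized gamma family."""
    return gengamma.pdf(t, a, c) / gengamma.sf(t, a, c)

h_increasing = hazard(1.0, 2.0)  # Weibull with shape 2: hazard rises with t
h_decreasing = hazard(1.0, 0.5)  # Weibull with shape 0.5: hazard falls with t
```

Other (a, c) combinations yield unimodal or bathtub-like shapes, which is what makes the GG family attractive for exploring hazard dynamics in a single framework.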
19.
Biao Zhang 《Econometric Reviews》2016,35(2):201-231
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for the estimation of average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments employing three estimating functions, which can generate estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
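The AIPW construction this paper builds on can be sketched in a short simulation. This is a hypothetical illustration, not the paper's hybrid estimator: for simplicity the working propensity score uses the true model, and the two working outcome regressions are fit by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-0.5 * x))          # working propensity score
t = rng.binomial(1, e)
y = 1.0 + 2.0 * t + x + rng.normal(size=n)  # data-generating ATE = 2

def fit_linear(xs, ys):
    """Least-squares fit of y on (1, x); returns a prediction function."""
    X = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return lambda z: beta[0] + beta[1] * z

# Working outcome regressions, fit separately on treated and controls.
m1 = fit_linear(x[t == 1], y[t == 1])
m0 = fit_linear(x[t == 0], y[t == 0])

# AIPW: IPW terms augmented by regression predictions (doubly robust).
tau = (np.mean(t * (y - m1(x)) / e + m1(x))
       - np.mean((1 - t) * (y - m0(x)) / (1 - e) + m0(x)))
```

The estimator stays consistent if either the propensity score or the outcome regressions are correctly specified, which is the double robustness the paper's three-estimating-function hybrid is designed to retain while improving efficiency.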
20.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed using the asymptotic approach of Mitchell et al. (1994) for the more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then developed. A lower bound for the criterion is also obtained, and numerical results show that this lower bound is tight. The established results generalize those of Yue et al. (2011) from symmetrical to asymmetrical U-type designs.