1.
Tony Vangeneugden, Geert Molenberghs, Geert Verbeke, Clarice G.B. Demétrio. Communications in Statistics - Theory and Methods, 2014, 43(19): 4164-4178
In hierarchical data settings, be it of a longitudinal, spatial, multi-level, clustered, or otherwise repeated nature, often the association between repeated measurements attracts at least part of the scientific interest. Quantifying the association frequently takes the form of a correlation function, including but not limited to intraclass correlation. Vangeneugden et al. (2010) derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models. Here, we consider the extended model family proposed by Molenberghs et al. (2010). This family flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. The family allows for closed-form means, variance functions, and correlation functions, for a variety of outcome types and link functions. Unfortunately, for binary data with logit link, closed forms cannot be obtained. This is in contrast with the probit link, for which such closed forms can be derived. We therefore concentrate on the probit case. It is of interest not only in its own right, but also as an instrument to approximate the logit case, thanks to the well-known probit-logit 'conversion.' Next to the general situation, some important special cases such as exchangeable clustered outcomes receive attention because they produce insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. (2010) and with approximations derived for the so-called logistic-beta-normal combined model. A simulation study explores the performance of the proposed method. Data from a schizophrenia trial are analyzed and correlation functions derived.
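The probit-logit 'conversion' the abstract leans on can be checked numerically: the logistic CDF is close to a rescaled normal CDF, with scaling constant 16√3/(15π) ≈ 0.588 (equivalently, logit ≈ 1.70 × probit). A self-contained sketch; the grid bounds and step are illustrative choices:

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Scaling constant 16*sqrt(3)/(15*pi) ~ 0.588 (equivalently, logit ~ 1.70 * probit).
c = 16.0 * math.sqrt(3.0) / (15.0 * math.pi)

grid = [i / 100.0 for i in range(-600, 601)]
max_err = max(abs(logistic_cdf(x) - normal_cdf(c * x)) for x in grid)
print(round(max_err, 4))  # the two CDFs agree to within about 0.01
```

The small maximum discrepancy is what makes probit-based closed forms a usable stand-in for the logit case.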
2.
This article proposes various Searls-type ratio imputation methods (STRIM) along the lines of Ahmed et al. (2006). It is a well-known fact that the optimal ratio-type estimator attains the MSE of the regression estimator (or optimal difference estimator), but when using the Searls-type transformation (STT) (Searls, 1964) this may not always happen. These STRIM are shown to perform better than the imputation procedures of Ahmed et al. (2006). The STRIM may even outperform the Searls-type difference imputation methods (STDIM) proposed in our earlier work (Bhushan and Pandey, 2016). The study concludes with a numerical study along with a theoretical comparison.
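The baseline these methods improve on is ordinary ratio imputation: estimate R from respondents as the ratio of the y-total to the x-total, and impute R·x for each nonrespondent. A toy sketch with hypothetical numbers; the Searls-type rescaling constants of the STRIM themselves are not reproduced:

```python
# Respondents provide (x, y); nonrespondents provide only the auxiliary x.
respondents = [(10.0, 21.0), (12.0, 25.0), (14.0, 30.0), (16.0, 33.0)]
nonresp_x = [11.0, 15.0]

# Ratio imputation: R_hat = sum(y_r) / sum(x_r), impute y_i = R_hat * x_i.
r_hat = sum(y for _, y in respondents) / sum(x for x, _ in respondents)
imputed = [r_hat * x for x in nonresp_x]

# Mean of the completed data set.
y_all = [y for _, y in respondents] + imputed
print(sum(y_all) / len(y_all))
```

A Searls-type variant would further multiply the resulting estimator by a constant chosen to minimize its MSE.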
3.
Using the medical data analyzed by Kang et al. (2007), a Bayesian procedure is applied to obtain control limits for the coefficient of variation. Reference and probability matching priors are derived for a common coefficient of variation across the range of sample values. By simulating the posterior predictive density function of a future coefficient of variation, it is shown that the control limits are effectively identical to those obtained by Kang et al. (2007) for the specific dataset they used. This article illustrates the flexibility and unique features of the Bayesian simulation method for obtaining posterior distributions, predictive intervals, and run-lengths in the case of the coefficient of variation. A simulation study shows that the 95% Bayesian confidence intervals for the coefficient of variation have the correct frequentist coverage.
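The simulation side of this can be sketched with a simpler prior: under the standard noninformative prior p(μ, σ²) ∝ 1/σ², draw (μ, σ²) from the posterior, simulate a future sample, and take extreme quantiles of its CV as control limits. This is a hedged stand-in for the reference and probability matching priors actually used; all constants are illustrative:

```python
import random, statistics

random.seed(1)

def posterior_predictive_cv_limits(data, m, draws=4000, alpha=0.0027):
    """Simulate the posterior predictive density of the CV of a future sample
    of size m, under p(mu, sigma^2) proportional to 1/sigma^2 (a simplified
    stand-in for the paper's reference / probability matching priors)."""
    n = len(data)
    xbar = statistics.fmean(data)
    s2 = statistics.variance(data)
    cvs = []
    for _ in range(draws):
        # sigma^2 | data  ~  (n - 1) * s^2 / chi^2_{n-1}
        chi2 = random.gammavariate((n - 1) / 2.0, 2.0)
        sigma2 = (n - 1) * s2 / chi2
        mu = random.gauss(xbar, (sigma2 / n) ** 0.5)
        future = [random.gauss(mu, sigma2 ** 0.5) for _ in range(m)]
        cvs.append(statistics.stdev(future) / statistics.fmean(future))
    cvs.sort()
    return cvs[int(alpha / 2 * draws)], cvs[int((1 - alpha / 2) * draws) - 1]

data = [random.gauss(100.0, 8.0) for _ in range(50)]  # in-control historical data
lcl, ucl = posterior_predictive_cv_limits(data, m=10)
print(lcl, ucl)
```

Future samples whose CV falls outside (lcl, ucl) would signal an out-of-control process.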
4.
Simard et al. [16,17] proposed a transformation distance called "tangent distance" (TD), which makes pattern recognition more efficient. The key idea is to construct a distance measure that is invariant with respect to some chosen transformations. In this research, we provide a method using an adaptive TD based on an idea inspired by "discriminant adaptive nearest neighbor" [7]. This method is relatively simple compared with many more complicated ones. A real handwriting recognition data set is used to illustrate our new method. Our results demonstrate that the proposed method gives lower classification error rates than standard implementations of neural networks and support vector machines, and is as good as several other more complicated approaches.
5.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals derived by us using the Jeffreys' and probability matching priors. The intervals obtained from the Jeffreys' prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test, using Bayesian methods, are compared to tests used by Krishnamoorthy and Thomson (2004). The Jeffreys' prior, the probability matching prior, and two other priors are used.
6.
Mi-Hwa Ko. Communications in Statistics - Theory and Methods, 2018, 47(3): 671-680
In this article, we study complete convergence for sequences of coordinatewise asymptotically negatively associated random vectors in Hilbert spaces. We also show that some related results for coordinatewise negatively associated random vectors in Huan, Quang, and Thuan (2014) still hold under this concept.
7.
This article considers several estimators for estimating the ridge parameter k for the multinomial logit model, based on the work of Khalaf and Shukur (2005), Alkhamisi et al. (2006), and Muniz et al. (2012). The mean square error (MSE) is considered as the performance criterion. A simulation study has been conducted to compare the performance of the estimators. Based on the simulation study, we found that increasing the correlation between the independent variables and the number of regressors has a negative effect on the MSE. However, when the sample size increases, the MSE decreases even when the correlation between the independent variables is large. Based on the minimum MSE criterion, some useful estimators for estimating the ridge parameter k are recommended to practitioners.
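The ridge idea these estimators tune is easiest to see in the linear model: adding k to the diagonal of X'X shrinks the coefficient vector, which can lower MSE under collinearity. A minimal two-predictor sketch of plain ridge regression, not the multinomial logit versions compared in the article; data and constants are hypothetical:

```python
import random

random.seed(7)

def ridge_2d(X, y, k):
    """Ridge estimator (X'X + k I)^{-1} X'y for two predictors (k = 0 gives OLS)."""
    a = sum(x[0] * x[0] for x in X) + k
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + k
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Strongly collinear design: the second predictor is the first plus small noise.
n = 100
x1 = [random.gauss(0, 1) for _ in range(n)]
X = [(u, u + random.gauss(0, 0.05)) for u in x1]
y = [1.0 * u + 1.0 * v + random.gauss(0, 1) for u, v in X]

ols = ridge_2d(X, y, 0.0)
ridge = ridge_2d(X, y, 5.0)
norm = lambda b: (b[0] ** 2 + b[1] ** 2) ** 0.5
print(norm(ols), norm(ridge))  # ridge shrinks the coefficient vector
```

Choosing k well, the subject of the article, trades this shrinkage bias against the variance inflation caused by collinearity.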
8.
When a sufficient correlation between the study variable and the auxiliary variable exists, the ranks of the auxiliary variable are also correlated with the study variable, and thus these ranks can be used as an effective tool in increasing the precision of an estimator. In this paper, we propose a new improved estimator of the finite population mean that incorporates the supplementary information in the form of: (i) the auxiliary variable and (ii) the ranks of the auxiliary variable. Mathematical expressions for the bias and the mean-squared error of the proposed estimator are derived under the first order of approximation. The theoretical and empirical studies reveal that the proposed estimator always performs better than the usual mean, ratio, product, exponential-ratio and -product, and classical regression estimators, as well as the estimators of Rao (1991), Singh et al. (2009), Shabbir and Gupta (2010), and Grover and Kaur (2011, 2014).
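For context, the simplest of the benchmark estimators, the classical ratio estimator, already exploits a known auxiliary mean: multiply the sample mean of y by the ratio of the known population mean of x to its sample mean. A toy sketch with hypothetical numbers; the proposed rank-augmented estimator is not reproduced:

```python
# Toy population of (x, y) pairs: y is roughly proportional to the auxiliary x.
population = [(2.0, 5.1), (3.0, 7.2), (4.0, 8.9), (5.0, 11.3),
              (6.0, 12.8), (7.0, 15.2), (8.0, 16.9), (9.0, 19.1)]
X_bar = sum(x for x, _ in population) / len(population)  # known auxiliary mean

sample = population[::2]  # a systematic sample of size 4
x_bar = sum(x for x, _ in sample) / len(sample)
y_bar = sum(y for _, y in sample) / len(sample)

# Classical ratio estimator of the population mean of y.
y_ratio = y_bar * X_bar / x_bar
Y_bar = sum(y for _, y in population) / len(population)  # the true target
print(y_bar, y_ratio, Y_bar)
```

Here the ratio estimator lands much closer to the true mean than the plain sample mean, which is the gain the rank-based refinements push further.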
9.
Recently, Abbasnejad et al. (2010) proposed a measure of uncertainty based on the survival function, called the survival entropy of order α. They also proposed a dynamic form of the survival entropy of order α. In this paper, we derive the weighted forms of these measures. The properties of the new measures are also discussed.
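The order-α survival entropy is, in my reading of the measure, S_α(X) = (1/(1-α)) log ∫ F̄(x)^α dx over [0, ∞). A numerical check against the closed form for an exponential distribution, for which the integral is 1/(αλ); the integration bounds and step are illustrative:

```python
import math

def survival_entropy(sf, alpha, upper, steps=200000):
    """Order-alpha survival entropy (1/(1-alpha)) * ln( integral of sf(x)^alpha ),
    computed by the trapezoidal rule on [0, upper]."""
    h = upper / steps
    total = 0.5 * (sf(0.0) ** alpha + sf(upper) ** alpha)
    for i in range(1, steps):
        total += sf(i * h) ** alpha
    return math.log(total * h) / (1.0 - alpha)

lam, alpha = 2.0, 3.0
numeric = survival_entropy(lambda x: math.exp(-lam * x), alpha, upper=20.0)
exact = -math.log(alpha * lam) / (1.0 - alpha)  # closed form for Exp(lam)
print(numeric, exact)
```

The weighted versions studied in the paper insert a weight function into the same integral; they are not reproduced here.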
10.
Communications in Statistics - Theory and Methods, 2013, 42(6): 1019-1030
In this paper, we present a modified Kelly and Rice method for testing synergism. This approach is consistent with Berenbaum's [1-3] framework for additivity. The delta method [4] is applied to obtain the estimated variance for the predicted additivity proportion. A Monte Carlo simulation study evaluating the method's performance, i.e., global overall tests for synergism, is also discussed. Kelly and Rice [5] do not provide a correct test statistic because the variance is underestimated. Hence, the performance of the Kelly-Rice [5] method is generally anti-conservative, based on the simulation findings. In addition, for larger sample sizes, the overall test of synergism with χ²(r) from the modified Kelly and Rice method is better than that with χ²(1).
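The delta-method step can be illustrated on a simpler functional than the additivity proportion: for g(p̂) = logit(p̂), Var(g(p̂)) ≈ g'(p)² Var(p̂) = 1/(n p (1-p)). A sketch comparing this approximation with a direct simulation; the binomial setting and all constants are illustrative, not the paper's synergism model:

```python
import math, random, statistics

random.seed(42)

n, p, reps = 100, 0.3, 20000

def logit(q):
    return math.log(q / (1.0 - q))

# Delta method: Var(g(p_hat)) ~ g'(p)^2 * Var(p_hat); for g = logit this is 1/(n p (1-p)).
delta_var = 1.0 / (n * p * (1.0 - p))

# Direct simulation of the variance of logit(p_hat).
vals = []
for _ in range(reps):
    p_hat = sum(1 for _ in range(n) if random.random() < p) / n
    vals.append(logit(p_hat))
sim_var = statistics.variance(vals)
print(delta_var, sim_var)
```

An underestimated variance of this kind is exactly what makes a test statistic anti-conservative, which is the defect the modification corrects.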
11.
Lindeman et al. [12] provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement the variance decomposition procedure advanced by Lindeman et al. [12]. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. [12] variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow for more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of our proposed technique against the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
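The averaging-over-orderings decomposition of Lindeman et al. [12] (often called LMG) can be sketched directly: for each of the p! entry orders, record each predictor's increment to R² when it enters, then average. A small pure-Python sketch with three hypothetical predictors; the dominance and sensitivity statistics of the paper are not reproduced:

```python
import itertools, random

random.seed(3)

def r_squared(X_cols, y):
    """R^2 of OLS of y on the given columns (with intercept), via normal equations."""
    n = len(y)
    cols = [[1.0] * n] + [list(c) for c in X_cols]
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    fitted = [sum(beta[j] * cols[j][t] for j in range(k)) for t in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[t] - fitted[t]) ** 2 for t in range(n))
    ss_tot = sum((yt - ybar) ** 2 for yt in y)
    return 1.0 - ss_res / ss_tot

def lmg_shares(X_cols, y):
    """Average each predictor's R^2 increment over all p! entry orders."""
    p = len(X_cols)
    shares = [0.0] * p
    orders = list(itertools.permutations(range(p)))
    for order in orders:
        used, prev = [], 0.0
        for j in order:
            used.append(j)
            cur = r_squared([X_cols[i] for i in used], y)
            shares[j] += cur - prev
            prev = cur
    return [s / len(orders) for s in shares]

n = 60
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.6 * a + random.gauss(0, 0.8) for a in x1]  # correlated with x1
x3 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * a + 1.0 * b + 0.5 * c + random.gauss(0, 1) for a, b, c in zip(x1, x2, x3)]

shares = lmg_shares([x1, x2, x3], y)
full = r_squared([x1, x2, x3], y)
print(shares, full)  # shares sum to the full-model R^2
```

The key property shown here is that the shares are nonnegative and decompose the full-model R² exactly, which is what makes the averaging defensible as a variance partition.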
12.
This article studies the heavy-traffic (HT) behavior of queueing networks with a single roving server. External customers arrive at the queues according to independent renewal processes and, after completing service, a customer either leaves the system or is routed to another queue. This type of customer routing in queueing networks arises very naturally in many application areas (production systems, computer and communication networks, maintenance, etc.). In these networks, the single most important characteristic of the system performance is oftentimes the path time, i.e., the total time spent in the system by an arbitrary customer traversing a specific path. The current article presents the first HT asymptotic for the path-time distribution in queueing networks with a roving server under general renewal arrivals. In particular, we provide a strong conjecture for the system's behavior under HT, extending the conjecture of Coffman et al. [8,9] to the roving server setting of the current article. By combining this result with novel light-traffic asymptotics, we derive an approximation of the mean path time for arbitrary values of the load and renewal arrivals. This approximation is not only highly accurate for a wide range of parameter settings, but is also exact in various limiting cases.
13.
Vikas Kumar. Communications in Statistics - Theory and Methods, 2017, 46(17): 8343-8354
In this article, the concept of cumulative residual entropy (CRE) given by Rao et al. (2004) is extended to the Tsallis entropy function and to its dynamic versions, both residual and past. We study some properties and characterization results for these generalized measures. In addition, we provide some characterization results for the first-order statistic based on the Tsallis survival entropy.
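The base measure being extended, the CRE of Rao et al. (2004), is -∫ F̄(x) log F̄(x) dx over [0, ∞). A numerical check for an exponential distribution, whose CRE works out to 1/λ; the Tsallis extension itself is not reproduced, and the integration bounds are illustrative:

```python
import math

def cre(sf, upper, steps=200000):
    """Cumulative residual entropy of Rao et al. (2004):
    -integral of sf(x) * ln(sf(x)) on [0, upper], by the trapezoidal rule."""
    h = upper / steps
    def g(x):
        s = sf(x)
        return -s * math.log(s) if s > 0.0 else 0.0
    total = 0.5 * (g(0.0) + g(upper))
    for i in range(1, steps):
        total += g(i * h)
    return total * h

lam = 2.0
numeric = cre(lambda x: math.exp(-lam * x), upper=30.0)
print(numeric)  # the CRE of Exp(lam) equals 1/lam
```

Replacing -s log s by the Tsallis kernel yields the generalized measures the article studies.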
14.
In this article, we study the moment-based test procedure for a mixture distribution in the natural exponential family with quadratic variance functions (NEF-QVF) proposed by Ning et al. (2009b) in the small-sample-size scenario. We derive an approximation to the null distribution of the test statistic by the Edgeworth expansion. Simulations are conducted for a binomial mixture distribution, which includes the situation corresponding to the detection of linkage in genetic analysis, with different sample sizes and family sizes at various significance levels. The simulation results show that our test performs reasonably well. We also apply the proposed method to real clinical data to verify the significant difference between two drug treatments. The critical values associated with a binomial mixture distribution are also provided.
15.
In this article, we consider two different shared frailty regression models under the assumption of the Gompertz distribution as the baseline. The gamma distribution is the most common assumption for the frailty distribution. To compare results with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit these two models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using Bayesian model selection criteria, and the better-fitting model is suggested for the acute leukemia data.
16.
In this article, we have evaluated the performance of different forecasters and tested the association between their performances for different pairs of variables. We have used three data sets of track records of professional U.S. economic forecasters participating in the Blue Chip consensus forecasting service (the data sets contain the root mean square errors (RMSE) of different forecasters for different years). To evaluate the performance of forecasters, we have covered three well-known tests, namely the usual F test (cf. Fisher (1923)), the Kruskal-Wallis test (cf. Kruskal and Wallis (1952)), and the extension of the median test (cf. Daniel (1990)). To test the association between the forecasters' performances for different pairs of variables, we have considered the Gini mean correlation coefficient rg1 (cf. Yitzhaki and Olkin (1991) and Yitzhaki (2003)), a modified rank correlation coefficient (cf. Zimmerman (1994)), and three modifications of the Spearman rank correlation coefficient. We have observed that different forecasters do not necessarily offer the same average performance. Moreover, evidence of association between two criteria does not always lead to the same decision. The outcomes of the study may help practitioners in selecting the best forecaster(s) for policymaking purposes.
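Of the association measures listed, the Spearman coefficient is the simplest to sketch: rank both RMSE series (averaging ranks for ties) and take the Pearson correlation of the ranks. A self-contained version with hypothetical RMSE tracks; the Gini and Zimmerman modifications are not reproduced:

```python
def ranks(v):
    """Average ranks (tied values share the mean of the ranks they occupy)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

rmse_a = [0.8, 1.1, 0.9, 1.4, 1.0]  # hypothetical RMSEs of forecaster A over 5 years
rmse_b = [0.7, 1.2, 1.0, 1.5, 1.1]  # hypothetical RMSEs of forecaster B
print(spearman(rmse_a, rmse_b))
```

A value near 1 would mean the two forecasters' good and bad years line up, which is the kind of association the article's modified coefficients refine.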
17.
Fernanda B. Rizzato, Roseli A. Leandro, Clarice G.B. Demétrio. Journal of Applied Statistics, 2016, 43(11): 2085-2109
In this paper, we consider a model for repeated count data, with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [17,18], is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12].
18.
Housila P. Singh. Communications in Statistics - Theory and Methods, 2013, 42(15): 2718-2730
This article addresses the problem of estimating the finite population variance using auxiliary information in simple random sampling. A ratio-cum-difference type class of estimators for the population variance has been suggested, with its properties derived under large-sample approximation. It has been shown that the suggested class of estimators is more efficient than the usual unbiased, difference, Das and Tripathi (1978), Isaki (1983), Singh et al. (1988), Kadilar and Cingi (2006), and other estimators/classes of estimators. In addition, we support this theoretical result with the aid of an empirical study.
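One of the classical benchmarks in this comparison, Isaki's (1983) ratio estimator of the population variance, rescales the sample variance of y by the known-to-sampled variance ratio of the auxiliary variable: t = s²_y (S²_x / s²_x). A toy sketch with hypothetical numbers:

```python
import statistics

# Toy population with y roughly proportional to the auxiliary x.
pop_x = [4, 6, 8, 10, 12, 14, 16, 18, 20, 22]
pop_y = [9, 13, 15, 21, 23, 30, 31, 37, 41, 43]
S2_x = statistics.variance(pop_x)  # known population variance of x
S2_y = statistics.variance(pop_y)  # target of estimation

sample_idx = [0, 3, 5, 8]          # a sample of size 4
s2_y = statistics.variance([pop_y[i] for i in sample_idx])
s2_x = statistics.variance([pop_x[i] for i in sample_idx])

t_ratio = s2_y * S2_x / s2_x       # Isaki (1983) ratio estimator of S2_y
print(s2_y, t_ratio, S2_y)
```

In this toy draw the ratio correction pulls the estimate well toward the true population variance; the article's class of estimators refines corrections of this kind.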
19.
In the present paper, we discuss algorithms for generating records when records are taken from a normal population. We propose three new generation algorithms, compare their efficiency, and find the most efficient algorithm (Algorithm 2.1). We then compare these algorithms with known generation algorithms presented in the work of Balakrishnan, So, and Zhu (2016).
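The paper's algorithms are not reproduced here, but two standard baselines illustrate what record generation involves: the naive scan that keeps each new maximum, and the classical representation R_n = F⁻¹(1 - exp(-(E₁+...+E_n))) with E_i iid Exp(1), which produces the n-th record with one quantile evaluation and no rejection:

```python
import math, random, statistics

random.seed(2016)

def normal_records_naive(n_draws):
    """Naive scheme: scan iid N(0,1) draws and keep each new maximum (upper record)."""
    records, current = [], float("-inf")
    for _ in range(n_draws):
        x = random.gauss(0.0, 1.0)
        if x > current:
            current = x
            records.append(x)
    return records

def normal_records_direct(k):
    """Classical representation R_n = F^{-1}(1 - exp(-(E_1 + ... + E_n))),
    E_i iid Exp(1): one normal-quantile evaluation per record."""
    nd = statistics.NormalDist()
    records, s = [], 0.0
    for _ in range(k):
        s += random.expovariate(1.0)
        records.append(nd.inv_cdf(1.0 - math.exp(-s)))
    return records

naive = normal_records_naive(10000)
direct = normal_records_direct(6)
print(naive, direct)
```

The naive scan needs ever more draws per successive record, which is why efficiency comparisons of generation algorithms matter.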
20.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used in analyzing gene expression data while controlling the FDR by using a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The introduction of the fudge factor to the test statistic aims at deflating large values of the test statistic that arise from small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power and the control of the FDR compared to the SAM procedure without it. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend our study to compare several methods for choosing the fudge factor in the modified t-type test statistics and use simulation studies to investigate the power and the control of the FDR of the considered methods.
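The adjusted statistic at issue has the form d_i = r_i / (s_i + s0), where r_i is the mean difference, s_i the per-gene standard error, and s0 the fudge factor. A toy sketch using the median of the per-gene standard errors as one common choice of s0; this is not the SAM package's exact recipe, and the data are hypothetical:

```python
import statistics

def sam_statistics(group1, group2, s0=None):
    """SAM-type statistics d_i = (mean1_i - mean2_i) / (se_i + s0); here s0
    defaults to the median per-gene standard error (one common convention).
    Setting s0 = 0 recovers the ordinary t-type statistic."""
    diffs, ses = [], []
    for g1, g2 in zip(group1, group2):
        n1, n2 = len(g1), len(g2)
        sp2 = ((n1 - 1) * statistics.variance(g1)
               + (n2 - 1) * statistics.variance(g2)) / (n1 + n2 - 2)
        ses.append((sp2 * (1.0 / n1 + 1.0 / n2)) ** 0.5)
        diffs.append(statistics.fmean(g1) - statistics.fmean(g2))
    if s0 is None:
        s0 = statistics.median(ses)  # the fudge factor
    return [r / (se + s0) for r, se in zip(diffs, ses)]

# Two hypothetical 'genes': one with a tiny variance, one with an ordinary variance.
grp1 = [[1.00, 1.01, 1.02], [5.0, 6.0, 7.0]]
grp2 = [[1.03, 1.04, 1.05], [8.0, 9.0, 10.0]]
d = sam_statistics(grp1, grp2)           # with the fudge factor
t = sam_statistics(grp1, grp2, s0=0.0)   # plain t-type statistics
print(t, d)  # the fudge factor deflates the small-variance gene's statistic
```

Both genes have identical plain t-statistics here, but the fudge factor pushes the small-variance gene's statistic toward zero, which is exactly the behavior whose costs and benefits Lin et al. (2008) and this article examine.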