20 similar documents found; search took 531 ms
1.
Hong Zhang, Communications in Statistics - Theory and Methods, 2013, 42(7): 1228-1241
Sa and Edwards (1993) first proposed the Multiple Comparisons with a Control problem in Response Surface Methodology. They provided an exact solution for one predictor variable and a conservative solution when the number of predictor variables is more than one. Merchant et al. (1998) improved the solution for the latter case. This article improves Merchant et al.'s solution for the case of rotatable designs in two predictor variables.
2.
Sharma (1977) and Aggarwal et al. (2006) considered the non-circular construction of first- and second-order balanced repeated measurements designs. Sharma et al. (2002) constructed circular first- and second-order balanced repeated measurements designs only for a class with parameters (v, p = 3n, n = v 2) and also showed their universal optimality. In this article, we consider the circular construction of first- and second-order balanced repeated measurements designs and strongly balanced repeated measurements designs using the method of cyclic shifts. Some new circular designs with parameters (v, p, n) for the cases p = v, p < v, and p > v are given.
3.
In this article, we find designs insensitive to the presence of an outlier in a diallel cross design setup for estimating a complete set of orthonormal contrasts among the effects of the general combining abilities of a set of parental lines. The criterion of robustness, suggested by Mandal (1989) in block design setup and used by Biswas (2012) in treatment-control setup, is adapted here. Complete diallel cross designs, suggested by Gupta and Kageyama (1994), and partial diallel cross designs, suggested by Gupta et al. (1995) and Mukerjee (1997), are found to be robust under certain conditions.
4.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The fudge factor is introduced into the test statistic to deflate large test statistics caused by small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and use simulation studies to investigate the power and the FDR control of the considered methods.
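As an illustration of the fudge-factor adjustment discussed above, here is a minimal sketch of a SAM-type modified t-statistic. The function name and the choice of s0 as a quantile of the gene-wise standard errors are illustrative assumptions; SAM itself tunes s0 to minimize the coefficient of variation of the statistic across genes.

```python
import numpy as np

def sam_like_statistic(x, y, s0_quantile=0.5):
    """Modified t-type statistic d_i = (mean_x - mean_y) / (s_i + s0).

    x, y: (genes, samples) arrays for the two conditions.
    The fudge factor s0 is taken here as a quantile of the gene-wise
    pooled standard errors s_i (a simplification of SAM's choice).
    """
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled_var = ((x.var(axis=1, ddof=1) * (nx - 1) +
                   y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2))
    s = np.sqrt(pooled_var * (1.0 / nx + 1.0 / ny))   # gene-wise standard error
    s0 = np.quantile(s, s0_quantile)                  # fudge factor
    return diff / (s + s0)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(1000, 5))   # synthetic null data
y = rng.normal(0.0, 1.0, size=(1000, 5))
d = sam_like_statistic(x, y)
```

With s0 > 0, genes with tiny standard errors are deflated relative to the plain t-statistic, which is the motivation discussed in the abstract.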
5.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3198-3210
The randomized response (RR) technique with two decks of cards proposed by Odumade and Singh (2009) can always be made more efficient than the RR techniques proposed by Warner (1965), Mangat and Singh (1990), and Mangat (1994) by adjusting the proportion of cards in the decks. The method of Odumade and Singh (2009) is, however, limited to simple random sampling with replacement (SRSWR). In this article, the Odumade and Singh strategy is generalized to complex survey designs and a wider class of estimators. The results of Odumade and Singh (2009) can be derived from the proposed method as a special case.
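A minimal simulation of the Warner (1965) mechanism referenced above may clarify how RR estimation works: each respondent answers the direct question with probability P and its complement otherwise, and the prevalence is recovered from the observed proportion of "yes" answers. The function name and settings are illustrative.

```python
import numpy as np

def warner_simulate(pi, P, n, rng):
    """Simulate Warner's (1965) randomized response and estimate pi.

    P(yes) = P*pi + (1 - P)*(1 - pi), so the unbiased estimator is
    pi_hat = (lam_hat - (1 - P)) / (2P - 1), requiring P != 0.5.
    """
    member = rng.random(n) < pi          # true (hidden) status
    direct = rng.random(n) < P           # which card was drawn
    yes = np.where(direct, member, ~member)
    lam_hat = yes.mean()                 # observed proportion of 'yes'
    return (lam_hat - (1.0 - P)) / (2.0 * P - 1.0)

rng = np.random.default_rng(1)
pi_hat = warner_simulate(pi=0.3, P=0.7, n=200_000, rng=rng)
```

The two-deck and complex-design extensions discussed in the abstract build on this same unbiasedness argument.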
6.
N. K. Mandal, Communications in Statistics - Theory and Methods, 2013, 42(10): 1565-1575
In a mixture experiment the measured response is assumed to depend only on the relative proportions of the ingredients or components present in the mixture. Scheffé (1958, 1963) first systematically considered this problem and introduced different models and designs suitable for such situations. Optimum designs for the estimation of the parameters of different mixture models are available in the literature. The problem of estimating the optimum proportions of mixture components is of great practical importance. Pal and Mandal (2006, 2007) attempted to find a solution to this problem by adopting a pseudo-Bayesian approach and using the trace criterion. Subsequently, Pal and Mandal (2008) solved the problem using a minimax criterion. In this article, the deficiency criterion due to Chatterjee and Mandal (1981) is used as a measure for comparing the performance of competing designs.
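To make the Scheffé models concrete, the sketch below fits the canonical quadratic mixture model on a {3,2} simplex-lattice design, which saturates the model exactly; the coefficient values are illustrative, not taken from the article.

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic_design_matrix(X):
    """Design matrix of the Scheffe canonical quadratic mixture model:
    E(y) = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j, with sum_i x_i = 1
    (no intercept, by the mixture constraint)."""
    q = X.shape[1]
    cols = [X[:, i] for i in range(q)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(q), 2)]
    return np.column_stack(cols)

# {3,2} simplex-lattice: pure blends plus binary 50:50 blends
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5]], dtype=float)
beta_true = np.array([1.0, 2.0, 3.0, 4.0, -2.0, 0.5])  # illustrative values
F = scheffe_quadratic_design_matrix(X)
y = F @ beta_true                                      # noiseless responses
beta_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
```

Because the {3,2} lattice has exactly as many points as model parameters, least squares recovers the coefficients exactly from noiseless data.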
7.
In experimental design for response surface analysis, it is sometimes of interest to estimate the difference of responses at two points. If differences at points close together are involved, a design that reliably estimates the slope of the response surface is important. In particular, Hader and Park (1978) suggested the concept of slope-rotatability and studied slope-rotatable central composite designs. Until now, many response surface designs, including central composite designs, have been suggested for fitting second-order response surface models. However, we often need to fit third-order polynomial regression models. In this article, we suggest extended central composite designs (ECCDs) to fit third-order models and find necessary and sufficient conditions for slope-rotatability over all directions in third-order polynomial models.
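For reference, the sketch below generates a standard second-order central composite design with the rotatable axial distance; this is the classical CCD, not the third-order ECCD construction proposed in the article, whose details are not given here.

```python
import numpy as np
from itertools import product

def central_composite_design(k, alpha=None, n_center=1):
    """Points of a standard (second-order) central composite design:
    2^k factorial corners, 2k axial points at distance alpha, and
    center runs. alpha defaults to the rotatable value (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatability condition
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = alpha           # +alpha on axis i
        axial[2 * i + 1, i] = -alpha      # -alpha on axis i
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

D = central_composite_design(k=3, n_center=2)   # 8 + 6 + 2 = 16 runs
```

For k = 3 the rotatable axial distance is 8^(1/4) ≈ 1.682, which is what the axial rows carry.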
8.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics - Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to systematically follow the survivors, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We present inference procedures based on current status data and apply them to our motivating example, with interesting findings.
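The current status setting mentioned above has a well-known nonparametric estimator: the NPMLE of the cdf at the sorted monitoring times is the isotonic (pool-adjacent-violators) regression of the indicators 1{T <= C} on C. A minimal sketch, with illustrative simulation settings, not the article's procedures:

```python
import numpy as np

def current_status_npmle(times, indicators):
    """NPMLE of the cdf F from current status data: isotonic (PAVA)
    regression of the indicators 1{T <= C} on the monitoring times C."""
    order = np.argsort(times)
    y = np.asarray(indicators, dtype=float)[order]
    blocks = []  # each block is [sum, count]; block mean = sum / count
    for v in y:
        blocks.append([v, 1])
        # merge while the previous block mean exceeds the current one
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fitted = np.concatenate([[s / n] * n for s, n in blocks])
    return np.sort(times), fitted

# Illustration: event times T ~ Exp(1), monitoring times C ~ Uniform(0, 3)
rng = np.random.default_rng(5)
c = rng.uniform(0.0, 3.0, 5000)
delta = rng.exponential(1.0, 5000) <= c
c_sorted, f_hat = current_status_npmle(c, delta)
```

The output is a nondecreasing step function in [0, 1], from which fixed-term rates can be read off at 5 or 10 years.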
9.
Soo Hak Sung, Communications in Statistics - Theory and Methods, 2013, 42(9): 1663-1674
A complete convergence theorem for an array of rowwise independent random variables was established by Sung et al. (2005). This result has been generalized and extended by Kruglov et al. (2006) and Chen et al. (2007). In this article, we extend the results of Sung et al. (2005), Kruglov et al. (2006), and Chen et al. (2007) to an array of dependent random variables satisfying Hoffmann-Jørgensen type inequalities.
10.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
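The AF quantity in question can be illustrated with Levin's classical population formula; treating the odds ratio as a stand-in for the relative risk is an assumption (reasonable for rare outcomes), and the function name is illustrative rather than the article's notation.

```python
def attributable_fraction(odds_ratio, prevalence):
    """Levin-type population attributable fraction,
    AF = p(R - 1) / (1 + p(R - 1)),
    with the odds ratio standing in for the relative risk R
    (a common approximation when the outcome is rare)."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# e.g. OR = 3 and exposure prevalence 25% gives AF = 1/3
af = attributable_fraction(odds_ratio=3.0, prevalence=0.25)
```

The abstract's adaptation replaces the epidemiological "risk factor" with the heritability and shared-environment components of the twin model.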
11.
Tony Vangeneugden, Geert Verbeke, Clarice G.B. Demétrio, Journal of Applied Statistics, 2011, 38(2): 215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which generalizes at the same time the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. Beyond the general situation, some important special cases such as exchangeable clustered outcomes are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
12.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^{-1/2}) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
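For context, the memory parameter d can be estimated by the familiar Geweke-Porter-Hudak log-periodogram regression, sketched below on white noise (true d = 0). This is an illustrative alternative estimator; the article's higher-order Lieberman-Phillips refinements are not reproduced here.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke--Porter-Hudak log-periodogram estimate of the memory
    parameter d: regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2))
    over the first m Fourier frequencies; the slope estimates d."""
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))              # a common bandwidth choice
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - np.mean(x))
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = -np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return slope

rng = np.random.default_rng(2)
d_hat = gph_estimate(rng.standard_normal(4096))  # white noise: true d = 0
```

For a short-memory series the estimate fluctuates around 0 with standard deviation of order pi / sqrt(24 m), which is about 0.08 here.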
13.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). With bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 598, Ann. Statist.), Quale et al. (2006, JASA), etc., that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but not X2, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
14.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some other competing estimators. A simulation study is carried out comparing with the recent estimator based on Poisson weights (Chaubey et al., 2011), showing that the two estimators have comparable finite-sample global as well as local behavior.
15.
In this article, we obtain a dependence measure for the generalized Farlie-Gumbel-Morgenstern (FGM) family in view of Kochar and Gupta (1987) and then compare this measure with Spearman's rho and Kendall's tau in the FGM family. Moreover, we evaluate the empirical power of the class of distribution-free tests proposed by Kochar and Gupta (1987, 1990), based on the exact distribution of a U-statistic. This is derived via a simulation study for samples of sizes n = 6, 8, 10, 12, 16, and 20. We also compare our simulation results with those obtained by Amini et al. (2010) and Güven and Kotz (2008).
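The comparison with Spearman's rho can be checked by simulation: for the standard bivariate FGM copula C(u, v) = uv[1 + theta(1-u)(1-v)], Spearman's rho equals theta/3. The sketch below samples by inverting the conditional cdf of V given U; function names and settings are illustrative, and this uses the standard (not generalized) FGM family.

```python
import numpy as np

def fgm_sample(theta, n, rng):
    """Sample (U, V) from the FGM copula C(u,v) = uv[1 + theta(1-u)(1-v)],
    |theta| <= 1, by inverting the conditional cdf of V given U = u:
    w = v + a(v - v^2) with a = theta(1 - 2u), i.e. a v^2 - (1+a) v + w = 0."""
    u = rng.random(n)
    w = rng.random(n)
    a = theta * (1.0 - 2.0 * u)
    # root of the quadratic lying in [0, 1]; v = w in the limit a -> 0
    v = np.where(np.abs(a) < 1e-12, w,
                 ((1.0 + a) - np.sqrt((1.0 + a) ** 2 - 4.0 * a * w))
                 / (2.0 * a))
    return u, v

def spearman_rho(x, y):
    """Sample Spearman correlation via Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
u, v = fgm_sample(theta=0.8, n=200_000, rng=rng)
rho = spearman_rho(u, v)   # theory: rho = theta / 3
```

Kendall's tau for the same family is 2*theta/9, so the FGM family only covers weak dependence, which is why power comparisons at small n are of interest.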
16.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a sound theoretical and practical way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow one to achieve satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
17.
Magda (1980) and Hedayat (1981) first considered the construction of circular strongly balanced repeated measurements designs. Sen and Mukerjee (1987) and Roy (1988) considered the optimality and existence of circular strongly balanced repeated measurements designs based on the method of differences and the Hamiltonian decomposition of the lexicographic product of two graphs. In this article, we consider the construction of circular strongly balanced repeated measurements designs using the recently proposed method of cyclic shifts, and propose some new designs for p < v.
18.
Suchandan Kayal, Communications in Statistics - Theory and Methods, 2018, 47(20): 4938-4957
Several probability distributions, such as the power-Pareto distribution (see Gilchrist 2000; Hankin and Lee 2006), various forms of lambda distributions (see Ramberg and Schmeiser 1974; Freimer et al. 1988), and the Govindarajulu distribution (see Nair, Sankaran, and Vineshkumar 2012), do not have manageable distribution functions, though they have tractable quantile functions. Hence, an analytical study of the Chernoff distance of two random variables associated with these distributions via traditional distribution-function-based tools becomes difficult. To simplify this, in this paper we introduce a quantile-based Chernoff distance for (left or right) truncated random variables and study its various properties. Some useful bounds as well as characterization results are obtained.
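Inverse-transform sampling shows why a tractable quantile function suffices even without a closed-form cdf: if U is uniform on (0, 1), then Q(U) has the target distribution. The power-Pareto parameterization below (a power factor times a Pareto-type factor) is an assumption for illustration and should be checked against Hankin and Lee (2006); the default parameter values are arbitrary.

```python
import numpy as np

def power_pareto_quantile(u, c=1.0, lam1=0.5, lam2=0.5):
    """Assumed power-Pareto quantile function
    Q(u) = c * u**lam1 / (1 - u)**lam2,  0 <= u < 1,
    combining a power factor u**lam1 and a Pareto-type factor
    (1 - u)**(-lam2); there is no closed-form cdf to invert."""
    u = np.asarray(u, dtype=float)
    return c * u ** lam1 / (1.0 - u) ** lam2

# Inverse-transform sampling: plug uniforms into Q
rng = np.random.default_rng(4)
x = power_pareto_quantile(rng.random(100_000))
```

Quantile-based functionals, like the Chernoff distance studied in the article, work directly with Q(u) and so avoid the intractable cdf entirely.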
19.
Mike G. Tsionas, Communications in Statistics - Theory and Methods, 2018, 47(12): 3022-3028
The properties of high-dimensional Bingham distributions have been studied by Kume and Walker (2014). Fallaize and Kypraios (2016) propose Bayesian inference for the Bingham distribution, using developments in Bayesian computation for distributions with doubly intractable normalizing constants (Møller et al. 2006; Murray, Ghahramani, and MacKay 2006). However, they rely heavily on two Metropolis updates that need to be tuned. In this article, we instead propose model selection via the marginal likelihood.
20.
This paper addresses a generalization of the bivariate Cauchy distribution discussed by Fang et al. (1990), derived from a trivariate normal distribution with a general correlation matrix. We obtain explicit expressions for the joint distribution function and joint density function, and show that they reduce in a special case to the corresponding expressions of Fang et al. (1990). Finally, we show that this generalized distribution is useful in determining the orthant probability of a bivariate skew-normal distribution of Azzalini and Dalla Valle (1996).