20 similar documents found; search time 31 ms
1.
Pao-Sheng Shen, Communications in Statistics - Theory and Methods, 2017, 46(4): 1916-1926
A complication in analyzing tumor data is that tumors detected in a screening program tend to be slowly progressive ones, a phenomenon known as left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects share the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh's (2008) approach can be extended to the case in which each subject has a subject-specific growth function. In this article, we present a general framework for analyzing data from cancer screening studies under the linear transformation model, which includes Cox's model as a special case, and develop the corresponding estimation procedures. A simulation study demonstrates the potential usefulness of the proposed estimators.
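For orientation, the linear transformation model referred to above takes the following standard form (notation chosen here for illustration):

```latex
H(T) = -\beta^{\top} Z + \varepsilon ,
```

where H is an unspecified strictly increasing function, Z is the covariate vector, and the error ε has a known distribution: the extreme-value choice for ε recovers Cox's proportional hazards model, while the standard logistic choice gives the proportional odds model.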
2.
In this article, necessary conditions for comparing order statistics from distributions with regularly varying tails are discussed in terms of various stochastic orders. A necessary and sufficient condition for stochastically comparing the tail behaviors of order statistics is derived. The main results generalize and recover some results in Kleiber (2002, 2004). Extensions to coherent systems are mentioned as well.
3.
Qunying Wu, Communications in Statistics - Theory and Methods, 2017, 46(8): 3667-3675
Let X1, X2, … be stationary standardized Gaussian random fields. The almost sure limit theorem for the maxima of stationary Gaussian random fields is established. Our results extend and improve those of Csáki and Gonchigdanzan (2002) and Choi (2010).
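For context, an almost sure limit theorem for maxima in the one-parameter (sequence) case, of the type considered by Csáki and Gonchigdanzan (2002), has the following form (stated here for orientation; the random-field version uses multidimensional indices):

```latex
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\,
\mathbf{1}\bigl\{ a_k (M_k - b_k) \le x \bigr\} = \exp\!\bigl(-e^{-x}\bigr)
\quad \text{a.s.},
```

where M_k = max_{i ≤ k} X_i and a_k, b_k are the usual Gumbel normalizing constants.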
4.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed, following the asymptotic approach of Mitchell et al. (1994), for the more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then established. A lower bound for the criterion is also obtained, and numerical results show that this bound is tight. The established results generalize those of Yue et al. (2011) from symmetrical to asymmetrical U-type designs.
5.
This article proposes an asymptotic expansion for the Studentized linear discriminant function with two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality. The result derived by Anderson (1973) plays an important role in choosing the cut-off point that controls the probabilities of misclassification. This article extends Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulation are also presented.
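For reference, the classification statistic underlying this line of work is Anderson's W, written here in its standard complete-data form (the article's version accommodates the missing-data structure):

```latex
W(x) = \Bigl( x - \tfrac{1}{2}(\bar{x}_1 + \bar{x}_2) \Bigr)^{\top}
S^{-1} (\bar{x}_1 - \bar{x}_2),
```

where x̄1 and x̄2 are the sample mean vectors and S is the pooled sample covariance matrix; an observation x is assigned to the first population when W(x) exceeds a cut-off c, and the expansion determines c so as to control the misclassification probabilities.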
6.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for "slice-comonotonic" functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained some Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation. We give several different sets of assumptions under which the Fubini theorem holds in this framework.
7.
The main purpose of the present work is to introduce and investigate a simple kernel procedure, based on marginal integration, that estimates the regression function for stationary and ergodic continuous-time processes in the setting of the additive model introduced by Stone (1985). We obtain the uniform almost sure consistency, with exact rate, and the asymptotic normality of the kernel-type estimators of the components of the additive model; these asymptotic properties are established, under mild conditions, by means of martingale approaches. Finally, a general notion of bootstrapped additive components, constructed by exchangeably weighting the sample, is presented.
8.
Amir T. Payandeh Najafabadi, Fatemeh Atatalab, Maryam Omidi Najafabadi, Communications in Statistics - Theory and Methods, 2017, 46(1): 415-426
Credibility formulas have been developed in many fields of actuarial science. Building on Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson-gamma distributions with respect to several loss functions. A comparison study is also given.
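The classical (Bühlmann) credibility formula that this line of work generalizes is, in standard actuarial notation (given here for orientation; Payandeh's factor is a refinement of Z):

```latex
P = Z\,\bar{X} + (1 - Z)\,\mu, \qquad Z = \frac{n}{n + k},
```

where X̄ is the policyholder's observed mean claim, μ is the portfolio (collective) mean, n is the number of observations, and k is the ratio of the expected process variance to the variance of the hypothetical means.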
9.
In this article, linear models with measurement error in both the response and the covariates are considered. Following Shalabh et al. (2007, 2009), we propose several restricted estimators for the regression coefficients. The consistency and asymptotic normality of the restricted estimators are established. Furthermore, we discuss the superiority of the restricted estimators over the unrestricted ones under the Pitman closeness criterion. We also develop several variance estimators and establish their asymptotic distributions. Wald-type statistics are constructed for testing the linear restrictions. Finally, Monte Carlo simulations illustrate the finite-sample properties of the proposed estimators.
10.
Sunah Chung, Communications in Statistics - Theory and Methods, 2017, 46(14): 6899-6908
Following Godambe (1985), one can obtain the Godambe optimum estimating functions (EFs), each of which is optimum (in the sense of maximizing the Godambe information) within a linear class of EFs. Quasi-likelihood scores can be viewed as special cases of the Godambe optimum EFs (see, for instance, Hwang and Basawa, 2011). This paper concerns conditionally heteroscedastic time series with unknown likelihood. Power transformations of the innovations are introduced to construct a class of Godambe optimum EFs. A "best" power transformation for the Godambe innovation is then obtained by maximizing the "profile" Godambe information. To illustrate, the Korea Stock Price Index is analyzed, for which the absolute-value transformation and the square transformation are recommended under the ARCH(1) and GARCH(1,1) models, respectively.
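For readers unfamiliar with the model class, a GARCH(1,1) process sets the conditional variance to σ_t² = ω + α ε_{t-1}² + β σ_{t-1}². A minimal simulation sketch follows; the parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1) series: eps_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.empty(n)
    sigma2 = np.empty(n)
    # start the recursion at the unconditional variance omega / (1 - alpha - beta)
    sigma2[0] = omega / (1.0 - alpha - beta)
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2

eps, sigma2 = simulate_garch11(100_000)
# the sample variance should be near the unconditional variance, here 1.0
```

With α + β = 0.9 the process is covariance stationary, so the long-run sample variance of the simulated innovations hovers around ω/(1 − α − β).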
11.
Sample size estimation for comparing rates of change in two-arm repeated measurement studies has been investigated by many researchers. In contrast, the literature has paid relatively little attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004) and Zhang and Ahn (2013) presented sample size formulas for comparing rates of change and time-averaged responses in multi-arm trials using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an example from an epilepsy clinical trial.
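As a simple point of reference for this style of calculation, a crude normal-approximation sample size for comparing two Poisson rates can be sketched as follows. This is a textbook approximation, not the GEE-based formula proposed in the article:

```python
from math import ceil
from statistics import NormalDist

def poisson_rate_sample_size(lam1, lam2, alpha=0.05, power=0.80):
    """Per-group sample size for detecting a difference between two Poisson
    rates lam1 and lam2, via the normal approximation
    n = (z_{1-alpha/2} + z_{power})^2 * (lam1 + lam2) / (lam1 - lam2)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n = (z_a + z_b) ** 2 * (lam1 + lam2) / (lam1 - lam2) ** 2
    return ceil(n)

n = poisson_rate_sample_size(1.0, 1.5)  # 79 subjects per arm
```

Larger rate differences require fewer subjects, as expected; the GEE approach of the article refines this kind of calculation to handle repeated counts, within-subject correlation, and missing data.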
12.
Gauss M. Cordeiro, Marcelo Bourguignon, Edwin M. M. Ortega, Thiago G. Ramires, Communications in Statistics - Theory and Methods, 2018, 47(5): 1050-1070
The construction of wider families of continuous distributions has recently attracted applied statisticians, owing to the analytical facilities available in programming software for the easy computation of special functions. We study some general mathematical properties of the log-gamma-generated (LGG) family defined by Amini, MirMostafaee, and Ahmadi (2014), which generalizes the gamma-generated class pioneered by Ristić and Balakrishnan (2012). We present some of its special models and derive explicit expressions for the ordinary and incomplete moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves, Shannon entropy, Rényi entropy, reliability, and order statistics. Models in this family are compared with nested and non-nested models. Further, we propose and study a new LGG family regression model. We demonstrate that the new regression model can be applied to censored data, since it represents a parametric family of models, and can therefore be used effectively in the analysis of survival data. We show that the proposed models can provide consistently better fits in some applications to real data sets.
13.
This study considers efficient mixture designs for the approximation of the response surface of a quantile regression model, which is a second-degree polynomial, by a first-degree polynomial in the proportions of q components. Instead of the least squares estimation of traditional regression analysis, the objective function in quantile regression models is a weighted sum of absolute deviations, and the least absolute deviations (LAD) estimation technique should be used (Bassett and Koenker, 1982; Koenker and Bassett, 1978). Therefore, standard optimal mixture designs for least squares estimation, such as the D-optimal or A-optimal designs, are not appropriate. This study explores mixture designs that minimize the bias between the approximating first-degree polynomial and the second-degree polynomial response surface under LAD estimation. In contrast to the standard optimal mixture designs for least squares estimation, the efficient designs may contain elementary centroid design points of degree higher than two. An example of a portfolio with five assets illustrates the use of the proposed efficient mixture designs in determining the marginal contribution to risk of the individual assets in the portfolio.
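LAD estimation minimizes the sum of absolute residuals rather than their squares, which is what makes it robust to outliers. A minimal sketch using iteratively reweighted least squares, a standard numerical device for LAD fitting (this illustrates the estimator only, not the design construction of the article):

```python
import numpy as np

def lad_fit(x, y, n_iter=100, delta=1e-8):
    """Least absolute deviations fit of y = a + b*x via iteratively
    reweighted least squares with weights 1 / max(|r_i|, delta)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.sqrt(1.0 / np.maximum(np.abs(r), delta))
        beta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return beta  # (intercept, slope)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 4.0, 6.0, 100.0])  # one gross outlier at x = 4
a, b = lad_fit(x, y)
# the LAD line tracks the four collinear points (slope 2, intercept 0),
# whereas an OLS fit would be dragged far toward the outlier
```

The downweighting of large residuals is what lets the fit ignore the single outlier; OLS on the same data gives a slope above 20.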
14.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys and probability matching priors. The intervals obtained from the Jeffreys prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test, using Bayesian methods, are compared to those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys, probability matching, and two other priors are used.
15.
In this article, we establish the complete moment convergence of a moving-average process generated by a class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition. On the one hand, we give a correct proof for the case p = 1 in Ko (2015); on the other hand, we also consider the case αp = 1, which was not considered in Ko (2015). The results obtained in this article generalize corresponding results for some dependent sequences.
16.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD weighting method provides better clustering performance in terms of the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
17.
We revisit the generalized midpoint frequency polygons of Scott (1985) and the edge frequency polygons of Jones et al. (1998) and Dong and Zheng (2001). These estimators are linear interpolants of the appropriate values above the bin centers or edges, those values being weighted averages of the heights of r (r ∈ ℕ) neighboring histogram bins. We propose a simple kernel evaluation method to generate weights for the binned values. The proposed kernel method can provide near-optimal weights in the sense of minimizing the asymptotic mean integrated squared error. In addition, we prove that the discrete uniform weights minimize the variance of the generalized frequency polygon under some mild conditions. Analogous results are obtained for the generalized frequency polygon based on linearly prebinned data. Finally, we use two examples and a simulation study to compare the generalized midpoint and edge frequency polygons.
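In its simplest (r = 1) form, a midpoint frequency polygon linearly interpolates the histogram density heights above the bin centers. A minimal numpy sketch, illustrative only and without the weighted generalization studied in the article:

```python
import numpy as np

def midpoint_frequency_polygon(data, bins=10):
    """Return (polygon, mids, heights) for the r = 1 midpoint frequency
    polygon: linear interpolation of histogram density heights above
    the bin centers."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    polygon = lambda x: np.interp(x, mids, heights)
    return polygon, mids, heights

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, 10_000)
fp, mids, heights = midpoint_frequency_polygon(data)
# at the bin centers the polygon reproduces the histogram density heights,
# which for uniform(0, 1) data are all close to the true density 1.0
```

The generalized versions replace each height by a weighted average of r neighboring bin heights before interpolating, which is where the choice of weights (kernel-generated versus discrete uniform) enters.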
18.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE). It is based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). This new information measure is a "length-biased", shift-dependent measure that assigns larger weights to larger values of the random variable. Properties of the WCRE and a formula relating the WCRE to the weighted Shannon entropy are given. Related results in reliability theory are covered. Our results include inequalities and various bounds on the WCRE. The conditional WCRE and some of its properties are discussed. An empirical WCRE is proposed to estimate this new information measure, and its strong consistency and a central limit theorem are established.
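Concretely, for a non-negative random variable X with survival function F̄, the CRE of Rao et al. (2004) and its weighted version can be written as:

```latex
\mathcal{E}(X) = -\int_{0}^{\infty} \bar{F}(x) \log \bar{F}(x)\, dx,
\qquad
\mathcal{E}^{w}(X) = -\int_{0}^{\infty} x\, \bar{F}(x) \log \bar{F}(x)\, dx,
```

the factor x in the integrand being the weight that produces the "length-biased" shift dependence.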
19.
The two-period crossover design is one of the most commonly used designs in clinical trials, but the estimation of the treatment effect is complicated by the possible presence of a carryover effect. It is known that ignoring an existing carryover effect can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages. First, a preliminary test for a carryover effect is conducted. If the carryover effect is significant, the analysis is based only on data from period one; otherwise, it is based on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), using a mixture of two models: one with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach for the two-period crossover design that is also based on a mixture model, but uses the commonly recommended Zellner-Siow g-prior. We provide simulation studies and a real-data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
20.
Mi-Hwa Ko, Communications in Statistics - Theory and Methods, 2014, 43(17): 3726-3732
In this paper, we prove the complete convergence of weighted sums of negatively associated random variables with multidimensional indices. The main result generalizes Theorem 2.1 of Kuczmaszewska and Lagodowski (2011) to the case of weighted sums.