Similar articles (20 results)
1.
A complication in analyzing tumor data is that tumors detected in a screening program tend to be slowly progressive, the so-called left-truncated sampling inherent in screening studies. Under the assumption that all subjects share the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh's (2008) approach can be extended to the case in which each subject has a specific growth function. In this article, we present a general framework for the analysis of data from cancer screening studies under the linear transformation model, which includes Cox's model as a special case, and develop the corresponding estimation procedures. A simulation study demonstrates the potential usefulness of the proposed estimators.
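The mechanics of left truncation in a Cox model can be seen in how the risk sets are built: a subject enters the risk set only after its entry (detection) time. The sketch below is a generic illustration of delayed-entry risk sets in the partial likelihood, not the estimation procedure of the cited papers; all function and variable names are illustrative.

```python
import math

def risk_set(t, entry, time):
    """Indices at risk at failure time t under left truncation:
    enrolled by t (entry[j] <= t) and not yet failed/censored (time[j] >= t)."""
    return [j for j in range(len(time)) if entry[j] <= t <= time[j]]

def log_partial_likelihood(beta, entry, time, event, x):
    """Cox log partial likelihood (single covariate) with delayed-entry risk sets."""
    ll = 0.0
    for i in range(len(time)):
        if event[i]:
            r = risk_set(time[i], entry, time)
            denom = sum(math.exp(beta * x[j]) for j in r)
            ll += beta * x[i] - math.log(denom)
    return ll
```

With `beta = 0` the log partial likelihood reduces to minus the sum of the log risk-set sizes at the event times, which gives a quick sanity check on the truncation handling.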

2.
In this article, necessary conditions for comparing order statistics from distributions with regularly varying tails are discussed in terms of various stochastic orders. A necessary and sufficient condition for stochastically comparing the tail behaviors of order statistics is derived. The main results generalize and recover some results in Kleiber (2002, 2004). Extensions to coherent systems are mentioned as well.

3.
Let X1, X2, … be a sequence of stationary standardized Gaussian random fields. The almost sure limit theorem for the maxima of stationary Gaussian random fields is established. Our results extend and improve those in Csáki and Gonchigdanzan (2002) and Choi (2010).

4.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed following the asymptotic approach of Mitchell et al. (1994) for the more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then developed. A lower bound for the criterion is also obtained, and numerical results show that this lower bound is tight. The established results generalize those of Yue et al. (2011) from symmetrical to asymmetrical U-type designs.

5.
This article proposes an asymptotic expansion for the Studentized linear discriminant function using two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality. The result derived by Anderson (1973) plays an important role in deciding the cut-off point that controls the probabilities of misclassification. This article extends Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulation are presented.

6.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for "slice-comonotonic" functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained some Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation, and we give several different sets of assumptions under which a Fubini theorem holds in this framework.

7.
The main purpose of the present work is to introduce and investigate a simple kernel procedure based on marginal integration that estimates the regression function for stationary and ergodic continuous-time processes in the setting of the additive model introduced by Stone (1985). We obtain the uniform almost sure consistency, with exact rate, and the asymptotic normality of the kernel-type estimators of the components of the additive model. These asymptotic properties are obtained, under mild conditions, by means of martingale approaches. Finally, a general notion of bootstrapped additive components, constructed by exchangeably weighting the sample, is presented.

8.
The credibility formula has been developed in many fields of actuarial science. Building upon Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson–gamma distributions with respect to several loss functions. A comparison study is also given.
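For readers unfamiliar with credibility premiums, the classical Bühlmann form conveys the basic idea: the premium is a convex combination of the individual experience mean and the collective mean. This is the textbook formula, not Payandeh's (2010) factor for zero-inflated Poisson–gamma models; the names are illustrative.

```python
def credibility_premium(x_bar, mu, n, k):
    """Classical Buhlmann credibility premium.

    Z = n / (n + k), premium = Z * x_bar + (1 - Z) * mu,
    where x_bar is the individual mean over n periods, mu the collective
    mean, and k the ratio of expected process variance to the variance
    of the hypothetical means."""
    z = n / (n + k)
    return z * x_bar + (1 - z) * mu
```

As experience accumulates (n grows), Z tends to 1 and the premium shifts from the collective mean toward the individual's own experience.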

9.
In this article, linear models with measurement error in both the response and the covariates are considered. Following Shalabh et al. (2007, 2009), we propose several restricted estimators for the regression coefficients. The consistency and asymptotic normality of the restricted estimators are established. Furthermore, we discuss the superiority of the restricted estimators over unrestricted estimators under the Pitman closeness criterion. We also develop several variance estimators and establish their asymptotic distributions. Wald-type statistics are constructed for testing the linear restrictions. Finally, Monte Carlo simulations illustrate the finite-sample properties of the proposed estimators.

10.
Following Godambe (1985), one can obtain Godambe-optimum estimating functions (EFs), each of which is optimum (in the sense of maximizing the Godambe information) within a linear class of EFs. Quasi-likelihood scores can be viewed as special cases of Godambe-optimum EFs (see, for instance, Hwang and Basawa, 2011). This paper concerns conditionally heteroscedastic time series with unknown likelihood. Power transformations of the innovations are introduced to construct a class of Godambe-optimum EFs. A "best" power transformation for the Godambe innovation is then obtained by maximizing the "profile" Godambe information. To illustrate, the Korea Stock Price Index is analyzed, for which the absolute-value transformation and the square transformation are recommended under the ARCH(1) and GARCH(1,1) models, respectively.

11.
Sample size estimation for comparing rates of change in two-arm repeated measurement studies has been investigated by many researchers. In contrast, the literature has paid relatively little attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004) and Zhang and Ahn (2013) have presented sample size formulas for comparing rates of change and time-averaged responses in multi-arm trials using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated with an epilepsy clinical trial example.

12.
The construction of wider families of continuous distributions has recently attracted applied statisticians, owing to the analytical facilities available in programming software for easy computation of special functions. We study some general mathematical properties of the log-gamma-generated (LGG) family defined by Amini, MirMostafaee, and Ahmadi (2014). It generalizes the gamma-generated class pioneered by Ristić and Balakrishnan (2012). We present some of its special models and derive explicit expressions for the ordinary and incomplete moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves, Shannon entropy, Rényi entropy, reliability, and order statistics. Models in this family are compared with nested and non-nested models. Further, we propose and study a new LGG family regression model. We demonstrate that the new regression model can be applied to censored data, since it represents a parametric family of models, and can therefore be used effectively in the analysis of survival data. We show that the proposed models can provide consistently better fits in some applications to real data sets.

13.
This study considers efficient mixture designs for approximating the response surface of a quantile regression model, a second-degree polynomial, by a first-degree polynomial in the proportions of q components. Instead of the least squares estimation of traditional regression analysis, the objective function in quantile regression models is a weighted sum of absolute deviations, and the least absolute deviations (LAD) estimation technique should be used (Bassett and Koenker, 1982; Koenker and Bassett, 1978). Therefore, the standard optimal mixture designs for least squares estimation, such as the D-optimal or A-optimal mixture designs, are not appropriate. This study explores mixture designs that minimize the bias between the approximating first-degree polynomial and a second-degree polynomial response surface under LAD estimation. In contrast to the standard optimal mixture designs for least squares estimation, the efficient designs may contain elementary centroid design points of degree higher than two. An example of a portfolio with five assets illustrates the use of the proposed efficient mixture designs in determining the marginal contribution to risk of individual assets in the portfolio.
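The "weighted sum of absolute deviations" objective mentioned above is the Koenker–Bassett check (pinball) loss; at the median (τ = 0.5) it reduces to the LAD criterion. A minimal sketch of these two standard definitions (function names are ours):

```python
def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u):
    tau * u for u >= 0, (tau - 1) * u for u < 0."""
    return u * (tau - (1 if u < 0 else 0))

def lad_objective(residuals):
    """LAD objective: sum of absolute deviations.
    Equals twice the tau = 0.5 check loss summed over residuals."""
    return sum(2 * check_loss(r, 0.5) for r in residuals)
```

Minimizing the check loss over a constant fit yields the empirical τ-quantile, which is why the design criterion here differs from the squared-error-based D- and A-optimality.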

14.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys and probability matching priors. The intervals obtained from the Jeffreys prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the Bayesian tests are compared to those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys, probability matching, and two other priors are used.

15.
In this article, we establish the complete moment convergence of a moving-average process generated by a class of random variables satisfying a Rosenthal-type maximal inequality and a weak mean dominating condition. On the one hand, we give a correct proof for the case p = 1 in Ko (2015); on the other hand, we also consider the case αp = 1, which was not treated in Ko (2015). The results obtained in this article generalize corresponding results for some dependent sequences.

16.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. (2006). Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weighting provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of the methods of Pal et al. (2000), Wang et al. (2004), and Hung et al. (2008).
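The general idea behind SVD-based variable weighting is to score each variable by its loading on the leading right singular vector of the data matrix, so that noise variables receive small weights. The sketch below illustrates that generic idea via power iteration on the Gram matrix; it is an assumption-laden illustration, not the paper's specific WFKCL weighting, and all names are ours.

```python
def svd_variable_weights(data, iters=200):
    """Variable weights from the leading right singular vector of the
    n x p data matrix, computed by power iteration on the Gram matrix
    G = A^T A (pure Python). Squared loadings are normalized to sum to 1."""
    n, p = len(data), len(data[0])
    # Gram matrix G = A^T A (p x p)
    g = [[sum(data[i][a] * data[i][b] for i in range(n)) for b in range(p)]
         for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(g[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    sq = [t * t for t in v]
    s = sum(sq)
    return [t / s for t in sq]
```

On data where one column carries most of the variance, that column receives nearly all of the weight, which is the behavior a noise-variable down-weighting scheme relies on.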

17.
We revisit the generalized midpoint frequency polygons of Scott (1985) and the edge frequency polygons of Jones et al. (1998) and Dong and Zheng (2001). Their estimators are linear interpolants of the appropriate values above the bin centers or edges, those values being weighted averages of the heights of r (r ∈ ℕ) neighboring histogram bins. We propose a simple kernel evaluation method to generate weights for binned values. The proposed kernel method can provide near-optimal weights in the sense of minimizing the asymptotic mean integrated squared error. In addition, we prove that the discrete uniform weights minimize the variance of the generalized frequency polygon under some mild conditions. Analogous results are obtained for the generalized frequency polygon based on linearly prebinned data. Finally, we use two examples and a simulation study to compare the generalized midpoint and edge frequency polygons.
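The classical (r = 1) midpoint frequency polygon underlying these generalizations is simply linear interpolation between histogram density heights placed at the bin midpoints, with zero height half a bin outside the sample range. A minimal sketch (names are ours, and the generalized weighted averaging of the paper is omitted):

```python
def histogram_heights(data, x0, h, nbins):
    """Histogram density heights f_k = count_k / (n * h) for bins
    [x0 + k*h, x0 + (k+1)*h), k = 0, ..., nbins - 1."""
    n = len(data)
    counts = [0] * nbins
    for x in data:
        k = int((x - x0) / h)
        if 0 <= k < nbins:
            counts[k] += 1
    return [c / (n * h) for c in counts]

def midpoint_polygon(heights, x0, h, x):
    """Classical frequency polygon: linear interpolation between bin-midpoint
    heights, anchored at zero half a bin beyond each end."""
    mids = [x0 + (k + 0.5) * h for k in range(len(heights))]
    pts = ([(x0 - 0.5 * h, 0.0)] + list(zip(mids, heights))
           + [(x0 + (len(heights) + 0.5) * h, 0.0)])
    for (xl, yl), (xr, yr) in zip(pts, pts[1:]):
        if xl <= x <= xr:
            return yl + (yr - yl) * (x - xl) / (xr - xl)
    return 0.0
```

At a bin midpoint the polygon reproduces the histogram height exactly; between midpoints it interpolates, which is what reduces the bias order relative to the histogram itself.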

18.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE). It is based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). This new information measure is "length-biased" and shift dependent, assigning larger weights to larger values of the random variable. The properties of the WCRE and a formula relating the WCRE to the weighted Shannon entropy are given. Related applications in reliability theory are covered. Our results include inequalities and various bounds on the WCRE. The conditional WCRE and some of its properties are discussed. The empirical WCRE is proposed as an estimator of this new information measure. Finally, strong consistency and a central limit theorem are established.
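An empirical version of such a measure is straightforward to compute from a sample. The sketch below assumes the length-biased form WCRE(X) = −∫ x F̄(x) log F̄(x) dx on [0, max] with F̄ the empirical survival function, piecewise constant between order statistics; this specific definition is our reading of the abstract's "larger weights to larger values" description, not a formula quoted from the paper.

```python
import math

def empirical_wcre(sample):
    """Empirical weighted cumulative residual entropy, assuming
    WCRE(X) = -integral of x * Fbar(x) * log(Fbar(x)) dx,
    with Fbar the empirical survival function."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    prev = 0.0
    for i, x in enumerate(xs):
        fbar = 1.0 - i / n  # survival value on the interval [prev, x)
        if 0 < fbar < 1:    # endpoints contribute 0 since log(1) = 0
            # integral of t dt over [prev, x] is (x^2 - prev^2) / 2
            total -= (x * x - prev * prev) / 2.0 * fbar * math.log(fbar)
        prev = x
    return total
```

For the two-point sample {1, 2} the only contributing interval is [1, 2) with F̄ = 1/2, giving (3/4) log 2, which makes a convenient hand-checkable test case.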

19.
The two-period crossover design is one of the most commonly used designs in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect. It is known that ignoring the carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages. First, a preliminary test is conducted on the carryover effect. If the carryover effect is significant, the analysis is based only on data from period one; otherwise, it is based on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), using a mixture of two models: one with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach to the two-period crossover design, also based on a mixture model but using the commonly recommended Zellner–Siow g-prior. We provide simulation studies and a real data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).

20.
In this paper, we prove the complete convergence of weighted sums of negatively associated random variables with multidimensional indices. The main result generalizes Theorem 2.1 of Kuczmaszewska and Lagodowski (2011) to the case of weighted sums.
