Similar Articles (20 results)
1.
Let X1, X2, … be a sequence of stationary standardized Gaussian random fields. The almost sure limit theorem for the maxima of stationary Gaussian random fields is established. Our results extend and improve the results of Csáki and Gonchigdanzan (2002) and Choi (2010).
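For intuition about what an almost sure limit theorem for maxima states, the sketch below simulates the logarithmic average of the indicator 1{a_k(M_k − b_k) ≤ x} for an i.i.d. standard normal sequence (not the stationary random fields treated in the article) and compares it with the Gumbel limit; the sample size and evaluation point are arbitrary illustrative choices.

```python
# Simulation sketch of the almost sure limit theorem for maxima of i.i.d.
# standard normal variables: the logarithmic average of 1{a_k (M_k - b_k) <= x}
# should approach the Gumbel cdf exp(-exp(-x)). N and x are arbitrary choices.
import numpy as np

rng = np.random.default_rng(9)
N = 200_000
x = 1.0                                    # evaluation point of the limit law
z = rng.standard_normal(N)
running_max = np.maximum.accumulate(z)     # M_1, M_2, ..., M_N

k = np.arange(3, N + 1)                    # start at k = 3 so log log k is defined
a_k = np.sqrt(2 * np.log(k))
b_k = a_k - (np.log(np.log(k)) + np.log(4 * np.pi)) / (2 * a_k)
indicator = a_k * (running_max[2:] - b_k) <= x

log_average = np.sum(indicator / k) / np.log(N)
print(f"logarithmic average: {log_average:.3f}, "
      f"Gumbel limit: {np.exp(-np.exp(-x)):.3f}")
```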

2.
A complication in analyzing tumor data from a screening program is that the detected tumors tend to be slowly progressive ones, which gives rise to the left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh's (2008) approach can be extended to the case where each subject has a specific growth function. In this article, we present a general framework for the analysis of data from cancer screening studies under the linear transformation model, which includes Cox's model as a special case, and develop the corresponding estimation procedures. A simulation study is conducted to demonstrate the potential usefulness of the proposed estimators.
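As a concrete illustration of handling left-truncated survival data of the kind described above, the sketch below fits a Cox proportional hazards model with a late-entry (left-truncation) time using the `lifelines` package. The simulated data and column names are hypothetical, and this shows only the standard Cox analysis, not the linear transformation estimator proposed in the article.

```python
# Minimal sketch: Cox PH regression with left truncation via lifelines.
# The simulated data and column names are hypothetical illustrations.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
x = rng.binomial(1, 0.5, n)                            # binary covariate (e.g., treatment)
event_time = rng.exponential(scale=np.exp(-0.5 * x))   # latent failure times
entry_time = rng.uniform(0, 0.3, n)                    # left-truncation (entry) times
keep = event_time > entry_time                         # only subjects observed after entry

df = pd.DataFrame({"entry": entry_time[keep],
                   "time": event_time[keep],
                   "event": 1,                         # no censoring in this toy example
                   "x": x[keep]})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", entry_col="entry")
cph.print_summary()
```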

3.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys’ and probability matching priors. The intervals obtained from the Jeffreys’ prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test, using Bayesian methods, are compared to those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys’, probability matching, and two other priors are used.
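For intuition, the following sketch computes a simulation-based credible interval for a linear function c1·λ1 + c2·λ2 of two Poisson rates under independent Jeffreys priors, for which each posterior is Gamma(y + 1/2, t) in the shape/rate parameterization. The counts, exposures, and coefficients are hypothetical, and the weighted Monte Carlo scheme for the probability matching prior used in the article is not reproduced.

```python
# Sketch: equal-tailed credible interval for c1*lam1 + c2*lam2 under
# independent Jeffreys priors for Poisson rates (posterior Gamma(y + 1/2, t)).
# The data (y, t) and coefficients c are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
y = np.array([12, 25])        # observed Poisson counts
t = np.array([4.0, 5.0])      # exposures (e.g., person-years)
c = np.array([1.0, -1.0])     # linear function: difference of the two rates

draws = 100_000
lam = rng.gamma(shape=y + 0.5, scale=1.0 / t, size=(draws, 2))  # posterior draws
linear = lam @ c
lower, upper = np.percentile(linear, [2.5, 97.5])
print(f"95% credible interval for c'lambda: ({lower:.3f}, {upper:.3f})")
```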

4.
The two-period crossover design is a commonly used design in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect, and it is known that ignoring a carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages: first, a preliminary test for the carryover effect is conducted; if the carryover effect is significant, the analysis is based only on data from period one, otherwise it is based on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), using a mixture of two models, one with a carryover effect and one without; the indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach to the two-period crossover design that is also based on a mixture model, but uses the commonly recommended Zellner–Siow g-prior. We provide simulation studies and a real-data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
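To make the classical two-stage procedure concrete, here is a minimal sketch of a Grizzle-type analysis for an AB/BA crossover: a preliminary test for carryover based on subject totals, followed by either a first-period-only comparison or the usual within-subject crossover comparison. The data and the 10% preliminary-test level are illustrative assumptions, and the Bayesian mixture approach of the article is not implemented here.

```python
# Sketch of a Grizzle-type two-stage analysis for an AB/BA crossover.
# The data and the 10% preliminary-test level are illustrative assumptions.
import numpy as np
from scipy import stats

def grizzle_two_stage(y1_AB, y2_AB, y1_BA, y2_BA, alpha_carry=0.10):
    """y1_*, y2_*: period-1 and period-2 responses per sequence (arrays)."""
    # Stage 1: preliminary test of carryover using subject totals.
    totals_AB, totals_BA = y1_AB + y2_AB, y1_BA + y2_BA
    _, p_carry = stats.ttest_ind(totals_AB, totals_BA)
    if p_carry < alpha_carry:
        # Carryover suspected: use period-1 data only (parallel-group comparison).
        t, p = stats.ttest_ind(y1_AB, y1_BA)
        return {"carryover_p": p_carry, "analysis": "period 1 only", "t": t, "p": p}
    # Stage 2: usual crossover analysis on within-subject period differences.
    diff_AB, diff_BA = y1_AB - y2_AB, y1_BA - y2_BA
    t, p = stats.ttest_ind(diff_AB, diff_BA)
    return {"carryover_p": p_carry, "analysis": "both periods", "t": t, "p": p}

rng = np.random.default_rng(2)
out = grizzle_two_stage(rng.normal(1.0, 1, 12), rng.normal(0.2, 1, 12),
                        rng.normal(0.0, 1, 12), rng.normal(1.1, 1, 12))
print(out)
```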

5.
This article is concerned with minimax estimation of a scale parameter under the quadratic loss function when the family of densities is of location-scale type. We obtain results for the case where the scale parameter is bounded below by a known constant. Implications for the estimation of a lower-bounded scale parameter of an exponential distribution with unknown location are presented. Furthermore, classes of improved minimax estimators for the restricted parameter are derived using the Integral Expression for Risk Difference (IERD) approach of Kubokawa (1994). These classes are shown to include some existing estimators from the literature.

6.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for "slice-comonotonic" functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained some Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation, and we give several different sets of assumptions under which a Fubini theorem holds in this framework.

7.
The credibility formula has been developed in many fields of actuarial science. Building on Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson–gamma distributions with respect to several loss functions. A comparison study is also given.
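For readers unfamiliar with the credibility formula being extended here, the sketch below computes the classical Bühlmann credibility premium Z·x̄ + (1 − Z)·m with Z = n/(n + k); the claim data and structural parameters are hypothetical, and Payandeh's (2010) credibility factor for zero-inflated Poisson–gamma distributions is not reproduced.

```python
# Classical Buhlmann credibility premium as background to the credibility
# factor discussed above; the numbers used here are hypothetical.
import numpy as np

def credibility_premium(claims, collective_mean, k):
    """Z * individual mean + (1 - Z) * collective mean, with Z = n / (n + k)."""
    n = len(claims)
    z = n / (n + k)                       # credibility factor
    return z * np.mean(claims) + (1 - z) * collective_mean, z

claims = np.array([0, 1, 0, 2, 1])        # observed claim counts of one policyholder
premium, z = credibility_premium(claims, collective_mean=0.5, k=3.0)
print(f"credibility factor Z = {z:.3f}, credibility premium = {premium:.3f}")
```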

8.
This study considers efficient mixture designs for approximating the response surface of a quantile regression model, which is a second-degree polynomial, by a first-degree polynomial in the proportions of q components. Instead of the least squares estimation of traditional regression analysis, the objective function in quantile regression models is a weighted sum of absolute deviations, so the least absolute deviations (LAD) estimation technique should be used (Bassett and Koenker, 1982; Koenker and Bassett, 1978). Therefore, the standard optimal mixture designs for least squares estimation, such as the D-optimal or A-optimal mixture designs, are not appropriate. This study explores mixture designs that minimize the bias between the approximating first-degree polynomial and the second-degree polynomial response surface under LAD estimation. In contrast to the standard optimal mixture designs for least squares estimation, the efficient designs may contain elementary centroid design points of degree higher than two. An example of a portfolio with five assets illustrates the use of the proposed efficient mixture designs in determining the marginal contribution of risk by individual assets in the portfolio.
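Because the designs above are evaluated under least absolute deviations rather than least squares, the following sketch fits a first-degree mixture model (Scheffé form, no intercept) by LAD using median regression from `statsmodels`; the three-component data are hypothetical and no design optimization is attempted here.

```python
# Sketch: LAD (median regression) fit of a first-degree mixture model in the
# component proportions; the simulated three-component data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 60
x = rng.dirichlet(alpha=[1, 1, 1], size=n)       # mixture proportions summing to 1
beta = np.array([2.0, 1.0, 3.0])
y = x @ beta + rng.laplace(scale=0.3, size=n)    # heavy-tailed errors

# q = 0.5 gives the least absolute deviations fit; no intercept in Scheffe form.
lad_fit = sm.QuantReg(y, x).fit(q=0.5)
print(lad_fit.params)
```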

9.
The mean residual life of a life distribution X with a finite mean is defined by M(t) = E[X − t | X > t] for t ≥ 0. Kochar et al. (2000) provided an estimator of M when it is assumed to be decreasing. They showed that its asymptotic distribution is the same as that of the empirical estimate, but only under very stringent analytic and distributional assumptions. We provide a more general asymptotic theory under much weaker conditions. We also provide improved asymptotic confidence bands.
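As a baseline for the estimators discussed above, the sketch below computes the empirical (unrestricted) mean residual life, i.e., the average of X_i − t over observations exceeding t; the monotone (decreasing) estimator of Kochar et al. (2000) is not implemented here, and the exponential sample is a hypothetical illustration.

```python
# Empirical mean residual life: average excess over t among observations > t.
# The exponential sample is a hypothetical illustration (its true MRL is constant).
import numpy as np

def empirical_mrl(x, t_grid):
    x = np.asarray(x)
    out = np.full(len(t_grid), np.nan)
    for j, t in enumerate(t_grid):
        exceed = x[x > t]
        if exceed.size > 0:
            out[j] = np.mean(exceed - t)
    return out

rng = np.random.default_rng(4)
sample = rng.exponential(scale=2.0, size=500)
t_grid = np.linspace(0, 4, 9)
print(np.round(empirical_mrl(sample, t_grid), 3))   # should hover around 2.0
```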

10.
The present paper suggests an interesting and useful ramification of the unrelated-question randomized response model of Pal and Singh (2012) that can be used under any sampling scheme. We show, both theoretically and numerically, that the proposed model is more efficient than the Pal and Singh (2012) model.
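For background on the unrelated-question framework being refined here, a minimal sketch of the classical estimator with a known innocuous proportion is given below: each respondent answers the sensitive question with probability P and the unrelated question otherwise, so π̂_A = (λ̂ − (1 − P)·π_Y)/P, where λ̂ is the observed proportion of "yes" answers. The parameter values are hypothetical, and this is the classical estimator rather than the Pal and Singh (2012) model itself.

```python
# Classical unrelated-question randomized response estimator (known pi_Y).
# P, pi_Y, and the simulated survey are hypothetical illustrations.
import numpy as np

def unrelated_question_estimate(yes_prop, P, pi_Y, n):
    pi_A = (yes_prop - (1 - P) * pi_Y) / P                 # moment estimator
    var = yes_prop * (1 - yes_prop) / (n * P ** 2)         # approximate variance
    return pi_A, var

rng = np.random.default_rng(5)
n, P, pi_Y, true_pi_A = 1000, 0.7, 0.3, 0.2
ask_sensitive = rng.random(n) < P
answers = np.where(ask_sensitive, rng.random(n) < true_pi_A, rng.random(n) < pi_Y)
pi_hat, var_hat = unrelated_question_estimate(answers.mean(), P, pi_Y, n)
print(f"estimate = {pi_hat:.3f}, s.e. = {var_hat ** 0.5:.3f}")
```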

11.
This article considers several estimators of the ridge parameter k for the multinomial logit model, based on the work of Khalaf and Shukur (2005), Alkhamisi et al. (2006), and Muniz et al. (2012). The mean squared error (MSE) is used as the performance criterion, and a simulation study is conducted to compare the estimators. Based on the simulation study, we find that increasing the correlation between the independent variables and the number of regressors adversely affects the MSE; however, the MSE decreases as the sample size increases, even when the correlation between the independent variables is large. Based on the minimum MSE criterion, some useful estimators of the ridge parameter k are recommended for practitioners.
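As a reference point for the kind of quantity being tuned, the sketch below computes two classical ridge parameters, the Hoerl–Kennard estimator k = σ̂²/max α̂² and the Hoerl–Kennard–Baldwin estimator k = pσ̂²/α̂′α̂, on the canonical (eigenvector) scale of an ordinary linear ridge regression; this is an illustration only, not the multinomial-logit versions studied in the article, and the simulated data are hypothetical.

```python
# Sketch: classical ridge-parameter estimators (Hoerl-Kennard and HKB) computed
# on the canonical scale of a linear model; the simulated data are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 4
Z = rng.normal(size=(n, p))
X = Z + 0.9 * Z[:, [0]]                       # induce collinearity among regressors
beta = np.array([1.0, 0.5, -0.5, 0.2])
y = X @ beta + rng.normal(scale=1.0, size=n)

# OLS fit and canonical-form coefficients alpha = Q' beta_hat, with X'X = Q diag Q'.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - p)
_, Q = np.linalg.eigh(X.T @ X)
alpha_hat = Q.T @ beta_hat

k_HK = sigma2_hat / np.max(alpha_hat ** 2)            # Hoerl-Kennard (1970)
k_HKB = p * sigma2_hat / np.sum(alpha_hat ** 2)       # Hoerl-Kennard-Baldwin (1975)
print(f"k_HK = {k_HK:.4f}, k_HKB = {k_HKB:.4f}")
```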

12.
Gupta and Kirmani (2008) showed that the convex conditional mean function (CCMF) characterizes the distribution function completely. In this paper, we introduce a consistent estimator of the CCMF and call it the empirical convex conditional mean function (ECCMF). Then we construct a simple consistent test of fit based on the integrated squared difference between the ECCMF and the CCMF. The theoretical and asymptotic properties of the estimator ECCMF and the proposed test statistic are studied. The performance of the constructed test is investigated under different distributions using simulations.

13.
In this article, we discuss the linear kernel quantile estimator proposed by Parzen (1979). We establish a Bahadur representation, in the sense of almost sure convergence with rate log^{−α}n, for S-mixing random variable sequences, a dependence condition proposed by Berkes and Hörmann (2009). We also obtain the strong consistency of this estimator and its convergence rate.
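To fix ideas, here is a minimal sketch of a kernel quantile estimator in the spirit of Parzen (1979): the order statistics are averaged with weights obtained by integrating a kernel over the blocks [(i−1)/n, i/n]. A Gaussian kernel and a fixed bandwidth are arbitrary illustrative assumptions.

```python
# Kernel quantile estimator: weighted average of order statistics with weights
# obtained by integrating a Gaussian kernel over [(i-1)/n, i/n]; the bandwidth
# h below is an arbitrary illustrative choice.
import numpy as np
from scipy.stats import norm

def kernel_quantile(x, p, h=0.05):
    xs = np.sort(np.asarray(x))
    n = len(xs)
    grid = np.arange(n + 1) / n                          # 0, 1/n, ..., 1
    w = norm.cdf((grid[1:] - p) / h) - norm.cdf((grid[:-1] - p) / h)
    w /= w.sum()                                         # renormalize near the boundaries
    return np.sum(w * xs)

rng = np.random.default_rng(7)
sample = rng.normal(size=2000)
print(kernel_quantile(sample, 0.5), kernel_quantile(sample, 0.9))  # ~0 and ~1.28
```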

14.
We consider profile analysis with unequal covariance matrices under multivariate normality. In particular, we discuss this problem for high-dimensional data where the dimension is larger than the sample size. We propose three test statistics based on Bennett's (1951) transformation and the Dempster trace criterion proposed by Dempster (1958). We derive the null distributions as well as the non-null distributions of the test statistics. Finally, in order to investigate the accuracy of the proposed statistics, we perform Monte Carlo simulations for some selected values of the parameters.

15.
Repeated measurement designs are widely used in medicine, pharmacology, animal sciences, and psychology. In this paper, the works of Iqbal and Tahir (2009) and Iqbal, Tahir, and Ghazali (2010) are generalized for the construction of circular-balanced and circular strongly balanced repeated measurement designs through the method of cyclic shifts for three periods.

16.
Cooray and Ananda (2008) pioneered a lifetime model commonly used in reliability studies. Based on this distribution, we propose a new model called the odd log-logistic generalized half-normal distribution for describing fatigue lifetime data. Various structural properties of the new distribution are derived, and we discuss maximum likelihood estimation of the model parameters. Simulation studies for different parameter settings and sample sizes assess the performance of the new lifetime model. The model can be very useful, and its superiority is illustrated by means of a real dataset.
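To indicate how such a family can be built, the sketch below composes an odd log-logistic generator, F(x) = G(x)^a / [G(x)^a + (1 − G(x))^a], with a generalized half-normal baseline CDF of the form G(x) = 2Φ((x/θ)^α) − 1. The parameter values are hypothetical, and the exact parameterization used in the article should be checked against the source papers.

```python
# Sketch of an odd log-logistic generator applied to a generalized half-normal
# baseline CDF; parameter values and the exact baseline parameterization are
# assumptions to be checked against the source papers.
import numpy as np
from scipy.stats import norm

def ghn_cdf(x, alpha, theta):
    """Generalized half-normal baseline CDF (assumed Cooray-Ananda form), x > 0."""
    return 2.0 * norm.cdf((x / theta) ** alpha) - 1.0

def oll_cdf(x, a, alpha, theta):
    """Odd log-logistic transform of the baseline CDF."""
    g = ghn_cdf(x, alpha, theta)
    return g ** a / (g ** a + (1.0 - g) ** a)

x = np.linspace(0.1, 5, 5)
print(np.round(oll_cdf(x, a=0.8, alpha=1.5, theta=2.0), 4))
```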

17.
This article introduces a new model called the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, as an extension of the BAR model in Li et al. (2015), can capture the buffering phenomenon of a time series in both the conditional mean and the conditional variance, and thus provides a new way to study the nonlinearity of time series. An application to several exchange rates highlights the importance of the BAR-GARCH model compared with the existing AR-GARCH and threshold AR-GARCH models.

18.
Filipiak and Markiewicz (2012) proved the universal optimality of circular weakly neighbor balanced designs (CWNBDs) under the interference model with fixed neighbor effects among the class of complete block designs. In two special cases where a CWNBD cannot exist, Filipiak et al. (2012) characterized D-optimal designs. The aim of this paper is to show the universal optimality of CWNBDs and to characterize D-optimal designs under the interference model with random neighbor effects.

19.
The Hosmer–Lemeshow test is a widely used method for evaluating the goodness of fit of logistic regression models, but, like other chi-square tests, its power is strongly influenced by the sample size. Paul, Pennell, and Lemeshow (2013) considered using a large number of groups for large data sets in order to standardize the power, but simulations show that their method performs poorly for some models, and it does not work when the sample size exceeds 25,000. In the present paper, we propose a modified Hosmer–Lemeshow test based on estimation and standardization of the distribution parameter of the Hosmer–Lemeshow statistic. We provide a mathematical derivation for obtaining the critical value and power of our test. Simulations show that our method satisfactorily standardizes the power of the Hosmer–Lemeshow test; it is especially recommended for sufficiently large data sets, for which the power is rather stable. A bank marketing data set is also analyzed for comparison with existing methods.
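For reference, the standard Hosmer–Lemeshow statistic with G groups formed from quantiles of the fitted probabilities is sketched below and compared with a chi-square distribution on G − 2 degrees of freedom; the simulated logistic data are hypothetical, and the modification proposed in the article (standardizing the distribution parameter) is not implemented here.

```python
# Standard Hosmer-Lemeshow goodness-of-fit statistic with G groups formed from
# quantiles of the fitted probabilities; the simulated logistic data are hypothetical.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, n_groups=10):
    order = np.argsort(p_hat)
    groups = np.array_split(order, n_groups)          # near-equal-size risk groups
    stat = 0.0
    for idx in groups:
        n_g = len(idx)
        obs = y[idx].sum()                            # observed events in the group
        exp = p_hat[idx].sum()                        # expected events in the group
        pbar = exp / n_g
        stat += (obs - exp) ** 2 / (n_g * pbar * (1.0 - pbar))
    df = n_groups - 2
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))                     # true probabilities used as "fitted"
```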

20.
The aim of this article is the construction of a test statistic for the detection of changes in vector autoregressive (VAR) models in which both the autoregressive parameters and the variance matrix of the error term are subject to change. The approximating distribution of the proposed statistic is the Gumbel distribution. The proof rests on the approximation of weakly dependent random vectors by independent ones and on an application of Horváth's extension of the Darling–Erdős extremal result to random vectors; see Darling and Erdős (1956) and Horváth (1993). The test statistic is a modification of the likelihood ratio.

