Similar Documents
20 similar documents found (search time: 46 ms)
1.
Repeated measurement designs are widely used in medicine, pharmacology, animal sciences, and psychology. In this paper, the works of Iqbal and Tahir (2009) and Iqbal, Tahir, and Ghazali (2010) are generalized for the construction of circular-balanced and circular strongly balanced repeated measurements designs through the method of cyclic shifts for three periods.

2.
Sanaullah et al. (2014) suggested generalized exponential chain ratio estimators under a stratified two-phase sampling scheme for estimating the finite population mean. However, the bias and mean square error (MSE) expressions presented in that work need some corrections, and consequently the efficiency comparison based on them also requires corrections. In this article, we revisit the Sanaullah et al. (2014) estimator and provide the correct bias and MSE expressions. We also propose an estimator that is more efficient than several competing estimators, including the classes of estimators in Sanaullah et al. (2014). Three real datasets are used for efficiency comparisons.

3.
The objective of this paper is to study U-type designs for Bayesian nonparametric response surface prediction under correlated errors. The asymptotic Bayes criterion is developed following the asymptotic approach of Mitchell et al. (1994) for a more general covariance kernel proposed by Chatterjee and Qin (2011). A relationship between the asymptotic Bayes criterion and other criteria, such as orthogonality and aberration, is then developed. A lower bound for the criterion is also obtained, and numerical results show that this lower bound is tight. The established results generalize those of Yue et al. (2011) from the symmetrical case to asymmetrical U-type designs.

4.
Adaptive designs find an important application in the estimation of unknown percentiles for an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to simultaneously estimate multiple percentiles of an unknown dose-response curve via generalized Polya urns. In this article, we examine the properties of the design proposed by Mugno et al. (2004) when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss in efficiency when compared to the original compound urn design.
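To illustrate the general flavour of urn-based adaptive allocation with delayed responses, the sketch below simulates a generic, hypothetical urn rule in Python. It is not the Mugno et al. (2004) compound urn design; the dose levels, ball-addition rule, and delay mechanism are assumptions chosen only for illustration.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def urn_trial(p_resp, n_subjects=60, delay=3, add=2.0):
    """Generic (hypothetical) urn sketch with delayed binary responses.
    Doses are drawn with probability proportional to ball counts; when a
    response arrives `delay` subjects later, `add` balls are placed on the
    same dose for a success and on the next-higher dose for a failure."""
    K = len(p_resp)
    urn = np.ones(K)                      # start with one ball per dose
    pending = deque()                     # (subject index, dose, response)
    assignments = []
    for i in range(n_subjects):
        dose = rng.choice(K, p=urn / urn.sum())
        assignments.append(dose)
        pending.append((i, dose, rng.random() < p_resp[dose]))
        # release responses that become available after the fixed delay
        while pending and pending[0][0] <= i - delay:
            _, d, success = pending.popleft()
            if success:
                urn[d] += add
            else:
                urn[min(d + 1, K - 1)] += add
    return assignments, urn

# usage with hypothetical response probabilities per dose
assignments, urn = urn_trial(p_resp=[0.1, 0.3, 0.5, 0.7])
```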

5.
In a mixture experiment, the response depends on the mixing proportions of the components present in the mixture. Optimum designs are available for the estimation of parameters of the models proposed in such situations. However, these designs are found to include the vertex points of the simplex Ξ defining the experimental region, which are not mixtures in the true sense. Recently, Mandal et al. (2015) derived optimum designs when the experiment is confined to an ellipsoidal region within Ξ, which does not include the vertices of Ξ. In this paper, an attempt has been made to find optimum designs when the experimental region is a simplex or a cuboidal region inside Ξ that does not contain the extreme points.

6.
Recently, Koyuncu et al. (2013) proposed an exponential type estimator to improve the efficiency of the mean estimator based on the randomized response technique. In this article, we propose an improved exponential type estimator which is more efficient than the Koyuncu et al. (2013) estimator, which in turn was shown to be more efficient than the usual mean estimator, the ratio estimator, the regression estimator, and the Gupta et al. (2012) estimator. Under the simple random sampling without replacement (SRSWOR) scheme, bias and mean square error expressions for the proposed estimator are obtained up to first order of approximation, and comparisons are made with the Koyuncu et al. (2013) estimator. A simulation study is used to observe the performance of the two estimators. Theoretical findings are also supported by a numerical example with real data. We also show how to extend the proposed estimator to the case when more than one auxiliary variable is available.
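For orientation only, a classical exponential ratio-type estimator of the population mean under SRSWOR, in the same spirit as the exponential-type estimators discussed above (the Bahl and Tuteja form), can be written as

$$\hat{\bar Y}_{\mathrm{exp}} = \bar z\,\exp\!\left(\frac{\bar X - \bar x}{\bar X + \bar x}\right),$$

where $\bar z$ is the sample mean of the (scrambled) study variable, $\bar x$ the sample mean of the auxiliary variable, and $\bar X$ its known population mean. The specific Koyuncu et al. (2013) and proposed estimators are generalizations of this idea and are not reproduced here.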

7.
This article introduces a new model called the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, as an extension of the BAR model in Li et al. (2015), can capture the buffering phenomena of time series in both the conditional mean and the conditional variance. Thus, it provides a new way to study the nonlinearity of time series. An application to several exchange rates highlights the importance of the BAR-GARCH model compared with the existing AR-GARCH and threshold AR-GARCH models.

8.
The construction of some wider families of continuous distributions obtained recently has attracted applied statisticians due to the analytical facilities available for easy computation of special functions in programming software. We study some general mathematical properties of the log-gamma-generated (LGG) family defined by Amini, MirMostafaee, and Ahmadi (2014). It generalizes the gamma-generated class pioneered by Ristić and Balakrishnan (2012). We present some of its special models and derive explicit expressions for the ordinary and incomplete moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves, Shannon entropy, Rényi entropy, reliability, and order statistics. Models in this family are compared with nested and non-nested models. Further, we propose and study a new LGG family regression model. We demonstrate that the new regression model can be applied to censored data since it represents a parametric family of models, and therefore it can be used effectively in the analysis of survival data. We show that the proposed models can provide consistently better fits in some applications to real data sets.
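As a point of reference, the gamma-generated class of Ristić and Balakrishnan (2012) is usually written (up to notational conventions, so this should be checked against the source) as

$$G(x) = 1 - \frac{1}{\Gamma(a)}\int_{0}^{-\log F(x)} t^{\,a-1} e^{-t}\,dt, \qquad a > 0,$$

where $F$ is a baseline cdf. The LGG family of Amini, MirMostafaee, and Ahmadi (2014) is a further generalization whose exact form is not reproduced here.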

9.
This paper studies the allocation of two nonidentical active redundancies in series systems in terms of the reversed hazard rate order and the hazard rate order, which generalizes some results established in Valdés and Zequeira (2003, 2006).
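For reference, the two stochastic orders used above are commonly defined as follows: for random variables $X$ and $Y$ with distribution functions $F_X, F_Y$ and survival functions $\bar F_X, \bar F_Y$,

$$X \le_{\mathrm{hr}} Y \iff \frac{\bar F_Y(t)}{\bar F_X(t)} \ \text{is increasing in } t, \qquad X \le_{\mathrm{rh}} Y \iff \frac{F_Y(t)}{F_X(t)} \ \text{is increasing in } t.$$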

10.
This paper treats the problem of stochastic comparisons of the extreme order statistics arising from heterogeneous beta distributions. Some sufficient conditions involving majorization-type partial orders are provided for comparing the extreme order statistics in the sense of various magnitude orderings, including the likelihood ratio order, the reversed hazard rate order, the usual stochastic order, and the usual multivariate stochastic order. The results established here strengthen and extend those of Kochar and Xu (2007), Mao and Hu (2010), Balakrishnan et al. (2014), and Torrado (2015). A real application in system assembly and some numerical examples are also presented to illustrate the theoretical results.
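Two of the notions above in their standard forms: for $X$ and $Y$ with densities $f$ and $g$, the likelihood ratio order is

$$X \le_{\mathrm{lr}} Y \iff \frac{g(t)}{f(t)} \ \text{is increasing in } t,$$

and for vectors $\mathbf{x},\mathbf{y}\in\mathbb{R}^n$ with decreasing rearrangements $x_{[1]}\ge\cdots\ge x_{[n]}$ and $y_{[1]}\ge\cdots\ge y_{[n]}$, majorization is

$$\mathbf{x} \preceq \mathbf{y} \iff \sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]}\ \ (k=1,\dots,n-1) \quad\text{and}\quad \sum_{i=1}^{n} x_{[i]} = \sum_{i=1}^{n} y_{[i]}.$$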

11.
This study considers efficient mixture designs for the approximation of the response surface of a quantile regression model, which is a second-degree polynomial, by a first-degree polynomial in the proportions of q components. Instead of the least squares estimation of traditional regression analysis, the objective function in quantile regression models is a weighted sum of absolute deviations, and the least absolute deviations (LAD) estimation technique should be used (Bassett and Koenker, 1982; Koenker and Bassett, 1978). Therefore, the standard optimal mixture designs, such as the D-optimal or A-optimal mixture designs for least squares estimation, are not appropriate. This study explores mixture designs that minimize the bias between the approximated first-degree polynomial and the second-degree polynomial response surfaces under LAD estimation. In contrast to the standard optimal mixture designs for least squares estimation, the efficient designs may contain elementary centroid design points of degree higher than two. An example of a portfolio with five assets is given to illustrate the proposed efficient mixture designs in determining the marginal contribution of risks by individual assets in the portfolio.
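To make the estimation criterion explicit, quantile regression at level $\tau$ solves

$$\min_{\beta}\ \sum_{i=1}^{n} \rho_\tau\!\left(y_i - \mathbf{x}_i^{\top}\beta\right), \qquad \rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),$$

and LAD estimation corresponds to the median case $\tau = 1/2$, for which $\rho_{1/2}(u) = |u|/2$.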

12.
Techniques used in variability assessment are subsequently used to draw conclusions regarding the "spread"/uniformity of data curves. Due to the limitations of these techniques, they are not adequate for circumstances where data manifest with multiple peaks. Examples of such manifestations (in three-dimensional space) include under-foot pressure distributions recorded for different types of footwear (Becerro-de-Bengoa-Vallejo et al., 2014; Cibulka et al., 1994; Davies et al., 2003), surface textures and interfaces designed to impact friction, and molecular surface structures such as viral epitopes (Torras and Garcia-Valls, 2004; Pacejka, 1997; Fustaffson, 1997). This article proposes a technique for generating a single variable, Λ, that quantifies the uniformity of such surfaces. We define and validate this technique using several mathematical and graphical models.

13.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE). It is based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). This new information measure is a "length-biased," shift-dependent measure that assigns larger weights to larger values of the random variable. The properties of the WCRE and a formula relating the WCRE and the weighted Shannon entropy are given. Related results in reliability theory are also covered. Our results include inequalities and various bounds for the WCRE. The conditional WCRE and some of its properties are discussed. The empirical WCRE is proposed to estimate this new information measure. Finally, strong consistency and a central limit theorem are established.
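For a nonnegative random variable $X$ with survival function $\bar F$, the cumulative residual entropy of Rao et al. (2004) is

$$\mathcal{E}(X) = -\int_{0}^{\infty} \bar F(x)\,\log \bar F(x)\,dx,$$

and a length-biased weighted version consistent with the description above would take the form

$$\mathcal{E}^{w}(X) = -\int_{0}^{\infty} x\,\bar F(x)\,\log \bar F(x)\,dx,$$

although the exact WCRE definition and weight function should be taken from the article itself.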

14.
In recent articles, Fajardo et al. (2009) and Reisen and Fajardo (2012) proposed an alternative semiparametric estimator of the fractional parameter in ARFIMA models which is robust to the presence of additive outliers. The results are very interesting; however, they use samples of 300 or 800 observations, which are rarely available in macroeconomics. In order to perform a comparison, I estimate the fractional parameter using the procedure of Geweke and Porter-Hudak (1983) augmented with dummy variables associated with the (previously) detected outliers, using the statistic τd suggested by Perron and Rodríguez (2003). Compared with Fajardo et al. (2009) and Reisen and Fajardo (2012), I find better results for the mean and bias of the fractional parameter when T = 100, and the results in terms of the standard deviation and the MSE are very similar. However, for larger sample sizes such as 300 or 800, the robust procedure performs better. Empirical applications to seven monthly Latin American inflation series with very small sample sizes contaminated by additive outliers are discussed.
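A minimal sketch of the plain Geweke and Porter-Hudak (1983) log-periodogram regression, without the outlier-dummy augmentation described above and with an assumed bandwidth of m = floor(sqrt(T)), might look as follows in Python:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke and Porter-Hudak (1983) log-periodogram estimate of the
    fractional parameter d.  Minimal sketch: bandwidth m defaults to
    floor(T**0.5); no outlier dummies are included."""
    x = np.asarray(x, dtype=float)
    T = x.size
    if m is None:
        m = int(np.floor(T ** 0.5))
    lam = 2.0 * np.pi * np.arange(1, m + 1) / T            # Fourier frequencies
    dft = np.fft.fft(x - x.mean())
    I = (np.abs(dft[1:m + 1]) ** 2) / (2.0 * np.pi * T)    # periodogram ordinates
    y = np.log(I)
    z = np.log(4.0 * np.sin(lam / 2.0) ** 2)               # GPH regressor
    z_c = z - z.mean()
    slope = np.sum(z_c * (y - y.mean())) / np.sum(z_c ** 2)
    return -slope                                          # d is minus the slope

# usage (hypothetical series): d_hat = gph_estimate(inflation_series)
```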

15.
Sample size estimation for comparing the rates of change in two-arm repeated measurements has been investigated by many researchers. In contrast, the literature has paid relatively less attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004) and Zhang and Ahn (2013) have presented sample size formulas to compare the rates of change and time-averaged responses in multi-arm trials using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing the rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed sample size formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an epileptic clinical trial example.

16.
Recently, Abbasnejad et al. (2010) proposed a measure of uncertainty based on the survival function, called the survival entropy of order α. They also proposed a dynamic form of the survival entropy of order α. In this paper, we derive the weighted forms of these measures. The properties of the new measures are also discussed.
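For context only (the precise definition should be checked against Abbasnejad et al., 2010), a Rényi-type survival entropy of order α for a nonnegative $X$ with survival function $\bar F$ is typically of the form

$$\mathcal{E}_{\alpha}(X) = \frac{1}{1-\alpha}\,\log \int_{0}^{\infty} \bar F^{\alpha}(x)\,dx, \qquad \alpha > 0,\ \alpha \neq 1,$$

with weighted versions obtained by inserting a weight function such as $w(x) = x$ into the integrand.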

17.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals derived here using the Jeffreys' and probability matching priors. The intervals obtained from the Jeffreys' prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the Bayesian tests are compared to those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys' prior, the probability matching prior, and two other priors are used.
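As an illustration of the Jeffreys'-prior route only (the probability matching prior requires the weighted Monte Carlo step mentioned above, which is not shown), a posterior simulation for a linear function of independent Poisson rates could be sketched as follows; the counts, exposures, and coefficients in the usage line are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def jeffreys_interval(y, n, a, level=0.95, draws=100_000):
    """Monte Carlo credible interval for sum_k a_k * lambda_k, where the
    lambda_k are independent Poisson rates, under the Jeffreys prior
    pi(lambda) ~ lambda**(-1/2).  y[k] is the total count and n[k] the
    exposure (or sample size) in group k."""
    y, n, a = map(np.asarray, (y, n, a))
    # posterior: lambda_k | y_k ~ Gamma(shape = y_k + 1/2, rate = n_k)
    lam = rng.gamma(shape=y + 0.5, scale=1.0 / n, size=(draws, len(y)))
    g = lam @ a
    lo, hi = np.quantile(g, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# usage with hypothetical data: difference of two Poisson rates
print(jeffreys_interval(y=[34, 21], n=[100, 80], a=[1.0, -1.0]))
```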

18.
By using the medical data analyzed by Kang et al. (2007), a Bayesian procedure is applied to obtain control limits for the coefficient of variation. Reference and probability matching priors are derived for a common coefficient of variation across the range of sample values. By simulating the posterior predictive density function of a future coefficient of variation, it is shown that the control limits are effectively identical to those obtained by Kang et al. (2007) for the specific dataset they used. This article illustrates the flexibility and unique features of the Bayesian simulation method for obtaining posterior distributions, predictive intervals, and run-lengths in the case of the coefficient of variation. A simulation study shows that the 95% Bayesian confidence intervals for the coefficient of variation have the correct frequentist coverage.
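A rough sketch of the posterior predictive simulation idea, assuming a normal model with the standard noninformative prior p(mu, sigma^2) proportional to 1/sigma^2 rather than the reference and probability matching priors derived in the article (so this illustrates the simulation mechanics, not the article's priors); the 0.00135/0.99865 quantiles mimic conventional three-sigma control limits:

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_cv_limits(x, m=None, draws=50_000, probs=(0.00135, 0.99865)):
    """Simulate the posterior predictive distribution of the coefficient of
    variation of a future sample of size m under a normal model with the
    noninformative prior 1/sigma^2, and return approximate control limits."""
    x = np.asarray(x, float)
    n = x.size
    m = n if m is None else m
    xbar, s2 = x.mean(), x.var(ddof=1)
    cvs = np.empty(draws)
    for b in range(draws):
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)     # sigma^2 | data
        mu = rng.normal(xbar, np.sqrt(sigma2 / n))       # mu | sigma^2, data
        xf = rng.normal(mu, np.sqrt(sigma2), size=m)     # future sample
        cvs[b] = xf.std(ddof=1) / xf.mean()
    return np.quantile(cvs, probs)                       # (LCL, UCL)

# usage with hypothetical positive-valued data
lcl, ucl = predictive_cv_limits(rng.normal(10.0, 1.5, size=30))
```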

19.
In this article, we study the complete convergence of sequences of coordinatewise asymptotically negatively associated random vectors in Hilbert spaces. We also show that some related results for coordinatewise negatively associated random vectors in Huan, Quang, and Thuan (2014) still hold under this concept.

20.
We revisit the generalized midpoint frequency polygons of Scott (1985) and the edge frequency polygons of Jones et al. (1998) and Dong and Zheng (2001). Their estimators are linear interpolants of the appropriate values above the bin centers or edges, those values being weighted averages of the heights of r (r ∈ ℕ) neighboring histogram bins. We propose a simple kernel evaluation method to generate weights for binned values. The proposed kernel method can provide near-optimal weights in the sense of minimizing the asymptotic mean integrated square error. In addition, we prove that the discrete uniform weights minimize the variance of the generalized frequency polygon under some mild conditions. Analogous results are obtained for the generalized frequency polygon based on linearly prebinned data. Finally, we use two examples and a simulation study to compare the generalized midpoint and edge frequency polygons.
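As a baseline illustration, the classical (unweighted, r = 1) midpoint frequency polygon simply interpolates the histogram density heights at the bin centers; the generalized weighted versions and kernel-based weights studied above are not implemented in this sketch:

```python
import numpy as np

def midpoint_frequency_polygon(x, bins=20):
    """Classical midpoint frequency polygon (Scott, 1985): linearly
    interpolate the histogram density heights above the bin centers."""
    heights, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return lambda t: np.interp(t, centers, heights, left=0.0, right=0.0)

# usage with simulated data
rng = np.random.default_rng(0)
fp = midpoint_frequency_polygon(rng.normal(size=500))
print(fp(np.array([-1.0, 0.0, 1.0])))
```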
