Found 20 similar records; search took 218 ms.
1.
The two-period crossover design is one of the most commonly used designs in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect. It is known that ignoring the carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages: first, a preliminary test is conducted on the carryover effect; if the carryover effect is significant, the analysis is based only on data from period one, and otherwise on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), which uses a mixture of two models: a model with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach to the two-period crossover design that is also based on a mixture model, but uses the commonly recommended Zellner-Siow g-prior. We provide simulation studies and a real data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
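Grizzle's two-stage idea can be sketched as follows. The data, group sizes, and the preliminary-test level of 10% below are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 2x2 crossover data: rows are subjects, columns are periods.
# Group AB receives treatment A then B; group BA the reverse.
ab = rng.normal([5.0, 4.0], 1.0, size=(20, 2))  # treatment A adds +1 on average
ba = rng.normal([4.0, 5.0], 1.0, size=(20, 2))

# Stage 1: preliminary test of carryover using subject totals.
_, p_carry = stats.ttest_ind(ab.sum(axis=1), ba.sum(axis=1))

if p_carry < 0.10:   # carryover detected: analyze period-one data only
    _, p_trt = stats.ttest_ind(ab[:, 0], ba[:, 0])
else:                # no carryover: use within-subject period differences
    _, p_trt = stats.ttest_ind(ab[:, 0] - ab[:, 1], ba[:, 0] - ba[:, 1])
print(round(p_trt, 4))
```

The Bayesian alternatives discussed in the abstract replace this hard preliminary-test switch with a weighted mixture of the two analyses.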
2.
In this article, we discuss the linear kernel quantile estimator proposed by Parzen (1979). We establish a Bahadur representation, in the sense of almost sure convergence at the rate log^(-α) n, for S-mixing random variable sequences, a dependence condition proposed by Berkes (2009). We also obtain the strong consistency of this estimator and its convergence rate.
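A minimal sketch of a kernel-smoothed quantile estimator in the spirit of Parzen (1979): a weighted average of order statistics with kernel weights. The Gaussian kernel and the bandwidth are illustrative choices, not the article's exact construction.

```python
import numpy as np
from scipy.stats import norm

def kernel_quantile(x, p, h=0.05):
    """Kernel quantile estimator: sum over order statistics x_(i) of the
    Gaussian-kernel mass assigned to the interval ((i-1)/n, i/n] around p."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    grid = np.arange(n + 1) / n                               # 0, 1/n, ..., 1
    w = norm.cdf((grid[1:] - p) / h) - norm.cdf((grid[:-1] - p) / h)
    return np.sum(w * xs) / np.sum(w)

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
print(round(kernel_quantile(x, 0.5), 3))  # close to the true median 0
```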
3.
Qunying Wu 《统计学通讯:理论与方法》2017,46(8):3667-3675
Let X1, X2, … be a sequence of stationary standardized Gaussian random fields. The almost sure limit theorem for the maxima of stationary Gaussian random fields is established. Our results extend and improve the results in Csáki and Gonchigdanzan (2002) and Choi (2010).
4.
Buffered Autoregressive Models With Conditional Heteroscedasticity: An Application to Exchange Rates
This article introduces a new model called the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, an extension of the BAR model in Li et al. (2015), can capture buffering phenomena of a time series in both the conditional mean and the conditional variance, and thus provides a new way to study the nonlinearity of a time series. Compared with the existing AR-GARCH and threshold AR-GARCH models, an application to several exchange rates highlights the importance of the BAR-GARCH model.
5.
Pao-Sheng Shen 《统计学通讯:理论与方法》2017,46(4):1916-1926
A complication in analyzing tumor data is that the tumors detected in a screening program tend to be slowly progressive tumors, the so-called left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh's (2008) approach can be extended to the case where each subject has a specific growth function. In this article, we present a general framework for the analysis of data from cancer screening studies under the linear transformation model, which includes Cox's model as a special case, and develop the corresponding estimation procedures. A simulation study is conducted to demonstrate the potential usefulness of the proposed estimators.
6.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys' and probability matching priors. The intervals obtained from the Jeffreys' prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test using Bayesian methods are compared with those of the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys' prior, the probability matching prior, and two other priors are used.
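For reference, the Jeffreys' prior pi(lambda) proportional to lambda^(-1/2) for a Poisson mean yields a Gamma posterior, from which credible intervals follow directly. The data below are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
lam_true = 3.0
x = rng.poisson(lam_true, size=50)

# With x_1, ..., x_n iid Poisson(lambda), the Jeffreys' prior gives
# lambda | x ~ Gamma(shape = sum(x) + 1/2, rate = n).
n = x.size
shape = x.sum() + 0.5
lo, hi = gamma.ppf([0.025, 0.975], a=shape, scale=1.0 / n)
print(round(lo, 3), round(hi, 3))  # equal-tailed 95% credible interval
```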
7.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. Shared frailty models were suggested for analyzing bivariate data on related survival times (e.g., matched-pairs experiments, twin, or family data). These models are usually based on the assumption that frailty acts multiplicatively on the hazard rate; in this article, we assume instead that frailty acts additively on the hazard rate. We introduce shared inverse Gaussian frailty models with three different baseline distributions, namely the generalized log-logistic, the generalized Weibull, and the exponential power distribution, and propose a Bayesian estimation procedure using Markov chain Monte Carlo techniques to estimate the model parameters. We apply these models to the real-life bivariate survival data of McGilchrist and Aisbett (1991) on kidney infection, and a better model is suggested for the data.
8.
Minimax estimators for the lower-bounded scale parameter of a location-scale family of distributions
This article is concerned with minimax estimation of a scale parameter under the quadratic loss function when the family of densities is of location-scale type. We obtain results for the case when the scale parameter is bounded below by a known constant. Implications for the estimation of a lower-bounded scale parameter of an exponential distribution with unknown location are presented. Furthermore, classes of improved minimax estimators for the restricted parameter are derived using the Integral Expression for Risk Difference (IERD) approach of Kubokawa (1994). These classes are shown to include some existing estimators from the literature.
9.
This paper proposes a bivariate version of the univariate discrete generalized geometric distribution considered by Gómez-Déniz (2010). The proposed bivariate distribution can have a positive or negative correlation coefficient, which makes it useful for modeling dependent bivariate count data. After discussing some of its properties, maximum likelihood estimation is discussed. Two illustrative examples are given to fit the new bivariate distribution and demonstrate its usefulness.
10.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for "slice-comonotonic" functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained some Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation, and we give several different sets of assumptions under which it holds.
11.
Amir T. Payandeh Najafabadi Fatemeh Atatalab Maryam Omidi Najafabadi 《统计学通讯:理论与方法》2017,46(1):415-426
The credibility formula has been developed in many fields of actuarial science. Building on Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson-gamma distributions with respect to several loss functions. A comparative study is given.
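The underlying credibility idea can be sketched with the classical Bühlmann form Z = n/(n + k); Payandeh's (2010) factor is derived differently, so this is only a simplified illustration with made-up claim figures.

```python
import numpy as np

def credibility_premium(claims, collective_mean, k):
    """Classical Buehlmann-style credibility premium: a weighted average of
    the individual claims experience and the collective mean, with
    credibility factor Z = n / (n + k)."""
    n = len(claims)
    z = n / (n + k)
    return z * np.mean(claims) + (1 - z) * collective_mean

# Four years of individual experience against a collective mean of 90.
premium = credibility_premium([120., 80., 100., 140.], collective_mean=90., k=2.)
print(round(premium, 2))  # → 103.33
```

As k grows, Z shrinks toward 0 and the premium reverts to the collective mean; at k = 0 it equals the individual sample mean.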
12.
This article proposes new symmetric and asymmetric distributions by applying methods analogous to those in Kim (2005) and Arnold et al. (2009) to the exponentiated normal distribution studied in Durrans (1992), which we call the power-normal (PN) distribution. The proposed bimodal extension, the main focus of the paper, is called the bimodal power-normal model and is denoted by BPN(α), where α is the asymmetry parameter. The authors give some properties, including moments and maximum likelihood estimation. Two important features of the proposed model are that its normalizing constant has a simple closed form and that the Fisher information matrix is nonsingular, guaranteeing large-sample properties of the maximum likelihood estimators. Finally, simulation studies and real applications show that the proposed model performs well in both situations.
13.
Vinicius Fernando Calsavara Agatha Sacramento Rodrigues Vera Lúcia Damasceno Tomazella Mário de Castro 《统计学通讯:理论与方法》2017,46(19):9763-9776
In this article, we propose a flexible cure rate model, an extension of the Cancho et al. (2011) model, by incorporating a power variance function (PVF) frailty term in the latent risk. The model is more flexible in terms of dispersion and also quantifies the unobservable heterogeneity. Parameter estimation is carried out by maximum likelihood, and Monte Carlo simulation studies are conducted to evaluate the performance of the proposed model. The practical relevance of the model is illustrated on a real data set on the prevention of cancer recurrence.
14.
This study considers efficient mixture designs for approximating the response surface of a quantile regression model, a second-degree polynomial, by a first-degree polynomial in the proportions of q components. Instead of the least squares estimation of traditional regression analysis, the objective function in quantile regression models is a weighted sum of absolute deviations, so the least absolute deviations (LAD) estimation technique should be used (Bassett and Koenker, 1982; Koenker and Bassett, 1978). The standard optimal mixture designs for least squares estimation, such as the D-optimal or A-optimal designs, are therefore not appropriate. This study explores mixture designs that minimize the bias between the approximating first-degree polynomial and the second-degree polynomial response surface under LAD estimation. In contrast to the standard optimal mixture designs for least squares estimation, the efficient designs may contain elementary centroid design points of degree higher than two. An example of a portfolio with five assets illustrates the use of the proposed efficient mixture designs in determining the marginal risk contributions of individual assets in the portfolio.
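LAD estimation itself can be carried out by linear programming. The sketch below fits a plain LAD regression (not a mixture design) on simulated data, only to show the estimation technique the abstract relies on.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations regression as a linear program:
    minimize sum(u + v) subject to X b + u - v = y, u >= 0, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 60)
y = 1.0 + 2.0 * x + rng.laplace(0, 0.1, 60)   # Laplace noise favors LAD
X = np.column_stack([np.ones(60), x])
b = lad_fit(X, y)
print(np.round(b, 2))  # intercept and slope near (1, 2)
```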
15.
Edgardo Lorenzo 《统计学通讯:理论与方法》2014,43(21):4514-4518
The mean residual life of a life distribution X with a finite mean is defined by M(t) = E[X − t | X > t] for t ≥ 0. Kochar et al. (2000) provided an estimator of M when it is assumed to be decreasing. They showed that its asymptotic distribution is the same as that of the empirical estimate, but only under very stringent analytic and distributional assumptions. We provide a more general asymptotic theory under much weaker conditions, as well as improved asymptotic confidence bands.
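The unrestricted empirical estimate of M is straightforward. The exponential example below is a sanity check: by the memoryless property, M(t) equals the mean for every t.

```python
import numpy as np

def mean_residual_life(x, t):
    """Empirical mean residual life: the average of (x_i - t) over x_i > t."""
    tail = x[x > t]
    return np.nan if tail.size == 0 else np.mean(tail - t)

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=100_000)
# For the exponential distribution, M(t) = mean = 2 for all t.
print(round(mean_residual_life(x, 0.0), 2), round(mean_residual_life(x, 3.0), 2))
```

The decreasing-MRL estimator studied by Kochar et al. (2000) would additionally enforce monotonicity on this raw estimate.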
16.
17.
Housila P. Singh 《统计学通讯:理论与方法》2017,46(24):12059-12074
The present paper suggests a useful ramification of the unrelated-question randomized response model due to Pal and Singh (2012) [A new unrelated question randomized response model. Statistics 46(1), 99-109] that can be used under any sampling scheme. We show theoretically and numerically that the proposed model is more efficient than the Pal and Singh (2012) model.
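The unrelated-question mechanism can be illustrated with the classical Greenberg-type estimator; the design parameters below are illustrative assumptions, and this is not the Pal and Singh (2012) model itself.

```python
import numpy as np

rng = np.random.default_rng(5)
n, P, pi_u = 10_000, 0.7, 0.5   # design probability, known unrelated prevalence
pi_s = 0.30                     # true (unknown) sensitive-trait prevalence

# Each respondent answers the sensitive question with probability P,
# otherwise the innocuous unrelated question with known prevalence pi_u.
asks_sensitive = rng.random(n) < P
answers = np.where(asks_sensitive, rng.random(n) < pi_s, rng.random(n) < pi_u)

# E[lam_hat] = P * pi_s + (1 - P) * pi_u, so invert for pi_s.
lam_hat = answers.mean()
pi_s_hat = (lam_hat - (1 - P) * pi_u) / P
print(round(pi_s_hat, 2))
```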
18.
This article considers several estimators of the ridge parameter k for the multinomial logit model, based on the work of Khalaf and Shukur (2005), Alkhamisi et al. (2006), and Muniz et al. (2012). The mean square error (MSE) is the performance criterion. A simulation study is conducted to compare the performance of the estimators. Based on the simulation study, we find that increasing the correlation between the independent variables and the number of regressors has a negative effect on the MSE, whereas increasing the sample size decreases the MSE even when the correlation between the independent variables is large. Based on the minimum MSE criterion, some useful estimators of the ridge parameter k are recommended to practitioners.
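As a simpler analogue of these multinomial-logit ridge estimators, the Hoerl-Kennard-Baldwin choice of k in a linear model shows the general recipe: plug an unrestricted fit into a formula for k, then shrink. Data and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 4
# Correlated regressors, the setting where ridge estimation pays off.
z = rng.normal(size=(n, 1))
X = 0.9 * z + 0.1 * rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5, 2.0])
y = X @ beta + rng.normal(size=n)

# OLS fit, then the Hoerl-Kennard-Baldwin ridge parameter k = p*s2 / (b'b).
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.sum((y - X @ b_ols) ** 2) / (n - p)
k = p * sigma2 / (b_ols @ b_ols)

# Ridge estimate: (X'X + k I)^{-1} X'y shrinks the OLS coefficients.
b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
print(round(k, 3), np.round(b_ridge, 2))
```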
19.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE), based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). This new information measure is "length-biased" and shift dependent, assigning larger weights to larger values of the random variable. Properties of the WCRE and a formula relating the WCRE to the weighted Shannon entropy are given, and related applications in reliability theory are covered. Our results include inequalities and various bounds on the WCRE. The conditional WCRE and some of its properties are discussed. The empirical WCRE is proposed as an estimator of this new information measure, and its strong consistency and a central limit theorem are established.
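A natural plug-in estimate of the WCRE, the negative integral of x * Fbar(x) * log Fbar(x), replaces the survival function Fbar with its empirical version, which is piecewise constant between order statistics. This sketch is a plausible plug-in form, not necessarily the article's exact proposal; for U(0,1) the population value works out to 5/36, which serves as a check.

```python
import numpy as np

def empirical_wcre(x):
    """Plug-in estimate of the WCRE: minus the integral of
    x * Fbar(x) * log(Fbar(x)) with the empirical survival function Fbar."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    fbar = (n - np.arange(1, n)) / n            # Fbar on [x_(i), x_(i+1))
    seg = (xs[1:] ** 2 - xs[:-1] ** 2) / 2.0    # integral of x over each interval
    return -np.sum(seg * fbar * np.log(fbar))

rng = np.random.default_rng(7)
w = empirical_wcre(rng.uniform(0, 1, 5000))
print(round(w, 3))  # population WCRE for U(0,1) is 5/36, about 0.139
```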
20.
Recently, Koyuncu et al. (2013) proposed an exponential-type estimator to improve the efficiency of the mean estimator based on the randomized response technique. In this article, we propose an improved exponential-type estimator that is more efficient than the Koyuncu et al. (2013) estimator, which in turn was shown to be more efficient than the usual mean estimator, the ratio estimator, the regression estimator, and the Gupta et al. (2012) estimator. Under the simple random sampling without replacement (SRSWOR) scheme, bias and mean square error expressions for the proposed estimator are obtained up to the first order of approximation, and comparisons are made with the Koyuncu et al. (2013) estimator. A simulation study is used to compare the performance of the two estimators, and the theoretical findings are also supported by a numerical example with real data. We also show how to extend the proposed estimator to the case where more than one auxiliary variable is available.
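The exponential ratio-type form such estimators build on (as in Bahl and Tuteja, 1991) is easy to simulate under SRSWOR. The population below is synthetic, and the estimator shown is the classical form, not Koyuncu et al.'s exact proposal.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical finite population with a positively correlated auxiliary variable.
N, n = 5000, 100
aux = rng.uniform(10, 20, N)
study = 2.0 * aux + rng.normal(0, 4, N)
X_bar, Y_bar = aux.mean(), study.mean()

err_mean, err_exp = [], []
for _ in range(2000):
    idx = rng.choice(N, n, replace=False)        # SRSWOR draw
    y_bar, x_bar = study[idx].mean(), aux[idx].mean()
    # Classical exponential ratio-type estimator (Bahl-Tuteja form).
    y_exp = y_bar * np.exp((X_bar - x_bar) / (X_bar + x_bar))
    err_mean.append(y_bar - Y_bar)
    err_exp.append(y_exp - Y_bar)

mse_mean = np.mean(np.square(err_mean))
mse_exp = np.mean(np.square(err_exp))
print(round(mse_mean, 3), round(mse_exp, 3))  # exponential form has smaller MSE here
```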