Found 20 similar articles (search time: 31 ms)
1.
The two-period crossover design is one of the most commonly used designs in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect. It is known that ignoring the carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages: first, a preliminary test is conducted on the carryover effect; if the carryover effect is significant, the analysis is based only on data from period one, otherwise on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), which uses a mixture of two models: one with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach to the two-period crossover design that is also based on a mixture model, but uses the commonly recommended Zellner–Siow g-prior. We provide simulation studies and a real-data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
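The two-stage logic of the classical approach can be sketched as follows. This is a minimal illustration, not the paper's method: the simulated 2x2 crossover data, the group means, and the 0.10 preliminary-test level are all assumptions made here for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 2x2 crossover data: rows = subjects, columns = (period 1, period 2).
# Sequence AB gets treatment A then B; sequence BA the reverse. Means are invented.
ab = rng.normal([12.0, 10.0], 1.0, size=(40, 2))
ba = rng.normal([10.0, 12.0], 1.0, size=(40, 2))

# Stage 1: preliminary test for carryover, based on within-subject totals.
_, p_carry = stats.ttest_ind(ab.sum(axis=1), ba.sum(axis=1))

if p_carry < 0.10:
    # Carryover deemed significant: use period-1 (parallel-group) data only.
    effect = ab[:, 0].mean() - ba[:, 0].mean()
else:
    # Otherwise: usual crossover estimate from within-subject A - B differences,
    # which cancels any common period effect.
    d_ab = ab[:, 0] - ab[:, 1]
    d_ba = ba[:, 1] - ba[:, 0]
    effect = 0.5 * (d_ab.mean() + d_ba.mean())
```

With these assumed means the estimated A - B effect is close to 2 under either branch, which is exactly why the preliminary test matters only when a real carryover effect is present.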
2.
The Hosmer–Lemeshow test is a widely used method for evaluating the goodness of fit of logistic regression models, but, like other chi-square tests, its power is strongly influenced by the sample size. Paul, Pennell, and Lemeshow (2013) considered using a large number of groups for large data sets in order to standardize the power, but simulations show that their method performs poorly for some models; in addition, it does not work when the sample size is larger than 25,000. In the present paper, we propose a modified Hosmer–Lemeshow test based on estimation and standardization of the distribution parameter of the Hosmer–Lemeshow statistic. We provide a mathematical derivation for obtaining the critical value and power of our test. Simulations show that our method satisfactorily standardizes the power of the Hosmer–Lemeshow test; it is especially recommended for sufficiently large data sets, for which the power is quite stable. A bank marketing data set is also analyzed for comparison with existing methods.
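The grouped chi-square statistic under discussion can be sketched as below. The grouping into ten risk groups and the df = g - 2 reference follow the classical Hosmer–Lemeshow recipe; the simulated logistic data are purely illustrative and not from the paper.

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer–Lemeshow chi-square over g groups formed by sorted fitted probabilities."""
    order = np.argsort(p_hat)
    chi2 = 0.0
    for idx in np.array_split(order, g):          # near-equal group sizes
        n_k = len(idx)
        obs = y[idx].sum()                        # observed events in the group
        exp = p_hat[idx].sum()                    # expected events in the group
        p_bar = exp / n_k
        chi2 += (obs - exp) ** 2 / (n_k * p_bar * (1.0 - p_bar))
    return chi2, stats.chi2.sf(chi2, g - 2)       # classical df = g - 2 reference

# Illustrative data: well-calibrated probabilities from an assumed logistic model.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
p = 1.0 / (1.0 + np.exp(-(0.5 * x - 0.2)))
y = rng.binomial(1, p)
chi2, p_value = hosmer_lemeshow(y, p, g=10)
```

Re-running this with larger n shows the sample-size sensitivity the abstract refers to: under even mild miscalibration, chi2 grows with n and the test eventually rejects.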
3.
Pao-Sheng Shen, Communications in Statistics - Theory and Methods, 2017, 46(4): 1916-1926
A complication in analyzing tumor data is that tumors detected in a screening program tend to be slowly progressing tumors; this is the left-truncated sampling inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh's (2008) approach can be extended to the case in which each subject has a subject-specific growth function. In this article, we present a general framework, under the linear transformation model, for the analysis of data from cancer screening studies. We develop estimation procedures under the linear transformation model, which includes Cox's model as a special case. A simulation study is conducted to demonstrate the potential usefulness of the proposed estimators.
4.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys and probability matching priors. The intervals obtained from the Jeffreys prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test, using Bayesian methods, are compared to the tests used by Krishnamoorthy and Thomson (2004); the Jeffreys, probability matching, and two other priors are used.
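For a single Poisson mean, the Jeffreys-prior interval mentioned above has a simple closed form: the prior pi(lam) proportional to lam^(-1/2) combined with n iid counts gives a Gamma(sum(x) + 1/2, rate n) posterior. A minimal sketch, with invented data for illustration:

```python
import numpy as np
from scipy import stats

def jeffreys_poisson_interval(x, level=0.95):
    """Equal-tailed credible interval for a Poisson mean under the Jeffreys prior.

    Prior: pi(lam) ∝ lam**(-1/2).  With iid counts x_1..x_n, the posterior is
    Gamma(shape = sum(x) + 1/2, rate = n).
    """
    x = np.asarray(x)
    shape, rate = x.sum() + 0.5, len(x)
    a = (1.0 - level) / 2.0
    return (stats.gamma.ppf(a, shape, scale=1.0 / rate),
            stats.gamma.ppf(1.0 - a, shape, scale=1.0 / rate))

# Invented counts, for illustration only.
lo, hi = jeffreys_poisson_interval([3, 5, 2, 4, 6, 3, 4, 5, 2, 4])
```

The interval brackets the posterior mean (sum(x) + 1/2)/n, here about 3.85; extending this to linear functions of several Poisson parameters is what requires the weighted Monte Carlo machinery described in the abstract.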
5.
Sample size estimation for comparing rates of change in two-arm repeated-measurement studies has been investigated by many authors. In contrast, the literature has paid relatively little attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004) and Zhang and Ahn (2013) presented sample size formulas for comparing rates of change and time-averaged responses in multi-arm trials using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed sample size formula under a wide range of design configurations. Simulation results suggest that the empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an epileptic clinical trial example.
6.
T. Imada, Communications in Statistics - Theory and Methods, 2017, 46(7): 3186-3199
In this study, we discuss multiple comparison procedures for detecting differences among a sequence of normal means under an order restriction. Lee and Spurrier (1995) proposed a multiple comparison procedure that tests the difference between two adjacent means using the difference of sample means. We propose a modification of Lee and Spurrier's (1995) procedure that uses isotonic regression estimators instead of sample means. We determine the critical value for pairwise comparisons at a specified significance level and formulate the power of the test. Finally, we give numerical examples of critical values and power in order to compare our procedure with that of Lee and Spurrier (1995).
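Isotonic regression estimators of ordered means are typically computed with the pool-adjacent-violators algorithm (PAVA). The sketch below is a generic textbook PAVA, not the authors' procedure, and the input values are invented:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of non-decreasing means."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block holds [pooled mean, pooled weight, number of points pooled].
    blocks = [[yi, wi, 1] for yi, wi in zip(y, w)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:       # violation: merge the two blocks
            m1, w1, c1 = blocks[i]
            m2, w2, c2 = blocks[i + 1]
            blocks[i] = [(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, c1 + c2]
            del blocks[i + 1]
            i = max(i - 1, 0)                     # merged block may violate backwards
        else:
            i += 1
    return np.concatenate([np.full(c, m) for m, _, c in blocks])

fitted = pava([1.0, 3.0, 2.0, 4.0])   # -> [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled into a common value 2.5; adjacent comparisons would then be based on these pooled estimates rather than the raw sample means.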
7.
8.
This paper treats the problem of stochastic comparisons for the extreme order statistics arising from heterogeneous beta distributions. Some sufficient conditions, stated in terms of majorization-type partial orders, are provided for comparing the extreme order statistics under various magnitude orderings, including the likelihood ratio order, the reversed hazard rate order, the usual stochastic order, and the usual multivariate stochastic order. The results established here strengthen and extend those of Kochar and Xu (2007), Mao and Hu (2010), Balakrishnan et al. (2014), and Torrado (2015). A real application in system assembly and some numerical examples are also presented to illustrate the theoretical results.
9.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for "slice-comonotonic" functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation, and we give several sets of assumptions under which a Fubini theorem holds in this framework.
10.
This article proposes new symmetric and asymmetric distributions, applying methods analogous to those in Kim (2005) and Arnold et al. (2009) to the exponentiated normal distribution studied in Durrans (1992), which we call the power-normal (PN) distribution. The proposed bimodal extension, the main focus of the paper, is called the bimodal power-normal model and is denoted BPN(α), where α is the asymmetry parameter. We give some properties, including moments and maximum likelihood estimation. Two important features of the proposed model are that its normalizing constant has a simple closed form and that its Fisher information matrix is nonsingular, guaranteeing the large-sample properties of the maximum likelihood estimators. Finally, simulation studies and real applications show that the proposed model performs well in both situations.
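The base PN density of Durrans (1992) is f(x) = α φ(x) Φ(x)^(α-1), with CDF Φ(x)^α; the bimodal BPN extension itself is not reproduced here. A quick numerical sanity check that the PN density integrates to one (α = 2.5 is an arbitrary choice):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def pn_pdf(x, alpha):
    """Power-normal density: alpha * phi(x) * Phi(x)**(alpha - 1)."""
    return alpha * stats.norm.pdf(x) * stats.norm.cdf(x) ** (alpha - 1.0)

# The density should integrate to 1 for any alpha > 0.
total, _ = quad(pn_pdf, -np.inf, np.inf, args=(2.5,))
```

Because the CDF is simply Φ(x)^α, sampling is also immediate by inversion: X = Φ^(-1)(U^(1/α)) for U uniform on (0, 1).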
11.
Techniques used in variability assessment are used to draw conclusions regarding the "spread" or uniformity of data curves. Owing to their limitations, these techniques are not adequate when the data manifest multiple peaks. Examples of such manifestations (in three-dimensional space) include under-foot pressure distributions recorded for different types of footwear (Becerro-de-Bengoa-Vallejo et al., 2014; Cibulka et al., 1994; Davies et al., 2003), surface textures and interfaces designed to affect friction, and molecular surface structures such as viral epitopes (Torras and Garcia-Valls, 2004; Pacejka, 1997; Fustaffson, 1997). This article proposes a technique for generating a single variable, Λ, that quantifies the uniformity of such surfaces. We define and validate this technique using several mathematical and graphical models.
12.
Amir T. Payandeh Najafabadi, Fatemeh Atatalab, Maryam Omidi Najafabadi, Communications in Statistics - Theory and Methods, 2017, 46(1): 415-426
Credibility formulas have been developed in many fields of actuarial science. Building on Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson-gamma distributions with respect to several loss functions. A comparative study is given.
13.
Haifeng Xu, Communications in Statistics - Theory and Methods, 2017, 46(7): 3123-3134
In this article, assuming that the error terms follow a multivariate t distribution, we derive exact formulae for the moments of the heterogeneous preliminary test (HPT) estimator proposed by Xu (2012b). We also carry out numerical evaluations to investigate the mean squared error (MSE) performance of the HPT estimator and compare it with those of the feasible ridge regression (FRR) estimator and the usual ordinary least squares (OLS) estimator. Further, we derive the optimal critical values of the preliminary F test for the HPT estimator using the minimax regret criterion proposed by Sawa and Hiromatsu (1973). Our results show that (1) the optimal significance level α* increases as the degrees of freedom ν0 of the multivariate t distribution increase; and (2) when ν0 ≥ 10, the value of α* is close to that in the normal-error case.
14.
It is common to test the null hypothesis that two samples were drawn from identical distributions, and the Smirnov (often called Kolmogorov–Smirnov) test is conventionally applied. We present simulation results comparing the performance of this test with three recently introduced alternatives, considering both continuous and discrete data. We show that the alternative methods preserve the type I error at the nominal level as well as the Smirnov test does, but offer superior power. We argue for the routine replacement of the Smirnov test by the modified Baumgartner test of Murakami (2006) or by the test proposed by Zhang (2006).
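The baseline Smirnov comparison is available directly in scipy; the data below are an assumed location-shift alternative chosen here for illustration, not taken from the paper's simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=200)
y = rng.normal(0.3, 1.0, size=200)   # assumed location-shift alternative

# Classical two-sample Smirnov test: D = sup |F1 - F2| over the pooled sample.
res = stats.ks_2samp(x, y)
d_stat, p_value = res.statistic, res.pvalue
```

The alternatives discussed in the abstract (Murakami's modified Baumgartner test, Zhang's test) replace the supremum statistic with weighted or likelihood-based functionals of the empirical CDFs, which is where their power advantage comes from.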
15.
Qunying Wu, Communications in Statistics - Theory and Methods, 2017, 46(8): 3667-3675
Let X1, X2, … be a sequence of stationary standardized Gaussian random fields. An almost sure limit theorem for the maxima of stationary Gaussian random fields is established. Our results extend and improve those of Csáki and Gonchigdanzan (2002) and Choi (2010).
16.
This article revisits the optimal allocation of coverage limits for two independent random losses. Under some regularity conditions on the two probability density functions concerned, we establish a necessary and sufficient condition for the existence of the optimal allocation of coverage limits, and derive the optimal allocation whenever it exists. The results supplement Lu and Meng (2011, Proposition 5.2) and Hu and Wang (2014, Theorem 5.1).
17.
Yan Fan, Journal of Applied Statistics, 2016, 43(14): 2595-2607
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or drawing on different experience. The model selection problem is therefore important because of its bearing on subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's tests [26] and Shi's non-degenerate tests [21], suffer from difficulties with variance estimation and from departures of the likelihood ratios from normality. To circumvent these difficulties, we propose an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed to enhance the performance of the ELR test. A simulation study and a real-data analysis illustrate the performance of the proposed ELR test.
18.
In this article, we discuss the linear kernel quantile estimator proposed by Parzen (1979). We establish a Bahadur representation, in the sense of almost sure convergence with rate log^(-α) n, for S-mixing random variable sequences, a dependence condition proposed by Berkes (2009). We also obtain the strong consistency of this estimator and its convergence rate.
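Parzen's (1979) kernel quantile estimator is a kernel-weighted average of order statistics. The sketch below uses a Gaussian kernel and a fixed bandwidth; both are illustrative choices made here, and the boundary renormalization is a simple convenience rather than anything from the article:

```python
import numpy as np
from scipy import stats

def kernel_quantile(x, p, h=0.05):
    """Kernel-weighted average of order statistics, in the spirit of Parzen (1979).

    The weight of the i-th order statistic is the Gaussian-kernel mass of the
    interval ((i-1)/n, i/n] around p; weights are renormalized near 0 and 1.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    grid = np.arange(1, n + 1) / n
    w = (stats.norm.cdf(grid, loc=p, scale=h)
         - stats.norm.cdf(grid - 1.0 / n, loc=p, scale=h))
    return float(np.dot(w / w.sum(), xs))

rng = np.random.default_rng(3)
q_med = kernel_quantile(rng.normal(size=5000), 0.5)   # should be near 0
```

Smoothing across neighboring order statistics reduces the variance of the raw sample quantile; the Bahadur representation studied in the article quantifies how closely such an estimator tracks the true quantile under S-mixing dependence.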
19.
This article proposes a new likelihood-based panel cointegration rank test that extends the test of Örsal and Droge (2014) (henceforth, the panel SL test) to cross-sectionally dependent panels. The dependence is modelled by unobserved common factors that affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004), and the cointegrating rank of the defactored data is then tested by the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.
20.
In analogy with the weighted Shannon entropy proposed by Belis and Guiasu (1968) and Guiasu (1986), we introduce a new information measure called the weighted cumulative residual entropy (WCRE). It is based on the cumulative residual entropy (CRE) introduced by Rao et al. (2004). This new information measure is "length-biased" and shift dependent, assigning larger weights to larger values of the random variable. The properties of the WCRE and a formula relating the WCRE to the weighted Shannon entropy are given, and related results in reliability theory are covered. Our results include inequalities and various bounds on the WCRE. The conditional WCRE and some of its properties are discussed. The empirical WCRE is proposed as an estimator of this new information measure, and strong consistency and a central limit theorem are established.
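For a nonnegative sample, an empirical WCRE can be computed by piecewise integration of -t S(t) log S(t) against the empirical survival function S. The sketch below follows that definition; the exponential sample is an illustrative choice, for which the population value is the known integral of t^2 e^(-t), namely 2.

```python
import numpy as np

def empirical_wcre(x):
    """Empirical weighted cumulative residual entropy for a nonnegative sample:
    WCRE = -integral of t * S(t) * log S(t) dt, with S the empirical survival."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    s = (n - np.arange(1, n)) / n                 # S(t) on [x_(i), x_(i+1)), i = 1..n-1
    seg = (xs[1:] ** 2 - xs[:-1] ** 2) / 2.0      # integral of t dt over each interval
    return float(-np.sum(seg * s * np.log(s)))

rng = np.random.default_rng(4)
w = empirical_wcre(rng.exponential(1.0, size=20000))   # population WCRE is 2
```

The factor t in the integrand is exactly the "length-biased" weighting described above: compared with the plain CRE, tail values of the random variable contribute proportionally more.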