Similar literature
20 similar documents found (search time: 0 ms)
1.
2.
A generalized Holm’s procedure is proposed that can reject several null hypotheses at each step and strongly controls the family-wise error rate regardless of the dependence structure of the individual test statistics. The new procedure is more powerful than Holm’s procedure when the number of rejections m (with m > 0) is prespecified before testing.
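For reference, the classical Holm step-down procedure that this work generalizes can be sketched as follows; the function name and interface are illustrative, not taken from the paper.

```python
def holm(pvalues, alpha=0.05):
    """Classical Holm step-down procedure.

    Returns a list of booleans, True where the corresponding null
    hypothesis is rejected. Controls the family-wise error rate at
    level alpha under arbitrary dependence of the test statistics.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # Compare the (step+1)-th smallest p-value against alpha/(m - step).
        if pvalues[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # the step-down procedure stops at the first non-rejection
    return reject
```

The generalized procedure of the abstract differs in that it may reject several hypotheses per step when the number of rejections is prespecified.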

3.
In this paper we review results on record values for some well-known probability density functions and, based on m records from Kumaraswamy’s distribution, obtain estimators for the two parameters and for the future s-th record value. These estimates are derived using maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are treated as random variables, and estimators for the parameters and for the future s-th record value, given m observed past record values, are obtained under the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with real and computer-generated data.
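The two loss functions mentioned lead to different Bayes estimates. A minimal sketch, assuming a generic posterior sample of the unknown quantity (the function names and the Monte Carlo approach are illustrative, not the paper's derivations):

```python
import math

def sel_estimate(posterior_draws):
    """Bayes estimate under squared error loss: the posterior mean."""
    return sum(posterior_draws) / len(posterior_draws)

def linex_estimate(posterior_draws, a):
    """Bayes estimate under LINEX loss L(d) = exp(a*d) - a*d - 1:
    the estimator is -(1/a) * log E[exp(-a * theta)], approximated
    here by a Monte Carlo average over posterior draws."""
    n = len(posterior_draws)
    m = sum(math.exp(-a * t) for t in posterior_draws) / n
    return -math.log(m) / a
```

For a > 0, LINEX penalizes overestimation more heavily, so the LINEX estimate sits below the posterior mean.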

4.
The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. We therefore used simulation to compare methods for analyzing repeated binary data when the outcome at the study endpoint is of interest. The methods compared were the chi-squared test, Fisher's exact test, covariate-adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate-adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate-adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All of these methods kept the type I error rate close to the nominal level. Covariate-adjusted methods improved power relative to the unadjusted methods because of larger treatment effect estimates, especially when the correlation between baseline and outcome was strong, even though the standard errors visibly increased. Results of the chi-squared test were identical to those of the unadjusted logistic regression. Fisher's exact test was the most conservative with respect to the type I error rate and also had the lowest power. Without missing data, there was no gain from a repeated-measures approach over a simple logistic regression at the final time point. Analysis of five phase III diabetes trials of the same compound was consistent with the simulation findings. Covariate-adjusted analysis is therefore recommended for repeated binary data when the study endpoint is of interest. Copyright © 2015 John Wiley & Sons, Ltd.

5.
6.
In this article, we propose a family of bounded-influence robust estimates for the parametric and non-parametric components of a generalized partially linear mixed model with censored responses and missing covariates. The asymptotic properties of the proposed estimates are investigated. The estimates are obtained with a Monte Carlo expectation–maximization algorithm, and an approximate method that greatly reduces the computational time is also proposed. A simulation study shows that the two approaches perform similarly in terms of bias and mean squared error. The analysis is illustrated with a study of the effect of environmental factors on phytoplankton cell counts.

7.
Abstract

In some clinical, environmental, or economic studies, researchers are interested in a semi-continuous outcome variable that takes the value zero with a discrete probability and has a continuous distribution over the non-zero values. Owing to the measuring mechanism, some outcomes cannot be fully observed and only an upper bound is recorded. We call such data left-censored: we observe only the maximum of the outcome and an independent censoring variable, together with a censoring indicator. In this article, we introduce a mixture semi-parametric regression model. A parametric model describes the influence of covariates on the discrete probability of the value zero, while a semi-parametric Cox regression model captures the conditional hazard function of the non-zero part of the outcome. The parameters of this mixture model are estimated by a likelihood method in which the infinite-dimensional baseline hazard function is estimated by a step function. We establish identifiability and consistency of the estimators of the model parameters, study their finite-sample behaviour through a simulation study, and illustrate the model on a practical data example.

8.
Partially paired data, with incompleteness in one or both arms, are common in practice. To test equality of the two arm means, practitioners often use only the complete pairs and perform paired tests. Although such tests (referred to as 'naive paired tests') are legitimate, their power may be low because only part of the data is used. The recently proposed 'P-value pooling methods', which combine the P-values from two tests, use all the data, control the type I error reasonably well, and have good power properties. While it is generally believed that P-value pooling methods dominate naive paired tests in power because they use more data, no detailed power comparison has been done. This paper compares the powers of naive paired tests and P-value pooling methods analytically, and our findings are counterintuitive: the P-value pooling methods do not always outperform the naive paired tests. Based on these results, we give guidance on selecting the best test for equality of means with partially paired data.
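One standard way to pool two independent P-values (here from the paired-subset test and the unpaired-remainder test) is Fisher's combination; this is only an illustration of the pooling idea, and the paper's specific pooling rules may differ.

```python
import math

def fisher_pool(p1, p2):
    """Combine two independent p-values with Fisher's method.

    X = -2*(ln p1 + ln p2) is chi-square with 4 df under both nulls;
    for 4 df the survival function has the closed form
    exp(-x/2) * (1 + x/2), so no special-function library is needed.
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)
```

Two moderately small p-values pool into a clearly significant one, which is why such methods can beat a test that discards the unpaired data.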

9.
The procedure suggested by DerSimonian and Laird is the simplest and most commonly used method for fitting the random effects model for meta-analysis. Here it is shown that, unless all studies are of similar size, this method is inefficient for estimating the between-study variance, but remarkably efficient for estimating the treatment effect. If formal inference is restricted to statements about the treatment effect, and the sample size is large, there is little point in implementing more sophisticated methodology. However, it is further demonstrated, for a simple special case, that use of the profile likelihood results in actual coverage probabilities for 95% confidence intervals that are closer to nominal levels for smaller sample sizes. Alternative methods for making inferences about the treatment effect may therefore be preferable if the sample size is small, but the DerSimonian and Laird procedure retains its usefulness for larger samples.
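The DerSimonian–Laird procedure itself is a short moment calculation; a minimal sketch, taking per-study effect estimates and their within-study variances as input:

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird estimator for the random-effects meta-analysis model.

    Returns (tau2, mu): the moment estimate of the between-study
    variance (truncated at zero) and the pooled treatment effect.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight by total (within + between) variance and pool.
    wstar = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(wstar, effects)) / sum(wstar)
    return tau2, mu
```

When the studies show no excess heterogeneity (Q below its degrees of freedom), tau2 is truncated to zero and the estimator reduces to the fixed-effect pooled mean.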

10.
We analyze left-truncated and right-censored (LTRC) data using Aalen’s linear models. The integrated squared error (ISE) is used to select an optimal bandwidth for the weighted least-squares estimator. We also consider a semiparametric approach for the case where the distribution of the left-truncation variable is parameterized. A simulation study investigates the performance of the proposed estimators, and the approaches are illustrated with the Stanford heart transplant data.

11.
When the outcome of interest is semicontinuous and collected longitudinally, efficient testing can be difficult; daily rainfall data, which we use for illustration, exemplify the challenges. Even in the simplest scenario, the popular 'two-part model', which uses correlated random effects to account for both the semicontinuous and the longitudinal character of the data, often requires prohibitively intensive numerical integration and is difficult to interpret. Reducing the data to binary (recoding positive values as one), while relatively straightforward, can entail a substantial loss of power. We propose an alternative: a non-parametric rank test recently introduced for joint longitudinal and survival data. We investigate the potential benefits of this test for semicontinuous longitudinal data with regard to power and computational feasibility.
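The joint longitudinal–survival rank test of the abstract is more involved, but the basic rank idea on a semicontinuous sample can be sketched with a plain Wilcoxon rank-sum using midranks, which handle the heavy tie at zero without discarding the continuous information in the positive values (this sketch is an assumption-laden illustration, not the paper's test):

```python
def midranks(values):
    """Ranks of the values, with ties replaced by midranks
    (important for semicontinuous data with a spike at zero)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2.0 + 1.0  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def rank_sum(x, y):
    """Wilcoxon rank-sum statistic for sample x, ranked jointly with y."""
    ranks = midranks(list(x) + list(y))
    return sum(ranks[: len(x)])
```

In contrast to dichotomization, the positive values here still contribute ordering information to the statistic.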

12.
For stochastic ordering tests for normal distributions, two well-known types of tests exist. One is based on the maximum likelihood ratio principle; the other is the most stringent somewhere most powerful test of Schaafsma and Smid (for a comprehensive treatment see Robertson, Wright and Dykstra (1988), and for the latter test also Shi and Kudo (1987)). All of these tests are in general numerically tedious. Wei and Lachin (1984), and particularly Lachin (1992), formulate a simple and easily computable test; however, it has not been known for which ordered alternatives this test is optimal.

In this paper it is shown that Lachin's procedure is a maxmin test for reasonable subalternatives, provided the covariance matrix has nonnegative row sums. If this property is violated, the procedure can be altered so that the resulting test is again a maxmin test. An example is given where the modified procedure leads to a non-trifling increase in power even in the least favourable case. The fact that Lachin's test and its modified version are maxmin tests on appropriate subalternatives means that they are maxmin tests on subhypotheses that are relevant in practical applications.

13.
14.
We consider model selection based on quantile analysis, with unknown parameters estimated by quantile least squares. We propose a model selection test of the null hypothesis that the competing models are equivalent against the alternative that one model is closer to the true model. Two applications of the proposed test follow: model selection for time series with non-normal innovations, and model selection in forecasting with the NoVaS (normalizing and variance-stabilizing transformation) method. A set of simulation results lends strong support to the results presented in the paper.
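The quantile-based objective underlying such estimation can be illustrated with the check (pinball) loss, whose minimizer is the corresponding quantile; whether this matches the paper's "quantile least squares" criterion exactly is an assumption, so treat this as a generic quantile-fitting sketch.

```python
def check_loss(u, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(data, tau):
    """The tau-quantile minimizes average check loss; here we search
    over the observed values, which always contain a minimizer."""
    return min(data, key=lambda c: sum(check_loss(x - c, tau) for x in data))
```

Replacing squared error with the check loss is what makes the fit target a quantile rather than the mean.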

15.
Nonparametric estimates of the conditional distribution of a response variable given a covariate are important for data exploration. In this article, we propose a nonparametric estimator of the conditional distribution function when the response variable is subject to interval censoring and double truncation. Following the approach of Dehghan and Duchesne (2011), the proposed method adds covariate-dependent weights to the self-consistency equation of Turnbull (1976), which yields a nonparametric estimator. We demonstrate by simulation that the estimator, bootstrap variance estimation, and bandwidth selection all perform well in finite samples.

16.
Simple nonparametric estimates of the conditional distribution of a response variable given a covariate are often useful for data exploration or to help with the specification or validation of a parametric or semi-parametric regression model. In this paper we propose such an estimator for the case where the response variable is interval-censored and the covariate is continuous. Our approach adds covariate-dependent weights to the self-consistency equation proposed by Turnbull (J R Stat Soc Ser B 38:290–295, 1976), which results in an estimator no more difficult to implement than Turnbull's own. We show the convergence of our algorithm and that our estimator reduces to the generalized Kaplan–Meier estimator (Beran, Nonparametric regression with randomly censored survival data, 1981) when the data are either complete or right-censored. We demonstrate by simulation that the estimator, bootstrap variance estimation, and bandwidth selection (by rule of thumb or cross-validation) all perform well in finite samples. We illustrate the method with data from a study of HIV incidence in a group of female sex workers in Kinshasa.
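Turnbull's self-consistency equation is an EM-type fixed-point iteration over a candidate support. A minimal unweighted sketch (the paper's contribution is to insert covariate-dependent weights into this iteration, which is omitted here; the support grid is supplied by the caller for simplicity):

```python
def turnbull(intervals, support, n_iter=200):
    """Unweighted Turnbull self-consistency iteration.

    intervals: list of (L, R) censoring intervals for the observations.
    support:   candidate support points s_j for the probability mass.
    Returns the estimated mass p_j at each support point: each update
    averages, over observations, the posterior probability that the
    unobserved value equals s_j given that it lies in [L_i, R_i].
    """
    m = len(support)
    p = [1.0 / m] * m
    member = [[1.0 if L <= s <= R else 0.0 for s in support]
              for (L, R) in intervals]
    for _ in range(n_iter):
        new = [0.0] * m
        for row in member:
            denom = sum(pj * aij for pj, aij in zip(p, row))
            for j in range(m):
                new[j] += p[j] * row[j] / denom
        p = [v / len(intervals) for v in new]
    return p
```

With exact (degenerate-interval) observations the iteration returns the empirical distribution in one step, consistent with the reduction to Kaplan–Meier-type estimators mentioned in the abstract.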

17.
In the present paper, we introduce and study a class of distributions with a linear mean residual quantile function. Various distributional properties and reliability characteristics of the class are studied, and some characterizations are presented. We then generalize the class using relationships between various quantile-based reliability measures. The method of L-moments is employed to estimate the parameters of the class, and the proposed class is applied to a real data set.
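The method of L-moments matches sample L-moments to their theoretical counterparts. The first two sample L-moments, via probability-weighted moments, can be sketched as follows (a generic computation, not the paper's specific estimating equations):

```python
def l_moments(sample):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean of the order statistics,
    b1 = (1/n) * sum_i ((i-1)/(n-1)) * x_(i)   (i is 1-based),
    l1 = b0 (the mean), l2 = 2*b1 - b0 (a scale measure)."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n  # 0-based i
    return b0, 2.0 * b1 - b0
```

l2 equals half the sample Gini mean difference, which makes it a robust alternative to the standard deviation for parameter matching.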

18.
ABSTRACT

Considerable effort has been devoted to the development of confidence intervals for process capability indices (PCIs) based on the sampling distribution of the PCI or of a transformed PCI, but there is still no definitive way to construct a closed interval for a PCI. The aim of this study is to develop closed intervals for the PCIs Cpu, Cpl, and Spk based on Boole's inequality and De Morgan's laws. The relationships between sample size, significance level, and the confidence intervals of Cpu, Cpl, and Spk are investigated. A testing model for interval estimation of these PCIs is then built as a tool for measuring the quality performance of a product. Finally, an applied example demonstrates the effectiveness and applicability of the proposed method and the testing model.
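The point estimators of the three indices follow standard textbook definitions; a minimal sketch (the interval construction via Boole's inequality is the paper's contribution and is not reproduced here):

```python
from statistics import NormalDist

def cpu(mu, sigma, usl):
    """Upper capability index: (USL - mu) / (3*sigma)."""
    return (usl - mu) / (3.0 * sigma)

def cpl(mu, sigma, lsl):
    """Lower capability index: (mu - LSL) / (3*sigma)."""
    return (mu - lsl) / (3.0 * sigma)

def spk(mu, sigma, lsl, usl):
    """Boyles' yield-based index:
    Spk = (1/3) * Phi^{-1}( 0.5*Phi(3*Cpu) + 0.5*Phi(3*Cpl) )."""
    nd = NormalDist()
    p = 0.5 * nd.cdf(3.0 * cpu(mu, sigma, usl)) \
        + 0.5 * nd.cdf(3.0 * cpl(mu, sigma, lsl))
    return nd.inv_cdf(p) / 3.0
```

For a centered process with specification limits three sigma from the mean, all three indices equal 1, the conventional minimum-capability benchmark.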

19.
Statistical Methods & Applications - Benford’s law has become a prevalent tool for fraud and anomaly detection. It examines the frequencies of the leading digits of numbers in a collection...
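The expected leading-digit frequencies under Benford's law have a simple closed form, against which observed digit counts are compared:

```python
import math

def benford_probs():
    """Expected leading-digit frequencies under Benford's law:
    P(d) = log10(1 + 1/d) for d = 1, ..., 9."""
    return {d: math.log10(1.0 + 1.0 / d) for d in range(1, 10)}
```

The digit 1 leads about 30.1% of the time while 9 leads only about 4.6%, and the nine probabilities sum to 1 since the product of (d+1)/d telescopes to 10.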

20.
An expression for Fisher's observed information matrix is given under type I censoring for any location-scale distribution, under mild regularity requirements. It is illustrated on a data set that has been analyzed by several authors.
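As the simplest illustrative special case (not the paper's general location-scale formula), the observed information for the rate of an exponential sample under type I censoring at time tau reduces to a one-line expression:

```python
def exp_mle(times, tau):
    """MLE of the exponential rate under type I censoring at tau:
    number of observed failures divided by total time on test."""
    r = sum(1 for t in times if t < tau)
    total = sum(min(t, tau) for t in times)
    return r / total

def exp_observed_info(times, tau, rate):
    """Observed Fisher information for the rate: the log-likelihood is
    r*log(rate) - rate*T (r failures, T total time on test), so the
    negative second derivative is r / rate**2."""
    r = sum(1 for t in times if t < tau)
    return r / rate ** 2
```

The censored observations contribute to the total time on test but not to the failure count, which is exactly how type I censoring enters the information.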


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号