Similar documents
 20 similar documents found (search time: 826 ms)
1.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double‐blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.

2.
When there are more than two treatments under comparison, we may consider the use of the incomplete block crossover design (IBCD) to save the number of patients needed for a parallel groups design and reduce the duration of a crossover trial. We develop an asymptotic procedure for simultaneously testing equality of two treatments versus a control treatment (or placebo) in frequency data under the IBCD with two periods. We derive a sample size calculation procedure for the desired power of detecting the given treatment effects at a nominal level and suggest a simple ad hoc adjustment procedure to improve the accuracy of the sample size determination when the resulting minimum required number of patients is not large. We employ Monte Carlo simulation to evaluate the finite-sample performance of the proposed test, the accuracy of the sample size calculation procedure, and that with the simple ad hoc adjustment suggested here. We use data taken from a crossover trial comparing the number of exacerbations between using salbutamol or salmeterol and a placebo in asthma patients to illustrate the sample size calculation procedure.

3.
The central limit theorem says that, provided an estimator fulfills certain weak conditions, the sampling distribution of the estimator converges to normality for reasonably large sample sizes. We propose a procedure to find out what a "reasonably large sample size" is. The procedure is based on the properties of Gini's mean difference decomposition. We show the results of implementing the procedure on simulated datasets and on data from the German Socio-economic Panel.
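The question above — how large a sample must be before the normal approximation is trustworthy — can be probed empirically with a minimal simulation sketch. This is illustrative only: it monitors the skewness of the simulated sampling distribution of the mean for a skewed (exponential) population, rather than the authors' Gini mean difference decomposition.

```python
# Illustrative sketch: how close is the sampling distribution of the mean
# to normal at a given sample size n? We simulate many samples from a
# skewed exponential population and measure the skewness of the resulting
# distribution of means (zero skewness = symmetric, normal-like).
import random
import statistics

def sampling_distribution_skewness(n, reps=2000, seed=1):
    """Skewness of the simulated sampling distribution of the mean
    for samples of size n from an Exponential(1) population."""
    rng = random.Random(seed)
    means = []
    for _ in range(reps):
        sample = [rng.expovariate(1.0) for _ in range(n)]
        means.append(sum(sample) / n)
    m = statistics.fmean(means)
    s = statistics.pstdev(means)
    return sum((x - m) ** 3 for x in means) / (reps * s ** 3)

# For the exponential, the skewness of the mean shrinks like 2/sqrt(n),
# so larger n brings the sampling distribution closer to normal.
skew_small = abs(sampling_distribution_skewness(5))
skew_large = abs(sampling_distribution_skewness(200))
```

Comparing `skew_small` and `skew_large` shows the convergence directly; a threshold on such a criterion is one simple way to operationalize "reasonably large".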

4.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because large sample sizes are often required in group sequential tests. In order to reduce the sample size at each stage, we construct repeated confidence boundaries which enable us to find which of the two treatments is the more effective at an early stage. We then use recursive numerical-integration formulae to determine the sample size at the intermediate stages. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.

5.
We present a simple but effective procedure for determining whether a reasonably large sample comes from a stable population against the alternative that it comes from a population with finite higher moments. The procedure uses the fact that a stable population sample has moments of the fourth and sixth order whose magnitudes increase very rapidly as the sample size increases. This procedure shows convincingly that stock returns, when taken as a group, do not come from stable populations. Even for individual stocks, our results show that the stable-population-model null hypothesis can be rejected for more than 95% of the stocks.
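The diagnostic logic above can be seen in a minimal sketch, under the assumption that a Cauchy sample (a stable law without finite moments) stands in for the stable population; the paper's actual test statistic and cutoffs are of course more refined.

```python
# Illustrative sketch: sample fourth moments explode for a stable-law
# (here, Cauchy) sample as n grows, while for a finite-moment population
# (here, normal) they stabilize near the theoretical value (3 for N(0,1)).
import random

def sample_fourth_moment(data):
    n = len(data)
    m = sum(data) / n
    return sum((x - m) ** 4 for x in data) / n

rng = random.Random(42)
# A ratio of two independent standard normals is standard Cauchy.
cauchy = [rng.gauss(0, 1) / rng.gauss(0, 1) for _ in range(20000)]
normal = [rng.gauss(0, 1) for _ in range(20000)]

m4_normal = sample_fourth_moment(normal)   # stays near 3
m4_cauchy = sample_fourth_moment(cauchy)   # orders of magnitude larger
```

Tracking such moments across increasing subsample sizes is the intuition behind the procedure: stability of the moment sequence points to finite higher moments, runaway growth to a stable population.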

6.
In this paper we consider the problem of unbiased estimation of the distribution function of an exponential population using order statistics based on a random sample. We present a (unique) unbiased estimator based on a single, say ith, order statistic and study some properties of the estimator for i = 2. We also indicate how this estimator can be utilized to obtain unbiased estimators when a few selected order statistics are available, as well as when the sample is selected following an alternative sampling procedure known as ranked set sampling. It is further proved that for a ranked set sample of size two, the proposed estimator is uniformly better than the conventional nonparametric unbiased estimator; further, for a general sample size, a modified ranked set sampling procedure provides an unbiased estimator uniformly better than the conventional nonparametric unbiased estimator based on the usual ranked set sampling procedure.

7.
We investigate a sequential procedure for comparing two treatments in a binomial clinical trial. The procedure uses play-the-winner sampling with termination as soon as the absolute difference in the number of successes of the two treatments reaches a critical value. The important aspect of our procedure is that the critical value is modified as the experiment progresses. Numerical results are given which show that this procedure is preferred to all other existing procedures on the basis of the sample size on the poorer treatment and also on the basis of total sample size.
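The sampling scheme described above can be sketched as follows. Note one deliberate simplification: the critical value here is held fixed, whereas the paper's contribution is precisely to modify it as the experiment progresses.

```python
# Illustrative sketch of play-the-winner sampling with an
# absolute-difference stopping rule (fixed critical value; the paper's
# procedure adapts this value during the trial).
import random

def play_the_winner(p_a, p_b, critical=5, seed=7, max_n=10000):
    """Return (winner, n_a, n_b). A success keeps the current arm;
    a failure switches arms. Stop once |successes_A - successes_B|
    reaches `critical` (or max_n trials, as a guard)."""
    rng = random.Random(seed)
    succ = {"A": 0, "B": 0}
    n = {"A": 0, "B": 0}
    p = {"A": p_a, "B": p_b}
    arm = "A"
    while abs(succ["A"] - succ["B"]) < critical and n["A"] + n["B"] < max_n:
        n[arm] += 1
        if rng.random() < p[arm]:
            succ[arm] += 1                       # success: stay on this arm
        else:
            arm = "B" if arm == "A" else "A"     # failure: switch arms
    winner = "A" if succ["A"] > succ["B"] else "B"
    return winner, n["A"], n["B"]
```

Because failures move sampling away from the poorer arm, the design tends to allocate fewer patients to the inferior treatment, which is the ethical appeal the abstract's comparison criteria capture.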

8.
Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses the screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies and illustrate the proposed method with two empirical datasets.
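The core ranking idea can be sketched simply: score each categorical covariate by its Pearson chi-square statistic with the response and keep the top-scoring features. The simulated data and cutoff below are illustrative assumptions, not the paper's exact statistic or threshold.

```python
# Illustrative sketch of Pearson chi-square feature screening for a
# categorical response with categorical covariates: informative features
# receive much larger chi-square statistics than irrelevant ones.
import random
from collections import Counter

def chi_square_stat(x, y):
    """Pearson chi-square statistic between two categorical sequences."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    stat = 0.0
    for a in px:
        for b in py:
            expected = px[a] * py[b] / n
            observed = joint.get((a, b), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

rng = random.Random(0)
n = 2000
y = [rng.randint(0, 1) for _ in range(n)]
x_signal = [yi if rng.random() < 0.8 else 1 - yi for yi in y]  # informative
x_noise = [rng.randint(0, 1) for _ in range(n)]                # irrelevant

s_signal = chi_square_stat(x_signal, y)
s_noise = chi_square_stat(x_noise, y)
```

Screening then reduces the ultrahigh-dimensional problem to the features whose statistics exceed a data-driven threshold; interactions can be screened the same way by treating a pair of covariates as one cross-classified feature.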

9.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of the t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure based on both simulation and asymptotic theory to determine the sample size for a test plan. Unlike the complete-data case, the t-like quantities are not pivotal quantities in general when data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain the distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when the resulting sample size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation when the test plan results in complete or time-censored data. Some useful figures displaying the required sample size for the new procedure are also presented.

10.
Recently developed two-stage estimation methods of sample selection models are used, in the context of data from the 1989 Labor Market Activity Survey, to examine labor supply decisions and wage outcomes for employed men and women. Recent hypothesis test procedures are used to test for no sample selection and to test for a parametric against a semiparametric selection-correction procedure. We conclude that selection is indeed an issue for the sample at hand and that the semiparametric specification is appropriate. We also present the standard decomposition of the gender wage gap into its explained and unexplained portions.

11.
When counting the number of chemical parts in air pollution studies or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions on sample size calculation under the Poisson model appear elsewhere, but all these focus on the case of testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority and an approximate power calculation formula for testing equivalence. We find that using these approximation formulae tends to produce an underestimate of the minimum required sample size calculated from using the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to Type I error and power) when we apply the asymptotic test procedure based on the normal distribution. We tabulate in a variety of situations the minimum mean incidence needed in the standard (or the control) population, which can easily be employed to calculate the minimum required sample size for each comparison group for testing non-inferiority and equivalence between two Poisson populations.
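To make the normal-approximation route concrete, here is a textbook-style per-group sample size for a non-inferiority comparison of two Poisson rates on the log rate-ratio scale. This is a generic large-sample formula under the delta-method variance approximation, not necessarily the formula derived in the paper.

```python
# Illustrative sketch: per-group n for non-inferiority of two Poisson
# rates, using the normal approximation to the log rate ratio with
# Var(log ratio) ~ (1/rate_t + 1/rate_c) / n.
import math
from statistics import NormalDist

def poisson_noninferiority_n(rate_c, rate_t, margin, alpha=0.05, power=0.8):
    """H0: rate_t / rate_c <= margin (margin < 1) vs H1: ratio > margin.
    Returns the per-group sample size for the requested power at
    one-sided level alpha."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    effect = math.log(rate_t / rate_c) - math.log(margin)
    n = (z_a + z_b) ** 2 * (1 / rate_t + 1 / rate_c) / effect ** 2
    return math.ceil(n)

# Equal rates of 2 events per patient, non-inferiority margin 0.8.
n_per_group = poisson_noninferiority_n(rate_c=2.0, rate_t=2.0, margin=0.8)
```

Consistent with the abstract's warning, such approximate sample sizes can sit below the exact-test requirement, so they are best paired with an exact or simulated power check.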

12.
For testing the non-inferiority (or equivalence) of an experimental treatment to a standard treatment, the odds ratio (OR) of patient response rates has been recommended to measure the relative treatment efficacy. On the basis of an exact test procedure proposed elsewhere for a simple crossover design, we develop an exact sample-size calculation procedure with respect to the OR of patient response rates for a desired power of detecting non-inferiority at a given nominal type I error. We note that the sample size calculated for a desired power based on an asymptotic test procedure can be much smaller than that based on the exact test procedure under a given situation. We further discuss the advantage and disadvantage of sample-size calculation using the exact test and the asymptotic test procedures. We employ an example studying two inhalation devices for asthmatics to illustrate the use of the sample-size calculation procedure developed here.

13.
Summary. We propose a kernel estimator of integrated squared density derivatives, from a sample that has been contaminated by random noise. We derive asymptotic expressions for the bias and the variance of the estimator and show that the squared bias term dominates the variance term. This coincides with results that are available for non-contaminated observations. We then discuss the selection of the bandwidth parameter when estimating integrated squared density derivatives based on contaminated data. We propose a data-driven bandwidth selection procedure of the plug-in type and investigate its finite sample performance via a simulation study.

14.
In the present paper we introduce a partially sequential sampling procedure to develop a nonparametric method for simultaneous testing. Our work, as in [U. Bandyopadhyay, A. Mukherjee, B. Purkait, Nonparametric partial sequential tests for patterned alternatives in multi-sample problems, Sequential Analysis 26 (4) (2007) 443–466], is motivated by an interesting investigation related to arsenic contamination in ground water. Here we incorporate the idea of multiple hypotheses testing as in [Y. Benjamini, Y. Hochberg, Controlling the false discovery rate: a practical and powerful approach to multiple testing, Journal of the Royal Statistical Society, Series B 57 (1995) 289–300] in a typical way. We present some Monte Carlo studies related to the proposed procedure. We observe that the proposed sampling design minimizes the expected sample sizes in different situations. The procedure as a whole effectively describes testing under dual pattern alternatives. We indicate in brief some large sample situations. We also present a detailed analysis of a geological field survey data set.

15.
In this paper we derive two likelihood-based procedures for the construction of confidence limits for the common odds ratio in K 2 × 2 contingency tables. We then conduct a simulation study to compare these procedures with a recently proposed procedure by Sato (Biometrics 46 (1990) 71–79), based on the asymptotic distribution of the Mantel-Haenszel estimate of the common odds ratio. We consider the situation in which the number of strata remains fixed (finite), but the sample sizes within each stratum are large. Bartlett's score procedure based on the conditional likelihood is found to be almost identical, in terms of coverage probabilities and average coverage lengths, to the procedure recommended by Sato, although the score procedure has some edge, in some instances, in terms of average coverage lengths. So, for the ‘fixed strata and large sample’ situation, Bartlett's score procedure can be considered as an alternative to the procedure proposed by Sato, based on the asymptotic distribution of the Mantel-Haenszel estimator of the common odds ratio.
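For reference, the Mantel-Haenszel point estimate of the common odds ratio, whose asymptotic distribution underlies the Sato (1990) procedure discussed above, is a simple weighted ratio across the K tables. The tables below are made-up illustrative data.

```python
# Mantel-Haenszel estimate of a common odds ratio across K 2x2 tables.
# Each stratum's table is (a, b, c, d) = (exposed cases, exposed controls,
# unexposed cases, unexposed controls).
def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two illustrative strata, both suggesting an odds ratio around 4.
tables = [(10, 20, 5, 40), (15, 10, 12, 30)]
or_mh = mantel_haenszel_or(tables)
```

Confidence limits then come from an estimate of the variance of the (log) estimator; the abstract's point is that a conditional-likelihood score interval performs about as well in the fixed-strata, large-sample regime.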

16.
We consider hypothesis testing problems for low‐dimensional coefficients in a high dimensional additive hazard model. A variance reduced partial profiling estimator (VRPPE) is proposed and its asymptotic normality is established, which enables us to test the significance of each single coefficient when the data dimension is much larger than the sample size. Based on the p‐values obtained from the proposed test statistics, we then apply a multiple testing procedure to identify significant coefficients and show that the false discovery rate can be controlled at the desired level. The proposed method is also extended to testing a low‐dimensional sub‐vector of coefficients. The finite sample performance of the proposed testing procedure is evaluated by simulation studies. We also apply it to two real data sets, with one focusing on testing low‐dimensional coefficients and the other focusing on identifying significant coefficients through the proposed multiple testing procedure.

17.
This paper studies the covariance structure and the asymptotic properties of Yule–Walker (YW) type estimators for a bilinear time series model with periodically time-varying coefficients. We give necessary and sufficient conditions ensuring the existence of moments up to eighth order. Expressions of second and third order joint moments, as well as the limiting covariance matrix of the sample moments are given. Strong consistency and asymptotic normality of the YW estimator as well as hypotheses testing via Wald’s procedure are derived. We use a residual bootstrap version to construct bootstrap estimators of the YW estimates. Some simulation results will demonstrate the large sample behavior of the bootstrap procedure.

18.
Despite tremendous effort on different designs with cross-sectional data, little research has been conducted on sample size calculation and power analysis under repeated measures designs. In addition to the time-averaged difference, change in mean response over time (CIMROT) is the primary interest in repeated measures analysis. We generalized sample size calculation and power analysis equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures; through simulation, we evaluated the performance of the proposed methods and compared our approach to that of a two-stage model formulation. We also created a software procedure to implement the proposed methods.

19.
Most feature screening methods for ultrahigh-dimensional classification explicitly or implicitly assume that the covariates are continuous. However, in practice it is quite common that both categorical and continuous covariates appear in the data, and applicable feature screening methods are very limited. To handle this non-trivial situation, we propose an entropy-based feature screening method, which is model free and provides a unified screening procedure for both categorical and continuous covariates. We establish the sure screening and ranking consistency properties of the proposed procedure. We investigate the finite sample performance of the proposed procedure by simulation studies and illustrate the method by a real data analysis.
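A minimal sketch of an entropy-based screening score that handles a continuous covariate by discretizing it: rank features by estimated mutual information with the class label. The binning scheme and simulated data below are illustrative assumptions; the paper's exact entropy statistic may differ.

```python
# Illustrative sketch: mutual-information (entropy-based) feature
# screening that treats continuous covariates by quantile discretization,
# giving one model-free score for both covariate types.
import math
import random
from collections import Counter

def mutual_information(x_bins, y):
    """Plug-in mutual information (in nats) between two categorical sequences."""
    n = len(y)
    joint = Counter(zip(x_bins, y))
    px, py = Counter(x_bins), Counter(y)
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * math.log((c * n) / (px[a] * py[b]))
    return mi

def discretize(x, bins=4):
    """Map a continuous covariate to quantile bins 0..bins-1."""
    xs = sorted(x)
    cuts = [xs[int(len(xs) * k / bins)] for k in range(1, bins)]
    return [sum(v > c for c in cuts) for v in x]

rng = random.Random(3)
n = 3000
y = [rng.randint(0, 1) for _ in range(n)]
x_signal = [rng.gauss(yi, 1.0) for yi in y]       # class-dependent mean
x_noise = [rng.gauss(0.0, 1.0) for _ in range(n)]  # independent of class

mi_signal = mutual_information(discretize(x_signal), y)
mi_noise = mutual_information(discretize(x_noise), y)
```

Because mutual information is defined for any pair of discrete variables and nonnegative, the same score ranks categorical covariates directly and continuous ones after binning, which is the unification the abstract describes.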

20.
Summary.  We use the forward search to provide robust Mahalanobis distances to detect the presence of outliers in a sample of multivariate normal data. Theoretical results on order statistics and on estimation in truncated samples provide the distribution of our test statistic. We also introduce several new robust distances with associated distributional results. Comparisons of our procedure with tests using other robust Mahalanobis distances show the good size and high power of our procedure. We also provide a unification of results on correction factors for estimation from truncated samples.
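The forward-search idea above can be sketched in two dimensions: start from a small subset, repeatedly fit the mean and covariance on the current subset, and grow it by one observation at a time in order of Mahalanobis distance, so outliers enter last and stand out. This pure-Python 2-D version is a simplification for illustration (the paper monitors distances along the whole search and applies distributional corrections).

```python
# Illustrative sketch of a forward search for outliers in bivariate data
# using Mahalanobis distances from a subset-based mean/covariance fit.
import random

def mean_cov(points):
    """Mean vector and covariance entries (sxx, sxy, syy) of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), (sxx, sxy, syy)

def mahalanobis2(p, mu, cov):
    """Squared Mahalanobis distance via the explicit 2x2 inverse."""
    mx, my = mu
    sxx, sxy, syy = cov
    det = sxx * syy - sxy * sxy
    dx, dy = p[0] - mx, p[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

def forward_search(X, start=20, seed=0):
    """Grow a clean subset one point at a time; return the squared
    distances from the fit on the final subset of size n-1."""
    rng = random.Random(seed)
    subset = rng.sample(range(len(X)), start)
    d2 = []
    while len(subset) < len(X):
        mu, cov = mean_cov([X[i] for i in subset])
        d2 = [mahalanobis2(p, mu, cov) for p in X]
        order = sorted(range(len(X)), key=lambda i: d2[i])
        subset = order[: len(subset) + 1]   # admit the next closest point
    return d2

rng = random.Random(1)
clean = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(100)]
outliers = [(8 + rng.gauss(0, 1), 8 + rng.gauss(0, 1)) for _ in range(5)]
X = clean + outliers
d2 = forward_search(X)
mean_clean = sum(d2[:100]) / 100
mean_out = sum(d2[100:]) / 5
```

The planted outliers receive much larger distances than the clean points; the theoretical results summarized in the abstract supply the reference distribution needed to turn such distances into a formal test.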


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号