Similar Documents
20 similar documents found.
1.
A two-sample partially sequential probability ratio test (PSPRT) is considered for the two-sample location problem with one sample fixed and the other sequential. Observations are assumed to come from two normal populations with equal and known variances. Asymptotically in the fixed sample size, the PSPRT is a truncated Wald one-sample sequential probability ratio test. Brownian motion approximations for boundary-crossing probabilities and expected sequential sample size are obtained. These calculations are compared to values obtained by Monte Carlo simulation.
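As background for the truncated Wald test mentioned in this abstract, here is a minimal sketch of the classic one-sample Wald SPRT for a normal mean with known variance (an illustrative reconstruction, not the paper's two-sample PSPRT; the function and parameter names are ours):

```python
import math

def sprt_normal_mean(xs, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classic Wald SPRT of H0: mu = mu0 vs H1: mu = mu1 with known sigma.

    Returns 'accept H0', 'accept H1', or 'continue' after the data in xs.
    """
    a = math.log((1 - beta) / alpha)   # upper boundary -> accept H1
    b = math.log(beta / (1 - alpha))   # lower boundary -> accept H0
    llr = 0.0
    for x in xs:
        # log-likelihood-ratio increment for a N(mu, sigma^2) observation
        llr += ((mu1 - mu0) / sigma ** 2) * (x - (mu0 + mu1) / 2.0)
        if llr >= a:
            return "accept H1"
        if llr <= b:
            return "accept H0"
    return "continue"
```

With Wald's boundaries log((1−β)/α) and log(β/(1−α)), the error probabilities are approximately α and β, ignoring boundary overshoot.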

2.
A crucial component in the statistical simulation of a computationally expensive model is a good design of experiments. In this paper we compare the efficiency of the columnwise–pairwise (CP) and genetic algorithms for the optimization of Latin hypercubes (LH) for the purpose of sampling in statistical investigations. The performed experiments indicate, among other results, that CP methods are most efficient for small and medium-size LH, while an adapted genetic algorithm performs better for large LH. Two optimality criteria suggested in the literature are evaluated with respect to statistical properties and efficiency. The obtained results lead us to favor a criterion based on the physical analogy of minimizing the forces between charged particles, suggested in Audze and Eglais (1977. Problems Dyn. Strength 35, 104–107), over the ‘maximin distance’ criterion of Johnson et al. (1990. J. Statist. Plann. Inference 26, 131–148).
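The two criteria compared in this abstract are easy to state in code. A minimal sketch (our own naming; the Audze–Eglais "potential energy" is minimized, the maximin distance is maximized):

```python
import itertools
import math

def audze_eglais(points):
    """Audze-Eglais 'potential energy': sum of 1/d^2 over all point pairs.

    A design is better when this is smaller (charged particles repelling)."""
    return sum(1.0 / sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in itertools.combinations(points, 2))

def maximin(points):
    """Smallest inter-point distance; a design is better when this is larger."""
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))
```

Either function can serve as the objective inside a CP swap or genetic-algorithm search over Latin hypercube designs.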

3.
The confidence interval of the Kaplan–Meier estimate of the survival probability at a fixed time point is often constructed by the Greenwood formula. This normal-approximation-based method can be viewed as a Wald-type confidence interval for a binomial proportion, the survival probability, using the “effective” sample size defined by Cutler and Ederer. The Wald-type binomial confidence interval has been shown to perform poorly compared with other methods. We choose three methods of binomial confidence intervals for the construction of a confidence interval for the survival probability: Wilson's method, Agresti–Coull's method, and a higher-order asymptotic likelihood method. The methods of “effective” sample size proposed by Peto et al. and by Dorey and Korn are also considered. The Greenwood formula is far from satisfactory, while confidence intervals based on the three binomial-proportion methods using Cutler and Ederer's “effective” sample size perform much better.
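To make the quantities concrete: a minimal sketch of the Greenwood variance and an "effective" sample size in the Cutler–Ederer spirit, i.e. the n* solving S(1−S)/n* = Greenwood variance (the names and interface are ours):

```python
def greenwood_variance(surv, at_risk, deaths):
    """Greenwood variance of the KM estimate S(t):
    S(t)^2 * sum_i d_i / (n_i * (n_i - d_i)) over event times up to t."""
    s = sum(d / (n * (n - d)) for n, d in zip(at_risk, deaths))
    return surv ** 2 * s

def effective_sample_size(surv, var):
    """'Effective' n such that the binomial variance S(1-S)/n equals var."""
    return surv * (1.0 - surv) / var
```

A Wilson or Agresti–Coull interval for a proportion can then be applied with S(t) as the "sample proportion" and this effective n in place of the true sample size.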

4.
Linear mixed‐effects models are a powerful tool for modelling longitudinal data and are widely used in practice. For a given set of covariates in a linear mixed‐effects model, selecting the covariance structure of random effects is an important problem. In this paper, we develop a joint likelihood‐based selection criterion. Our criterion is the approximately unbiased estimator of the expected Kullback–Leibler information. This criterion is also asymptotically optimal in the sense that for large samples, estimates based on the covariance matrix selected by the criterion minimize the approximate Kullback–Leibler information. Finite sample performance of the proposed method is assessed by simulation experiments. As an illustration, the criterion is applied to a data set from an AIDS clinical trial.

5.
For interval estimation of a proportion, coverage probabilities tend to be too large for “exact” confidence intervals based on inverting the binomial test and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with the null rather than the estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval behaves similarly to the adjusted Wald interval obtained after adding two “successes” and two “failures” to the sample. In elementary courses, with the score and adjusted Wald methods it is unnecessary to provide students with awkward sample size guidelines.
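The three intervals compared in this abstract are short enough to write out. A minimal sketch (the function names and the z = 1.96 default are ours):

```python
import math

def wald_ci(x, n, z=1.96):
    """Wald interval: p-hat +/- z * estimated standard error."""
    p = x / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def wilson_ci(x, n, z=1.96):
    """Wilson score interval: inverts the score test (null standard error)."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def adjusted_wald_ci(x, n, z=1.96):
    """Agresti-Coull: add two 'successes' and two 'failures', then Wald."""
    return wald_ci(x + 2, n + 4, z)
```

At x = 0 the Wald interval degenerates to a single point, while the score and adjusted Wald intervals do not; this is exactly the failure mode the abstract describes.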

6.
In this paper, the Gompertz model is extended to incorporate time-dependent covariates in the presence of interval-, right-, left-censored and uncensored data. Then, its performance at different sample sizes, study periods and attendance probabilities is studied. Following that, the model is compared to a fixed-covariate model. Finally, two confidence interval estimation methods, Wald and likelihood ratio (LR), are explored and conclusions are drawn based on the results of the coverage probability study. The results indicate that the bias, standard error and root mean square error values of the parameter estimates decrease as the study period, attendance probability and sample size increase. Also, the LR method was found to work slightly better than the Wald for the parameters of the model.

7.
Despite the popularity of the general linear mixed model for data analysis, power and sample size methods and software are not generally available for commonly used test statistics and reference distributions. Statisticians resort to simulations with homegrown and uncertified programs or rough approximations which are misaligned with the data analysis. For a wide range of designs with longitudinal and clustering features, we provide accurate power and sample size approximations for inference about fixed effects in the linear models we call reversible. We show that under widely applicable conditions, the general linear mixed-model Wald test has noncentral distributions equivalent to well-studied multivariate tests. In turn, exact and approximate power and sample size results for the multivariate Hotelling–Lawley test provide exact and approximate power and sample size results for the mixed-model Wald test. The calculations are easily computed with a free, open-source product that requires only a web browser to use. Commercial software can be used for a smaller range of reversible models. Simple approximations allow accounting for modest amounts of missing data. A real-world example illustrates the methods. Sample size results are presented for a multicenter study on pregnancy. The proposed study, an extension of a funded project, has clustering within clinic. Exchangeability among the participants allows averaging across them to remove the clustering structure. The resulting simplified design is a single-level longitudinal study. Multivariate methods for power provide an approximate sample size. All proofs and inputs for the example are in the supplementary materials (available online).

8.
In this article we examine sample size calculations for a binomial proportion based on the confidence interval width of the Agresti–Coull, Wald and Wilson score intervals. We point out that the commonly used methods based on known and fixed standard errors cannot guarantee the desired confidence interval width given a hypothesized proportion. Therefore, a new adjusted sample size calculation method is introduced, which is based on the conditional expectation of the width of the confidence interval given the hypothesized proportion. With the reduced sample size, the coverage probability is still maintained at the nominal level and is very close to the coverage probability for the original sample size.
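For contrast, the textbook fixed-standard-error calculation that this article critiques can be sketched as follows; the article's point is precisely that the realized width is random, so this n does not guarantee the target width (the name and interface are ours):

```python
import math

def wald_sample_size(p, half_width, z=1.96):
    """Smallest n whose *planned* Wald half-width at hypothesized proportion p
    is at most half_width: n >= z^2 * p * (1 - p) / half_width^2."""
    return math.ceil(z * z * p * (1 - p) / half_width ** 2)
```

For p = 0.5 and a target half-width of 0.05 this gives n = 385, the familiar classroom answer.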

9.
In this article, optimal progressive censoring schemes are examined for the nonparametric confidence intervals of population quantiles. The results obtained can be universally applied to any continuous probability distribution. By using the interval mass as an optimality criterion, the optimization process is free of the actual observed values from the sample and needs only the initial sample size n and the number of complete failures m. Using several sample sizes combined with various degrees of censoring, the results of the optimization are presented here for the population median at selected levels of confidence (99, 95, and 90%). With the optimality criterion under consideration, the efficiencies of the worst progressive Type-II censoring scheme and ordinary Type-II censoring scheme are also examined in comparison to the best censoring scheme obtained for fixed n and m.
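The distribution-free interval at the heart of this construction is based on order statistics: the probability that a population quantile lies between two order statistics depends on the underlying distribution only through a binomial sum. A minimal sketch for the uncensored case (the naming is ours):

```python
from math import comb

def order_stat_coverage(n, i, j, p=0.5):
    """P( X_(i) < x_p < X_(j) ) for the p-th population quantile of a
    continuous distribution: distribution-free, obtained by counting how
    many of the n observations fall below x_p (a Binomial(n, p) count)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(i, j))
```

For n = 5 the interval (X_(1), X_(5)) covers the median with probability 30/32 ≈ 0.94, regardless of the continuous parent distribution.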

10.
The main focus of our paper is to compare the performance of different model selection criteria used for multivariate reduced rank time series. We consider one of the most commonly used reduced rank models, namely the reduced rank vector autoregression (RRVAR(p, r)) introduced by Velu et al. [Reduced rank models for multiple time series. Biometrika. 1986;73(1):105–118]. In our study, the most popular model selection criteria are included. The criteria are divided into two groups: simultaneous selection criteria and two-step selection criteria. Methods from the former group select both the autoregressive order p and the rank r simultaneously, while in the case of two-step criteria, first an optimal order p is chosen (using model selection criteria intended for the unrestricted VAR model) and then an optimal rank r of the coefficient matrices is selected (e.g. by means of sequential testing). The model selection criteria considered include well-known information criteria (such as the Akaike information criterion, the Schwarz criterion, the Hannan–Quinn criterion, etc.) as well as widely used sequential tests (e.g. the Bartlett test) and the bootstrap method. An extensive simulation study is carried out in order to investigate the efficiency of all model selection criteria included in our study. The analysis takes into account 34 methods: 6 simultaneous methods and 28 two-step approaches. In order to analyse carefully how different factors affect the performance of model selection criteria, we consider over 150 simulation settings. In particular, we investigate the influence of the following factors: time series dimension, covariance structure, level of correlation among components and level of noise (variance). Moreover, we analyse the prediction accuracy associated with the application of the RRVAR model and compare it with results obtained for the unrestricted vector autoregression.
In this paper, we also present a real data application of model selection criteria for the RRVAR model using the Polish macroeconomic time series data observed in the period 1997–2007.

11.
In this paper, the expected total costs (ETCs) of three kinds of quality cost functions for the two-sided sequential screening procedure (SQSP) based on the individual misclassification error are obtained, where the ETC is the sum of the expected cost of inspection, the expected cost of rejection and the expected cost of quality. The general formulas for all the desired probabilities and the three ETCs when k screening variables are allocated into r stages are derived. The optimal allocation combination for each ETC is determined based on the criterion of minimum ETC. Finally, we give two examples to illustrate the selection of the optimal allocation combination for the SQSP.

12.
In this paper, the asymptotic relative efficiency (ARE) of Wald tests for the Tweedie class of models with log-linear mean is considered when the auxiliary variable is measured with error. Wald test statistics based on the naive maximum likelihood estimator and on a consistent estimator obtained by using Nakamura's (1990) corrected score function approach are defined. As shown analytically, the Wald statistics based on the naive and corrected score function estimators are asymptotically equivalent in terms of ARE. On the other hand, the asymptotic relative efficiency of the naive and corrected Wald statistics with respect to the Wald statistic based on the true covariate equals the square of the correlation between the unobserved and the observed covariate. A small-scale numerical Monte Carlo study and an example illustrate the small sample size situation.

13.
The D‐optimal minimax criterion is proposed to construct fractional factorial designs. The resulting designs are very efficient, and robust against misspecification of the effects in the linear model. The criterion was first proposed by Wilmut & Zhou (2011); their work is limited to two‐level factorial designs, however. In this paper we extend this criterion to designs with factors having any number of levels (including mixed levels) and explore several important properties of this criterion. Theoretical results are obtained for the construction of fractional factorial designs in general. This minimax criterion is not only scale invariant, but also invariant under level permutations. Moreover, it can be applied to any run size. This is an advantage over some other existing criteria. The Canadian Journal of Statistics 41: 325–340; 2013 © 2013 Statistical Society of Canada

14.
The purpose of this article is to investigate hypothesis testing in functional comparative calibration models. Wald-type statistics are considered, which are asymptotically distributed according to the chi-square distribution. The statistics are based on the maximum likelihood, corrected score and method of moments estimators of the model parameters, which are shown to be consistent and asymptotically normally distributed. Results of analytical and simulation studies seem to indicate that the Wald statistics based on the method of moments estimators and the corrected score estimators are, as expected, less efficient than the Wald-type statistic based on the maximum likelihood estimators for small n. Wald statistics based on moment estimators are simpler to compute than the other Wald statistics, and their performance improves significantly as n increases. Comparisons with an alternative F statistic proposed in the literature are also reported.

15.
In this article, the expected total costs of three kinds of quality cost functions for the one-sided sequential screening procedure based on the individual misclassification error are obtained, where the expected total cost is the sum of the expected cost of inspection, the expected cost of rejection, and the expected cost of quality. The computational formulas for the three kinds of expected total costs are derived when k screening variables are allocated into r stages. The optimal allocation combination is determined based on the criterion of minimum expected total cost. Finally, we give one example to illustrate the selection of the optimal allocation combination for the sequential screening procedure.

16.
A method is presented for the sequential analysis of experiments involving two treatments to which response is dichotomous. Composite hypotheses about the difference in success probabilities are tested, and covariate information is utilized in the analysis. The method is based upon a generalization of Bartlett’s (1946) procedure for using the maximum likelihood estimate of a nuisance parameter in a Sequential Probability Ratio Test (SPRT). Treatment assignment rules studied include pure randomization, randomized blocks, and an adaptive rule which tends to assign the superior treatment to the majority of subjects. It is shown that the use of covariate information can result in important reductions in the expected sample size for specified error probabilities, and that the use of covariate information is essential for the elimination of bias when adaptive assignment rules are employed. Designs of the type presented are easily generated, as the termination criterion is the same as for a Wald SPRT of simple hypotheses.

17.
The most common asymptotic procedure for analyzing a 2 × 2 table (under the conditioning principle) is the chi-squared test with correction for continuity (c.f.c.). According to the way this is applied, up to the present four methods have been obtained: one for one-tailed tests (Yates') and three for two-tailed tests (those of Mantel, Conover and Haber). In this paper two further methods are defined (one for each case), the six resulting methods are grouped in families, their individual behaviour is studied and the optimal one is selected. The conclusions are established on the assumption that the method studied is applied indiscriminately (without being subjected to validity conditions), on the basis of 400,000 tables (with values of the sample size n between 20 and 300 and exact P-values between 1% and 10%) and a criterion of evaluation based on the percentage of times in which the approximate P-value differs from the exact one (Fisher's exact test) by an excessive amount. The optimal c.f.c. depends on n, on E (the minimum expected quantity) and on the error α to be used, but the rule of selection is not complicated and the new methods proposed are frequently selected. In the paper we also study what occurs when E ≥ 5, as well as the behaviour of the chi-squared statistic adjusted by the factor (n − 1)/n.

18.
We restrict attention to a class of Bernoulli subset selection procedures which take observations one at a time and can be compared directly to the Gupta–Sobel single-stage procedure. For the criterion of minimizing the expected total number of observations required to terminate experimentation, we show that optimal sampling rules within this class are not of practical interest. We thus turn to procedures which, although not optimal, exhibit desirable behavior with regard to this criterion. A procedure which employs a modification of the so-called least-failures sampling rule is proposed, and is shown to possess many desirable properties among a restricted class of Bernoulli subset selection procedures. Within this class, it is optimal for minimizing the number of observations taken from populations excluded from consideration following a subset selection experiment, and asymptotically optimal for minimizing the expected total number of observations required. In addition, it can result in substantial savings in the expected total number of observations required as compared to a single-stage procedure; thus it may be desirable to a practitioner if sampling is costly or the sample size is limited.

19.
In this article, we consider the Wald test statistic for testing equality between the sets of regression coefficients in two linear regression models when the disturbance variances may be unequal. This test can also be used as a test for a structural break. However, it is well known that the test based on the Wald statistic suffers from severe size distortion in small samples when the disturbance variances of the two regression models are unequal. Our simulation results show that substantial improvements are made when bootstrap methods are applied.
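A minimal sketch of the Wald statistic in question, specialized to simple regression so that it stays dependency-free (asymptotically chi-squared with one degree of freedom under H0; the names are ours, and the bootstrap refinement the article studies is not shown):

```python
def wald_slope_equality(x1, y1, x2, y2):
    """Wald statistic for H0: equal slopes in two simple linear regressions,
    allowing unequal disturbance variances: (b1 - b2)^2 / (var1 + var2)."""
    def slope_and_var(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        a = my - b * mx
        # residual variance with n - 2 degrees of freedom
        s2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
        return b, s2 / sxx          # slope and its estimated variance
    b1, v1 = slope_and_var(x1, y1)
    b2, v2 = slope_and_var(x2, y2)
    return (b1 - b2) ** 2 / (v1 + v2)
```

The size distortion the abstract describes arises because v1 and v2 are estimated; a bootstrap recalculates the statistic on resampled data to calibrate its null distribution.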

20.
In this paper we consider inference of parameters in time series regression models. In the traditional inference approach, the heteroskedasticity and autocorrelation consistent (HAC) estimation is often involved to consistently estimate the asymptotic covariance matrix of the regression parameter estimator. Since the bandwidth parameter in the HAC estimation is difficult to choose in practice, there has been a recent surge of interest in developing bandwidth-free inference methods. However, existing simulation studies show that these new methods suffer from severe size distortion in the presence of strong temporal dependence for a medium sample size. To remedy the problem, we propose to apply the prewhitening to the inconsistent long-run variance estimator in these methods to reduce the size distortion. The asymptotic distribution of the prewhitened Wald statistic is obtained and the general effectiveness of prewhitening is shown through simulations.
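The prewhitening step can be illustrated with the standard AR(1) recipe: fit an AR(1) to the series, estimate the variance of the whitened residuals, then "recolor" by dividing by (1 − ρ̂)². A minimal sketch under the simplifying assumptions of a mean-zero series and a lag-0 estimate for the whitened part (the names are ours, not the paper's):

```python
def ar1_prewhitened_lrv(e):
    """AR(1)-prewhitened long-run variance of a mean-zero series:
    fit rho by least squares, whiten, estimate, then recolor."""
    n = len(e)
    rho = sum(e[t] * e[t - 1] for t in range(1, n)) / sum(v * v for v in e[:-1])
    w = [e[t] - rho * e[t - 1] for t in range(1, n)]  # whitened residuals
    s2 = sum(v * v for v in w) / len(w)               # lag-0 variance of w
    return s2 / (1.0 - rho) ** 2                      # recoloring step
```

A full implementation would replace the lag-0 variance of w with a kernel (HAC) estimate; prewhitening reduces the dependence the kernel step must absorb, which is the source of the size improvement.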


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号