Similar Articles
20 similar articles found.
1.
The Victorian state government implemented zero blood alcohol content (BAC) legislation on 22 May 1984 to reduce the number of road accidents involving novice drivers and riders. Under this legislation, no learner, first-year probationary, disqualified, or unlicensed driver or rider may drive or ride with any alcohol in his or her blood. Serious casualty accidents during "alcohol times" were used in this study as surrogates for alcohol-involved accidents. A preliminary evaluation of the legislation was made only for car drivers involved in serious casualty accidents at alcohol times, using intervention time series analysis. The effect was measured over the eighteen-month post-legislation period from July 1984 to December 1985. The analysis indicated no effect of the legislation for target-group drivers at alcohol times, relative to the increase that would have been expected in the absence of the zero BAC legislation in Victoria. A power analysis showed that the statistical power of the study was very low.
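The intervention analysis described above can be illustrated with a deliberately simplified sketch. Instead of a full ARIMA intervention time-series model, the code below fits an ordinary least-squares level-shift regression y_t = a + b·t + c·1{t ≥ t0}, where the step coefficient c stands in for the legislation effect; the function names and the regression form are illustrative assumptions, not the study's actual model.

```python
def solve3(A, b):
    """Solve the 3x3 linear system A beta = b by Gauss-Jordan elimination
    with partial pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    n = len(M)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def level_shift_ols(y, t0):
    """Fit y_t = a + b*t + c*1{t >= t0} by least squares: a crude stand-in
    for intervention analysis, where c estimates the post-intervention
    level shift."""
    n = len(y)
    X = [[1.0, float(t), 1.0 if t >= t0 else 0.0] for t in range(n)]
    XtX = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(3)]
           for p in range(3)]
    Xty = [sum(X[i][p] * y[i] for i in range(n)) for p in range(3)]
    return solve3(XtX, Xty)
```

On a monthly accident series, a clearly negative step coefficient (together with a residual check) would point towards a post-legislation drop; the study found no such effect for the target group.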

2.
This article is concerned with testing diagonality of a high-dimensional covariance matrix under non-normality, which is more practical in the high-dimensional setting than testing sphericity or identity. The existing testing procedure for diagonality is not robust against either the data dimension or the data distribution, producing tests with type I error rates far above nominal levels. This is mainly due to bias incurred when estimating certain functions of the high-dimensional covariance matrix under non-normality. Compared with the sphericity and identity hypotheses, the asymptotic theory for the diagonality hypothesis is more involved, and greater care is needed in handling the bias. We develop a correction that makes the existing test statistic robust against both the data dimension and the data distribution. We show that the proposed test statistic is asymptotically normal without the normality assumption and without specifying an explicit relationship between the dimension p and the sample size n. Simulations show that it has good size and power across a wide range of settings.

3.
Stochastic ordering between probability distributions has been widely studied over the past 50 years. Because it is often easy to make valuable judgments when such orderings exist, it is desirable to recognize their existence and to model distributional structures under them. The likelihood ratio test is the most commonly used method for testing hypotheses involving stochastic orderings. Among the various formally defined notions of stochastic ordering, the least stringent is simple stochastic ordering. In this paper, we consider testing the hypothesis that all multinomial populations are identically distributed against the alternative that they are in simple stochastic ordering. We construct the likelihood ratio test statistic for this problem, derive the limiting form of the corresponding objective function, and show that the test statistic is asymptotically distributed as a mixture of chi-squared distributions, i.e., a chi-bar-squared distribution.

4.
The non-parametric generalized likelihood ratio test is a popular method of model checking for regressions. However, two issues can limit its power: an inherent bias term and the curse of dimensionality. The purpose of this paper is thus twofold: a bias reduction is suggested, and a dimension-reduction-based adaptive-to-model enhancement is recommended to improve power. The proposed test statistic still possesses the Wilks phenomenon and behaves like a test with only one covariate. Thus, it converges to its limit at a much faster rate and is much more sensitive to alternative models than the classical non-parametric generalized likelihood ratio test. As a by-product, we also prove that the bias-corrected test is more efficient than the uncorrected one, in the sense that its asymptotic variance is smaller. Simulation studies and a real data analysis are conducted to evaluate the proposed tests.

5.
A general method for correcting the bias of the maximum likelihood estimator (MLE) of the common shape parameter of Weibull populations, allowing general right censoring, is proposed in this paper. Extensive simulation results show that the new method is very effective in correcting the bias of the MLE, regardless of the censoring mechanism, sample size, censoring proportion, and number of populations involved. The method can be extended to more complicated Weibull models.
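The abstract does not give the correction itself, but a generic parametric-bootstrap bias correction for the Weibull shape MLE can be sketched for the simplest uncensored single-population case. Everything below (the function names, the bisection solver, and the plain bootstrap correction) is an illustrative assumption, not the paper's proposed method.

```python
import math
import random

def weibull_shape_mle(x, lo=1e-3, hi=50.0, tol=1e-8):
    """Solve the profile score equation for the Weibull shape k
    (complete, uncensored data); the score is increasing in k,
    so bisection applies."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)
    def score(k):
        xk = [v ** k for v in x]
        return (sum(a * b for a, b in zip(xk, logs)) / sum(xk)
                - 1.0 / k - mean_log)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def bias_corrected_shape(x, n_boot=200, seed=0):
    """Plain parametric-bootstrap bias correction:
    k_corrected = k_hat - (mean of bootstrap MLEs - k_hat)."""
    rng = random.Random(seed)
    k_hat = weibull_shape_mle(x)
    scale_hat = (sum(v ** k_hat for v in x) / len(x)) ** (1.0 / k_hat)
    boot = [weibull_shape_mle([rng.weibullvariate(scale_hat, k_hat)
                               for _ in range(len(x))])
            for _ in range(n_boot)]
    return 2.0 * k_hat - sum(boot) / n_boot
```

The shape MLE is known to be biased upwards in small samples, so the correction typically shrinks the estimate slightly; the paper's method additionally accommodates right censoring and multiple populations.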

6.
We present a study of the robustness of adapting the sample size of a phase III trial on the basis of existing phase II data when the phase III effect size is lower than that of phase II. A criterion of clinical relevance for phase II results is applied in order to launch phase III, where data from phase II cannot be included in the statistical analysis. The adaptation consists of adopting the conservative approach to sample size estimation, which takes into account the variability of the phase II data. Several conservative sample size estimation strategies, Bayesian and frequentist, are compared with the calibrated optimal γ conservative strategy (COS), which is the best performer when the phase II and phase III effect sizes are equal. The overall power (OP) of these strategies and the mean square error (MSE) of their sample size estimators are computed under different scenarios, in the presence of the structural bias due to the lower phase III effect size, to evaluate the robustness of the strategies. When the structural bias is quite small (i.e., the ratio of the phase III to the phase II effect size is greater than 0.8), and when certain operating conditions for applying sample size estimation hold, COS can still provide acceptable results for planning phase III trials, even though the OP would be higher in the absence of bias.

The main results concern the introduction of a correction for balancing the structural bias, which affects only the sample size estimates and not the launch probabilities. In particular, the correction is based on a postulated value of the structural bias; it is therefore more intuitive and easier to use than corrections based on modifying the Type I and/or Type II error rates. Corrected conservative sample size estimation strategies are compared in the presence of a quite small bias. When the postulated correction is right, COS provides good OP and the lowest MSE. Moreover, the OPs of COS are even higher than those observed without bias, thanks to a higher launch probability and similar estimation performance. The structural bias can therefore be exploited to improve sample size estimation performance. When the postulated correction is smaller than necessary, COS is still the best performer and continues to work well. A larger-than-necessary correction should be avoided.

7.
We address an apparent gap in the applied terrorism literature by providing an estimate of the extent of under-reporting in transnational terrorist activity. The suggested method is computationally straightforward and takes into account the stochastic interactions between terrorism, polity, and press freedom in a manner that utilizes their sample properties. Applying this metric shows that under-reporting bias is indeed present and quite sizable.

8.
A particular concern of researchers in statistical inference is bias in parameter estimation. Maximum likelihood estimators are often biased, and for small sample sizes their first-order bias can be large enough to affect the efficiency of the estimator. There are different methods for reducing this bias. In this paper, we propose a new method yielding a modified maximum likelihood estimator for the shape parameter of two popular skew distributions, namely the skew-normal and the skew-t. We show that this estimator has lower asymptotic bias than the maximum likelihood estimator and is more efficient than those based on existing methods.

9.
We propose a new type of stochastic ordering that imposes a monotone tendency in the differences between one multinomial probability and a known standard one. An estimation procedure is proposed for the constrained maximum likelihood estimate, and the asymptotic null distribution is then derived for the likelihood ratio test statistic for testing equality of two multinomial distributions against the new stochastic ordering. An alternative test based on the Neyman modified minimum chi-square estimator is also discussed. These tests are illustrated with a set of heart disease data.

10.
Testing for stochastic ordering is of considerable importance when increasing doses of a treatment are being compared, but it has received much less attention in applications involving multivariate responses. We propose a permutation test against multivariate stochastic ordering. This test is distribution-free, and no assumption is made about the dependence relations among the variables. A comparative simulation study shows that the proposed solution exhibits good overall performance compared with existing tests that can be used for the same problem.
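As a toy illustration of permutation testing against a one-sided multivariate alternative, the sketch below uses the sum of coordinatewise mean differences as a directional statistic for two groups. The actual test proposed in the paper is built from a nonparametric combination of componentwise tests, so treat the statistic, the function name, and the two-group restriction as assumptions for illustration only.

```python
import random

def perm_test_dominance(x, y, n_perm=999, seed=0):
    """One-sided permutation p-value for the alternative that group y
    tends to be larger than group x in every coordinate; x and y are
    lists of equal-length vectors. Distribution-free: the null
    distribution is generated by pooling and reshuffling group labels."""
    rng = random.Random(seed)
    p = len(x[0])
    def stat(a, b):
        # sum over coordinates of (mean of b - mean of a)
        return sum(sum(row[j] for row in b) / len(b)
                   - sum(row[j] for row in a) / len(a)
                   for j in range(p))
    obs = stat(x, y)
    pooled = list(x) + list(y)
    exceed = 1  # count the observed statistic itself
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(x)], pooled[len(x):]) >= obs:
            exceed += 1
    return exceed / (n_perm + 1)
```

With a clear upward shift in every coordinate the p-value approaches its minimum of 1/(n_perm + 1); under the null it is approximately uniform on (0, 1).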

11.
The different average and marginal consumption propensities estimated from time series data constitute a classic puzzle of the theory of consumption. This article argues that if consumption and income possess a common stochastic trend (and thus are cointegrated), both the average propensity to consume (APC) and the marginal propensity to consume (MPC) will be consistent but biased in small samples. Upon correcting for this small sample bias, the puzzling discrepancies between the APC and the MPC estimated using annual data for the United States from 1897 to 1949 become substantially smaller. This supports an alternative resolution of the puzzle based on the theory of cointegration.

12.
There is a wide variety of stochastic ordering problems in which K groups (typically ordered with respect to time) are observed along with a (continuous) response. Interest may focus on finding the change-point group, i.e. the group at which an inversion of the trend of the variable under study is observed. A change point is not merely a maximum (or minimum) of the time-series function; a further requirement is that the trend be monotonically increasing before that point and monotonically decreasing afterwards. A suitable solution can be provided within a conditional approach, i.e. by considering a suitable nonparametric combination of dependent tests for simple stochastic ordering problems. The proposed procedure is very flexible and can be extended to trend and/or repeated-measures problems. Comparisons with the well-known Mack–Wolfe test for umbrella alternatives and with Page's test for trend problems with correlated data are investigated through simulations and examples.

13.
Current estimates of the price elasticity of meat demand among Chinese residents mostly rely on aggregate data and use unit values as a proxy for market prices, yet changes in product quality bias price elasticities estimated from unit values. In view of this, we use aggregate data on 984 samples from across the country to estimate various elasticities of urban residents' demand for pork, beef, mutton, and poultry, and to estimate the deviation between the unit-value elasticity and the true price elasticity. The results show that price elasticities estimated from unit values (unit-value elasticities) overstate the true price elasticities; as income rises, urban residents' demand for meat quality keeps increasing and the overstatement becomes even stronger, making the measurement of the unit-value elasticity bias increasingly important.

14.
A statistical test concerning the comparison of two agreements for dependent observations is studied. The concept of stochastic ordering plays an important role in defining an ordering of agreements. A one-sided likelihood ratio test for the equality of two agreements is proposed. This test is closely related to the test of marginal homogeneity against marginal stochastic ordering. A real example is analyzed for illustration.

15.
ApEn, approximate entropy, is a recently developed family of parameters and statistics quantifying regularity (complexity) in data, providing an information-theoretic quantity for continuous-state processes. We provide the motivation for the development of ApEn and indicate its superiority to the K-S entropy for statistical application and for discriminating both correlated stochastic and noisy deterministic processes. We study the variation of ApEn with input parameter choices, re-emphasizing that ApEn is a relative measure of regularity. We study the bias in the ApEn statistic and present evidence for asymptotic normality of the ApEn distributions, assuming weak dependence. We provide a new test of the hypothesis that an underlying time series is generated by i.i.d. variables, which does not require specifying a distribution. We introduce randomized ApEn, which derives an empirical significance probability that two processes differ, based on one data set from each process.

16.
In many applications researchers collect multivariate binary response data under two or more naturally ordered experimental conditions. In such situations one is often interested in using all binary outcomes simultaneously to detect an ordering among the experimental conditions. To make such comparisons we develop a general methodology for testing for the multivariate stochastic order between K ≥ 2 multivariate binary distributions. The proposed test uses order-restricted estimators which, according to our simulation study, are more efficient than the unrestricted estimators in terms of mean squared error. The power of the proposed test was compared with several alternatives, including procedures that combine individual univariate tests for order, such as union-intersection tests and a Bonferroni-based test, as well as the unrestricted Hotelling's T²-type test. Our simulations suggest that the proposed method competes well with these alternatives, and the gain in power is often substantial. The methodology is illustrated by applying it to two-year rodent cancer bioassay data obtained from the US National Toxicology Program (NTP). Supplemental materials are available online.

17.
This paper characterizes the finite-sample bias of the maximum likelihood estimator (MLE) in a reduced-rank vector autoregression and suggests two simulation-based bias corrections. One is a simple bootstrap implementation that approximates the bias at the MLE. The other is an iterative root-finding algorithm implemented using stochastic approximation methods. Both algorithms are shown to improve on the MLE, measured in terms of mean square error and mean absolute deviation. An illustration using US macroeconomic time series is given.

18.
Supremum score test statistics are often used to evaluate hypotheses with nuisance parameters that are unidentifiable under the null hypothesis. Although these statistics provide an attractive framework for addressing non-identifiability under the null hypothesis, little attention has been paid to their distributional properties in small to moderate sample sizes. When there are identifiable nuisance parameters under the null hypothesis, these statistics may behave erratically in realistic samples as a result of a non-negligible bias induced by substituting estimates for these nuisance parameters under the null hypothesis. In this paper, we propose an adjustment to the supremum score statistics that subtracts the expected bias from the score processes, and we show that this adjustment does not alter the limiting null distribution of the supremum score statistics. Using a simple example from the class of zero-inflated regression models for count data, we show empirically and theoretically that the adjusted tests are superior in terms of size and power. The practical utility of this methodology is illustrated using count data from HIV research.

19.
Generally, the semiclosed-form option pricing formula for complex financial models depends on unobservable factors such as stochastic volatility and jump intensity. A popular practice is to use an estimate of these latent factors to compute the option price. However, in many situations this plug-and-play approximation does not yield the appropriate price. This article examines this bias and quantifies its impacts. We decompose the bias into terms that are related to the bias on the unobservable factors and to the precision of their point estimators. The approximated price is found to be highly biased when only the history of the stock price is used to recover the latent states. This bias is corrected when option prices are added to the sample used to recover the states' best estimate. We also show numerically that such a bias is propagated on calibrated parameters, leading to erroneous values. The Canadian Journal of Statistics 48: 8–35; 2020 © 2019 Statistical Society of Canada

20.
In longitudinal data where the timing and frequency of outcome measurement may be associated with the value of the outcome, significant bias can occur. Previous results depended on correct specification of the outcome process and a somewhat unrealistic visit process model. In practice, these assumptions will never hold exactly, so it is important to understand the degree to which the results hold when they are violated, in order to guide practical use of the methods. This paper presents theory and simulation studies extending our previous work to more realistic visit process models, as well as to Poisson outcomes. We also assess the effects of several types of model misspecification. The estimated bias in these new settings generally mirrors the theoretical and simulation results of our previous work and provides confidence in using maximum likelihood methods in practice. Even when the assumptions about the outcome process did not hold, mixed effects models fitted by maximum likelihood produced at most small bias in the estimated regression coefficients, illustrating the robustness of these methods. This contrasts with generalised estimating equations approaches, for which bias increased in the settings of this paper. The analysis of data from a study of change in neurological outcomes following microsurgery for a brain arteriovenous malformation further illustrates the results.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号