Similar Articles (20 results)
1.
2.
For interval estimation of a proportion, coverage probabilities tend to be too large for “exact” confidence intervals based on inverting the binomial test, and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with the null rather than the estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval behaves similarly to the adjusted Wald interval obtained after adding two “successes” and two “failures” to the sample. In elementary courses, the score and adjusted Wald methods make it unnecessary to provide students with awkward sample-size guidelines.
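A minimal sketch of the three intervals compared above (Wald, Wilson score, and the "add two successes and two failures" adjusted Wald), assuming SciPy is available; the function name and example counts are ours, for illustration only:

```python
import numpy as np
from scipy.stats import norm

def proportion_intervals(x, n, level=0.95):
    """Wald, Wilson score, and adjusted Wald intervals for a proportion."""
    z = norm.ppf(1 - (1 - level) / 2)

    # Wald: p_hat +/- z * estimated standard error
    p = x / n
    wald = (p - z * np.sqrt(p * (1 - p) / n), p + z * np.sqrt(p * (1 - p) / n))

    # Wilson score: invert the score test (null standard error)
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    score = (center - half, center + half)

    # Adjusted Wald: add 2 successes and 2 failures, then use the Wald formula
    pa, na = (x + 2) / (n + 4), n + 4
    adj = (pa - z * np.sqrt(pa * (1 - pa) / na),
           pa + z * np.sqrt(pa * (1 - pa) / na))
    return wald, score, adj

print(proportion_intervals(3, 10))
```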

3.
The authors state new general results for computing Blaker's exact confidence interval limits for the usual one-parameter discrete distributions. Specific results for implementing an accurate and fast algorithm are made explicit for the binomial, negative binomial, Poisson, and hypergeometric models.
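For context, here is a naive grid-search sketch of Blaker's binomial interval via its acceptability function; this is the slow baseline construction, not the authors' fast algorithm, and the function names are ours:

```python
import numpy as np
from scipy.stats import binom

def blaker_acceptability(p, x, n):
    # Smaller tail probability plus the largest attainable probability of
    # the opposite tail that does not exceed it (Blaker's construction).
    p1 = binom.sf(x - 1, n, p)                         # P(X >= x)
    p2 = binom.cdf(x, n, p)                            # P(X <= x)
    a1 = p1 + binom.cdf(binom.ppf(p1, n, p) - 1, n, p)
    a2 = p2 + binom.sf(binom.ppf(1 - p2, n, p), n, p)
    return min(a1, a2)

def blaker_interval(x, n, level=0.95, gridsize=5001):
    # Keep every p whose acceptability exceeds alpha; report the extremes.
    alpha = 1 - level
    grid = np.linspace(1e-6, 1 - 1e-6, gridsize)
    inside = [p for p in grid if blaker_acceptability(p, x, n) >= alpha]
    return inside[0], inside[-1]

print(blaker_interval(3, 10))
```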

4.
The problem of interval estimation of the stress–strength reliability involving two independent Weibull distributions is considered. An interval estimation procedure based on the generalized variable (GV) approach is given when the shape parameters are unknown and arbitrary. The coverage probabilities of the GV approach are evaluated by Monte Carlo simulation. Simulation studies show that the proposed GV approach is very satisfactory even for small samples. For the case of equal shape parameters, it is shown that the generalized confidence limits are exact. Some available asymptotic methods for the case of equal shape parameters are described and their coverage probabilities are evaluated using Monte Carlo simulation. Simulation studies indicate that no asymptotic approach based on the likelihood method is satisfactory even for large samples. Applicability of the GV approach to censored samples is also discussed. The results are illustrated using an example.
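To fix ideas, a sketch of the estimand itself (not the GV procedure): the stress–strength reliability R = P(X < Y) for two Weibulls, checked by Monte Carlo against the closed form that holds only when the shape parameters are equal. The shape and scale values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 2.0                          # common shape parameter (assumed equal here)
a, b = 1.5, 2.5                  # scales of stress X and strength Y

x = a * rng.weibull(k, 10**6)    # numpy draws scale-1 Weibulls; rescale
y = b * rng.weibull(k, 10**6)

mc = np.mean(x < y)              # Monte Carlo estimate of R = P(X < Y)
exact = b**k / (a**k + b**k)     # closed form, valid only for equal shapes
print(mc, exact)                 # the two should agree to ~3 decimals
```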

5.
ARMA–GARCH models are widely used to model the conditional mean and conditional variance dynamics of returns on risky assets. Empirical results suggest heavy-tailed innovations with positive extreme value index for these models, so one may use extreme value theory to estimate extreme quantiles of the residuals. Using weak convergence of the weighted sequential tail empirical process of the residuals, we derive the limiting distribution of extreme conditional Value-at-Risk (CVaR) and conditional expected shortfall (CES) estimates for a wide range of extreme value index estimators. To construct confidence intervals, we propose to use self-normalization. This leads to improved coverage relative to the normal approximation, at the cost of slightly wider confidence intervals. A data-driven choice of the number of upper order statistics used in the estimation is suggested and shown to work well in simulations. An application to stock index returns documents the improvement in CVaR and CES forecasts.
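One standard member of the family of extreme value index estimators is the Hill estimator, combined with Weissman extrapolation for an extreme quantile of the residuals. A minimal sketch on simulated heavy-tailed residuals (the paper's self-normalized intervals are not reproduced here, and the function name is ours):

```python
import numpy as np

def hill_quantile(sample, k, p):
    """Hill estimate of the extreme value index and the Weissman
    estimate of the upper p-quantile, using the k largest observations.
    Assumes a heavy right tail and a positive threshold X_(n-k)."""
    x = np.sort(sample)
    n = x.size
    threshold = x[n - k - 1]                          # (k+1)-th largest value
    gamma = np.mean(np.log(x[n - k:] / threshold))    # Hill estimator
    return gamma, threshold * (k / (n * p)) ** gamma  # Weissman quantile

rng = np.random.default_rng(0)
resid = rng.standard_t(df=4, size=2000)      # heavy-tailed "residuals"
print(hill_quantile(resid, k=100, p=0.001))  # gamma should be near 1/4
```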

6.
The good performance of logit confidence intervals for the odds ratio with small samples is well known, unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio equals 1 because of the assumed independence of the two samples. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed in order to estimate the size of a closed population under single capture–recapture estimation. The transformed logit interval, after adding .5 to each observed count before computation, has actual coverage probabilities near the nominal level even for small populations and for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Given that the .5 transformed logit interval is very simple to compute and performs well, it is a suitable choice for most users of the single capture–recapture method.
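A sketch of one plausible implementation of this idea; this is our reading of the construction (the 2×2 capture table with 0.5 added to each cell, and population sizes N retained while the logit z-statistic for the odds ratio stays within ±z), not the paper's exact formulas:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def logit_ci_population(n1, n2, m, level=0.95):
    """CI for closed-population size N from a single capture-recapture
    experiment: n1 marked, n2 caught in the second sample, m recaptured.
    Requires m >= 1. Cells carry the 0.5 correction."""
    z = norm.ppf(1 - (1 - level) / 2)
    a, b, c = m + 0.5, n1 - m + 0.5, n2 - m + 0.5

    def stat(N):
        d = N - n1 - n2 + m + 0.5            # unobserved cell, grows with N
        lor = np.log(a * d / (b * c))        # log odds ratio of the table
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)
        return lor / se                      # increasing in N near the root

    lo_N = n1 + n2 - m + 1e-9                # smallest feasible N
    hi_N = 1e4 * n1 * n2 / m                 # generous upper bracket
    return (brentq(lambda N: stat(N) + z, lo_N, hi_N),
            brentq(lambda N: stat(N) - z, lo_N, hi_N))

print(logit_ci_population(n1=100, n2=120, m=30))  # point estimate is 400
```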

7.
We consider interval-valued time series, that is, series resulting from collecting real intervals as an ordered sequence through time. Since the lower and upper bounds of the observed intervals at each time point are values of the same variable, they are naturally related. We propose modeling interval time series with space–time autoregressive models and, based on the process appropriate for the interval bounds, we derive the model for the intervals' center and radius. A simulation study and an application to daily wind-speed data from different meteorological stations in Ireland illustrate that the proposed approach is appropriate and useful.
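A small sketch of the center/radius representation on simulated interval data, with a plain first-order VAR fitted by least squares standing in for the paper's space–time autoregressive specification (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 200
center = np.cumsum(rng.normal(0, 0.1, T)) + 10   # latent center path
radius = np.abs(rng.normal(1.0, 0.2, T))         # positive half-widths
lower, upper = center - radius, center + radius  # observed interval bounds

# Center/radius transform of the observed bounds
c = (lower + upper) / 2
r = (upper - lower) / 2

# First-order VAR for (c_t, r_t), fitted equation-by-equation via least squares
Y = np.column_stack([c[1:], r[1:]])
Z = np.column_stack([np.ones(T - 1), c[:-1], r[:-1]])
A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(A)   # rows: intercept, coefficient on lagged c, coefficient on lagged r
```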

8.
An explicit decomposition of the Pearson–Fisher and Dzhaparidze–Nikulin tests into asymptotically independent components, each distributed as chi-squared with one degree of freedom, is presented. The decomposition is formally the same for both tests and is valid for any partitioning of the sample space. Vector-valued tests are considered, whose components may be not only different scalar tests based on the same sample, but also scalar tests based on components, or groups of components, of the same statistic. Numerical examples illustrating the idea are presented.

9.
One of the most famous controversies in the history of statistics concerns the number of degrees of freedom of a chi-square test. In 1900, Pearson introduced the chi-square test for goodness of fit without recognizing that the degrees of freedom depend on the number of parameters estimated under the null hypothesis. Yule tried an ‘experimental’ approach to check the results by a short series of ‘experiments’. Nowadays, an open-source language such as R makes it possible to empirically check the adequacy of Pearson's arguments. Pearson paid crucial attention to the relative error, which he stated ‘will, as a rule, be small’. However, this point is fallacious, as the simulations carried out with R make evident. The simulations concentrate on 2×2 tables, where the fallacy of the argument is most evident; moreover, the 2×2 table is one of the most frequently used designs in applied research.
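The paper works in R; an equivalent sketch of the kind of simulation described, in Python: under independence, the Pearson statistic for a 2×2 table with estimated margins follows chi-squared with (r-1)(c-1) = 1 degree of freedom (Fisher's correction), not rc - 1 = 3 (Pearson's original claim):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1900)
n, reps = 50, 20000
stats = []
for _ in range(reps):
    row = rng.binomial(1, 0.4, n)            # two independent binary traits,
    col = rng.binomial(1, 0.6, n)            # so the independence null holds
    obs = np.array([[np.sum((row == i) & (col == j)) for j in (0, 1)]
                    for i in (0, 1)])
    exp = np.outer(obs.sum(1), obs.sum(0)) / n
    if (exp > 0).all():                      # skip degenerate tables
        stats.append(((obs - exp) ** 2 / exp).sum())
stats = np.array(stats)

# df = 1 gives rejection rates near 5%; df = 3 is far too conservative
for df in (1, 3):
    print(df, np.mean(stats > chi2.ppf(0.95, df)))
```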

10.
Lifetime Data Analysis: We rigorously extend the widely used wild bootstrap resampling technique to the multivariate Nelson–Aalen estimator under Aalen's multiplicative intensity...

11.
12.
13.
A simple procedure for establishing the minimum sample size in χ² goodness-of-fit tests is presented. Samples of this size automatically satisfy Yarnold's criterion.
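A sketch of such a search, assuming the common statement of Yarnold's criterion (with s cells of which r have expected count below 5, the minimum expectation should be at least 5r/s); that reading is our assumption and should be checked against Yarnold (1970):

```python
def min_sample_size_yarnold(probs):
    """Smallest n whose expected counts n*p_i satisfy Yarnold's criterion,
    taken here as: min expectation >= 5*r/s, where s = number of cells and
    r = number of cells with expectation below 5 (our reading of the rule)."""
    s = len(probs)
    n = s
    while True:
        e = [n * p for p in probs]
        r = sum(v < 5 for v in e)
        if min(e) >= 5 * r / s:
            return n
        n += 1

print(min_sample_size_yarnold([0.5, 0.3, 0.15, 0.05]))  # returns 34
```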

14.
The homotopy perturbation method is designed to obtain a quick and accurate solution of the Black–Scholes equation with boundary conditions for a European option pricing problem. The problem of pricing a European option can be cast as a partial differential equation; the analytical solution is calculated in the form of a convergent power series with easily computable components.
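The series such a method produces can be checked against the classical closed-form Black–Scholes price, sketched here for a European call (parameter values are illustrative):

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # about 10.45
```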

15.
We analyze left-truncated and right-censored (LTRC) data using the additive-multiplicative Cox–Aalen model proposed by Scheike and Zhang (2002), which extends both the Cox regression model and the additive Aalen model. Based on the conditional likelihood function, we derive weighted least-squares (WLS) estimators for the regression parameters and cumulative intensity functions of the model. The estimators are shown to be consistent and asymptotically normal. A simulation study is conducted to investigate the performance of the proposed estimators.

16.
The Frisch–Waugh–Lovell (FWL) (partitioned regression) theorem is essential in regression analysis, partly because it is quite useful for deriving theoretical results. The lasso and ridge regressions, both penalized least-squares methods, have become popular statistical techniques. This article shows that the FWL theorem remains valid for these penalized least-squares regressions: the covariates corresponding to unpenalized regression parameters can be projected out. Some further results related to the FWL theorem in such penalized least-squares regressions are also presented.
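A numerical sketch of this result for ridge regression with an unpenalized intercept: solving the full penalized normal equations (zero penalty on the intercept) gives the same slopes as first projecting the intercept out, i.e., centering y and X, and then running an ordinary ridge. The data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 100, 3, 2.5
X = rng.normal(size=(n, p))
y = 1.0 + X @ np.array([0.5, -1.0, 2.0]) + rng.normal(size=n)

# Direct solve: intercept left unpenalized via a zero in the penalty matrix
Z = np.column_stack([np.ones(n), X])
D = np.eye(p + 1)
D[0, 0] = 0.0
coef_full = np.linalg.solve(Z.T @ Z + lam * D, Z.T @ y)

# FWL route: project the intercept out (center y and X), then plain ridge
Xc, yc = X - X.mean(0), y - y.mean()
coef_fwl = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)

print(np.allclose(coef_full[1:], coef_fwl))   # True: the slopes coincide
```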

17.
The problem of obtaining the maximum probability 2 × c contingency table with fixed marginal sums R = (R1, R2) and C = (C1, …, Cc) under row and column independence is equivalent to the problem of obtaining the maximum probability points (mode) of the multivariate hypergeometric distribution MH(R1; C1, …, Cc). The simplest and most general method for these problems is that of Joe (1988, Extreme probabilities for contingency tables under row and column independence with application to Fisher's exact test, Commun. Statist. Theory Meth. 17(11):3677–3685). In this article we study a family of MH distributions in which a connection relationship is defined between its elements. Based on this family and on a characterization of the mode described in Requena and Martín (2000, Characterization of maximum probability points in the multivariate hypergeometric distribution, Statist. Probab. Lett. 50:39–47), we develop a new method for the above problems that is completely general, non-recursive, very simple in practice, and more efficient than Joe's method. Moreover, under weak conditions (which almost always hold), the proposed method provides a simple explicit solution to these problems. In addition, the well-known expression for the mode of a hypergeometric distribution is just a particular case of the method in this article.
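As a baseline for small cases, here is a brute-force enumeration of the mode of MH(R1; C1, …, Cc); this is exactly the naive search that explicit methods like the one above avoid, and the function name is ours:

```python
from math import comb

def mh_mode(R1, C):
    """Brute-force mode of MH(R1; C1,...,Cc): maximize
    prod_j C(Cj, kj) / C(sum(C), R1) over k with sum(k) = R1, 0 <= kj <= Cj."""
    denom = comb(sum(C), R1)
    best, best_p = None, -1.0

    def extend(j, left, ks, num):
        nonlocal best, best_p
        if j == len(C) - 1:                   # last cell count is forced
            if 0 <= left <= C[-1]:
                prob = num * comb(C[-1], left) / denom
                if prob > best_p:
                    best, best_p = ks + [left], prob
            return
        lo = max(0, left - sum(C[j + 1:]))    # leave room for the remaining cells
        for k in range(lo, min(C[j], left) + 1):
            extend(j + 1, left - k, ks + [k], num * comb(C[j], k))

    extend(0, R1, [], 1)
    return best, best_p

print(mh_mode(7, [4, 5, 6]))   # mode of MH(7; 4, 5, 6) and its probability
```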

18.
The estimation of the incremental cost–effectiveness ratio (ICER) has received increasing attention recently. It is expressed as the ratio of the change in costs of a therapeutic intervention to the change in its effects. Despite the intuitive interpretation of the ICER as the additional cost per additional unit of benefit, it is challenging to estimate the distribution of a ratio of two stochastically dependent quantities. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost–effectiveness ratio (CER), a zero intercept is assumed in the bivariate normal regression. For equal sample sizes, the Iman–Conover algorithm is applied to construct the desired variance–covariance matrix of two random bivariate samples, and the estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. The bootstrapping method with the Iman–Conover algorithm is employed for unequal sample sizes. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
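For orientation, a plain percentile-bootstrap sketch of an ICER interval; this is the generic approach, not the paper's Iman–Conover regression-type estimator, and all names and simulated values are ours:

```python
import numpy as np

def icer_percentile_ci(cost_t, eff_t, cost_c, eff_c, B=5000, seed=1):
    """Percentile-bootstrap interval for
    ICER = (mean cost difference) / (mean effect difference).
    Cost and effect are resampled jointly within each arm, preserving their
    dependence. Assumes the effect difference stays away from zero."""
    rng = np.random.default_rng(seed)
    out = np.empty(B)
    for i in range(B):
        jt = rng.integers(0, len(cost_t), len(cost_t))
        jc = rng.integers(0, len(cost_c), len(cost_c))
        out[i] = ((cost_t[jt].mean() - cost_c[jc].mean())
                  / (eff_t[jt].mean() - eff_c[jc].mean()))
    return np.quantile(out, [0.025, 0.975])

rng = np.random.default_rng(0)
ct, et = rng.normal(5000, 800, 60), rng.normal(2.0, 0.5, 60)   # treatment arm
cc, ec = rng.normal(4000, 800, 60), rng.normal(1.5, 0.5, 60)   # control arm
print(icer_percentile_ci(ct, et, cc, ec))
```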

19.
The pretest–posttest design is widely used to investigate the effect of an experimental treatment in biomedical research. The treatment effect may be assessed using analysis of variance (ANOVA) or analysis of covariance (ANCOVA). The normality assumption for parametric ANOVA and ANCOVA may be violated due to outliers and skewness of the data. Nonparametric methods, robust statistics, and data transformation may be used to address nonnormality, but these four statistical approaches have not been compared simultaneously in terms of empirical type I error probability and statistical power. We studied 13 ANOVA and ANCOVA models based on the parametric approach, rank- and normal-score-based nonparametric approaches, Huber M-estimation, and the Box–Cox transformation, using normal data with and without outliers and lognormal data. We found that ANCOVA models preserve the nominal significance level better and are more powerful than their ANOVA counterparts when the dependent variable and covariate are correlated. Huber M-estimation is the most liberal method. Nonparametric ANCOVA, especially ANCOVA based on the normal score transformation, preserves the nominal significance level, has good statistical power, and is robust to the data distribution.
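A minimal contrast of the two parametric analyses on simulated pretest–posttest data, assuming pandas and statsmodels are available; the effect size and noise levels are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 40
pre = rng.normal(50, 10, n)
group = np.repeat(["control", "treated"], n // 2)
post = pre * 0.8 + (group == "treated") * 5 + rng.normal(0, 5, n)
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# ANOVA on post scores ignores the covariate; ANCOVA adjusts for it,
# which typically sharpens the treatment comparison when pre and post correlate
anova = smf.ols("post ~ C(group)", data=df).fit()
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(anova.pvalues["C(group)[T.treated]"],
      ancova.pvalues["C(group)[T.treated]"])
```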

20.
The problem of constructing approximate confidence limits for a proportion parameter of the Pólya distribution is discussed. Three different methods for determining approximate one-sided and two-sided confidence limits for that parameter are proposed and compared. Confidence intervals for the parameters of the binomial and hypergeometric distributions arise as particular cases of these confidence bounds.
