Similar articles
20 similar articles found.
1.
The generalized bootstrap is a parametric bootstrap method in which the underlying distribution function is estimated by fitting a generalized lambda distribution to the observed data. In this study, the generalized bootstrap is compared with the traditional parametric and non-parametric bootstrap methods in estimating quantiles at different levels, especially high quantiles. The performance of the three methods is evaluated in terms of the coverage rate, average interval width and standard deviation of the width of the 95% bootstrap confidence intervals. Simulation results show that the generalized bootstrap performs better overall than the non-parametric bootstrap in high quantile estimation.
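A minimal sketch of the non-parametric percentile-bootstrap interval that serves as the baseline comparison; the generalized bootstrap would replace the resampling step with draws from a generalized lambda distribution fitted to the data (the GLD fitting step is omitted here). The skewed test sample, quantile level and number of replications are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # skewed sample (assumption)

def bootstrap_quantile_ci(x, q=0.95, B=2000, level=0.95, rng=rng):
    """Percentile bootstrap confidence interval for the q-th quantile.

    Non-parametric version: resample the observed data with replacement.
    The generalized bootstrap of the abstract would instead draw each
    bootstrap sample from a generalized lambda distribution fitted to x
    (the fitting step is not shown here).
    """
    n = len(x)
    stats = np.empty(B)
    for b in range(B):
        stats[b] = np.quantile(rng.choice(x, size=n, replace=True), q)
    alpha = 1.0 - level
    return tuple(np.quantile(stats, [alpha / 2, 1.0 - alpha / 2]))

print("95% CI for the 0.95 quantile:", bootstrap_quantile_ci(x))
```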

2.
In this work, we analyze the long-range dependence parameter for a nucleotide sequence under several different transformations. The long-range dependence parameter is estimated by the approximated maximum likelihood method, by a novel estimator based on spectral envelope theory, by a regression method based on the periodogram function, and by the detrended fluctuation analysis method. We study the length distribution of coding and noncoding regions for all Homo sapiens chromosomes available from the European Bioinformatics Institute. The tail decay rate parameter is estimated by the Hill estimator α̂. We show that the tail decay rate is greater than 2 for coding regions, while for almost all noncoding regions it is less than 2.
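A minimal sketch of the Hill estimator α̂ of the tail decay rate mentioned in the abstract; the choice of k (the number of upper order statistics used) and the Pareto test sample are illustrative assumptions.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha from the k largest values.

    alpha_hat = 1 / mean(log(X_(i) / X_(k+1))), i = 1..k,
    where X_(1) >= ... >= X_(n) are the descending order statistics.
    """
    x = np.sort(np.asarray(x))[::-1]        # descending order
    logs = np.log(x[:k]) - np.log(x[k])     # log-excesses over X_(k+1)
    return 1.0 / logs.mean()

rng = np.random.default_rng(1)
sample = rng.pareto(a=1.5, size=5000) + 1.0  # Pareto tail, true alpha = 1.5
print(hill_estimator(sample, k=200))         # ~1.5; < 2 => infinite variance
```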

3.
Frequency tables are often constructed on intervals of irregular width. When plotted as bar charts, the underlying true density information may be quite distorted. The majority of introductory statistics texts recommend tabulating data into intervals of equal width, but seldom caution about the consequences of failing to do so. An occasional introductory text correctly emphasizes that area rather than frequency should be plotted. Nevertheless, the correctly scaled density figure is often visually less informative than one might expect, with wide bins at constant height. In many cases, the rightmost bin interval has no well-defined end point, making its depiction somewhat arbitrary. In this note, we introduce a regular histogram approximation that matches the frequencies and also minimizes a roughness criterion for visual and exploratory appeal. The resulting estimate can reveal the density structure much more clearly. We also formulate an alternative criterion that explicitly takes account of the uncertainty in the bin frequencies.
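The area-versus-frequency point can be shown directly: with bins of irregular width, bar heights must be frequencies divided by bin widths (a density), not raw frequencies. A minimal sketch with assumed bin edges; the roughness-minimizing regular histogram approximation proposed in the note itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=1000)

# Irregular bin edges with a wide right-most bin (illustrative choice).
edges = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 8.0])
counts, _ = np.histogram(x, bins=edges)

widths = np.diff(edges)
density = counts / (counts.sum() * widths)  # correct height: area = rel. frequency

for lo, hi, c, d in zip(edges[:-1], edges[1:], counts, density):
    print(f"[{lo:4.2f}, {hi:4.2f}): n = {c:4d}, density height = {d:.3f}")
```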

4.
The use of the ARDL approach in estimating virtual exchange rates in India
This paper applies the autoregressive distributed lag approach to cointegration analysis in estimating the 'virtual exchange rate' (VER) in India. The VER is the rate that would have prevailed if unconstrained import demand had equalled the constraint imposed by foreign exchange rationing, and it is used to approximate the 'price' of rationed foreign exchange reserves. We highlight the shortcomings of the existing literature in approximating equilibrium exchange rates in a less developed country such as India and propose the VER approach for equilibrium rates, which uses information from an estimated structural model. In this relationship, the black market real exchange rate (E_U) is the dependent variable, and the real official exchange rate (E_O), the ratio (I) of the foreign (r*) to the domestic (r) interest rate, and official forex reserves (Q) are explanatory variables. In our estimation, the VERs are higher than E_O by about 10% in the short run and 16% in the long run.

5.
By constructing a theoretical bargaining model, this paper shows that the party whose concessions are less costly obtains a smaller share of the gains from negotiation. Because of soft budget constraints and relatively severe principal-agent problems, concessions made by state-owned enterprises impose no corresponding loss on their agents, so state-owned enterprises have correspondingly weaker bargaining power. Using 2006 customs import and export transaction data and the two-tier stochastic frontier analysis (two-tier SFA) method to measure bargaining reservation values, this paper estimates the international bargaining power of Chinese state-owned enterprises. The results show that: (1) the bargaining power of state-owned enterprises is lower than that of private and foreign-funded enterprises; (2) the bargaining power of state-owned enterprises is also lower than that of their import and export trading partners, with import prices 3.69% above fair prices and export prices 6.17% below fair prices. Only by continuing to advance market-oriented reform can China obtain fair trade gains in international markets.

6.
Abstract. The focus of this article is on simultaneous confidence bands over a rectangular covariate region for a linear regression model with k > 1 covariates, for which only conservative or approximate confidence bands are available in the statistical literature stretching back to Working & Hotelling (J. Amer. Statist. Assoc., 1929, 24, 73–85). Formulas for the simultaneous confidence levels of the hyperbolic and constant width bands are provided. These involve only a k-dimensional integral; it is unlikely that the simultaneous confidence levels can be expressed as an integral of dimension less than k. These formulas allow the construction, for the first time, of exact hyperbolic and constant width confidence bands for at least small k (>1) by using numerical quadrature. Comparison between the hyperbolic and constant width bands is then addressed under both the average width and minimum volume confidence set criteria. It is observed that the constant width band can be drastically less efficient than the hyperbolic band when k > 1. Finally, it is pointed out how the methods given in this article can be applied to more general regression models, such as fixed-effect or random-effect generalized linear regression models.

7.
A number of statistical tests have been recommended over the last twenty years for assessing the randomness of long binary strings used in cryptographic algorithms. Several of these tests include methods of examining subblock patterns: the uniformity test, the universal test and the repetition test. The effectiveness of these tests is compared on the basis of subblock length, limitations on data requirements, and power in detecting deviations from randomness. Due to the complexity of the test statistics, the power functions are estimated by simulation. The results show that for small subblocks the uniformity test is more powerful than the universal test, and that there is some doubt about the parameters of the hypothesised distribution of the universal test statistic. For larger subblocks, the repetition test is the most effective: it requires far less data than either of the other two tests and is efficient in detecting deviations from randomness in binary strings.
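A minimal sketch of a subblock uniformity test of the kind compared in the paper: the string is cut into non-overlapping subblocks of length m and the 2^m pattern counts are tested against uniformity with a chi-square statistic. The exact test definitions in the paper may differ; the subblock length and string length here are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def uniformity_test(bits, m=4):
    """Chi-square test that all 2^m subblock patterns are equally likely."""
    n_blocks = len(bits) // m
    blocks = np.asarray(bits[: n_blocks * m]).reshape(n_blocks, m)
    # Interpret each subblock as an integer pattern in [0, 2^m).
    patterns = blocks @ (1 << np.arange(m)[::-1])
    counts = np.bincount(patterns, minlength=2 ** m)
    expected = n_blocks / 2 ** m
    stat = ((counts - expected) ** 2 / expected).sum()
    pval = chi2.sf(stat, df=2 ** m - 1)
    return stat, pval

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=20000)
print(uniformity_test(bits, m=4))   # large p-value expected for random bits
```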

8.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and bootstrapped HT are also compared by Monte Carlo simulations. At the null hypothesis and a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of nominal size; for nominal sizes less than or equal to 0.025, the coverage error reduction is between 30% and 80% of nominal size. For non-null alternatives, the power of the asymptotic HT spuriously increases by over 70% of the correct power for nominal sizes less than or equal to 0.025; the bootstrapped HT reduces this overrejection to less than one quarter of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not undermine the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible if the heteroscedasticity-robust approach is used in a homoscedastic panel data model.

9.
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks.

10.
This article presents a constrained maximization of the Shapiro–Wilk W statistic for estimating the parameters of the Johnson S_B distribution. The gradient of the W statistic with respect to the minimum and range parameters is used within a quasi-Newton framework to achieve a fit for all four parameters. The method is evaluated with measures of bias and precision using pseudo-random samples from three different S_B populations. The population means were estimated with an average relative bias of less than 0.1% and the population standard deviations with less than 4.0% relative bias. The methodology appears promising as a tool for fitting this sometimes difficult distribution.
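A minimal sketch of the underlying idea, assuming the usual Johnson S_B transform z = γ + δ·log((x − ξ)/(ξ + λ − x)): for trial values of the minimum ξ and range λ the transformed data are scored by the Shapiro–Wilk W, (ξ, λ) are chosen to maximize W, and γ and δ then follow from the mean and standard deviation of the transformed values. A derivative-free optimizer stands in for the article's gradient-based quasi-Newton scheme, and the starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import shapiro

rng = np.random.default_rng(4)
# Synthetic Johnson S_B sample: x = xi + lam / (1 + exp(-(z - gamma)/delta)).
z = rng.standard_normal(500)
xi_true, lam_true, gamma_true, delta_true = 10.0, 5.0, 0.5, 1.2
x = xi_true + lam_true / (1.0 + np.exp(-(z - gamma_true) / delta_true))

def neg_w(params, x):
    xi, lam = params
    if xi >= x.min() or xi + lam <= x.max():
        return 1.0                      # infeasible: support must contain data
    y = np.log((x - xi) / (xi + lam - x))
    return -shapiro(y).statistic        # maximize W <=> minimize -W

x0 = np.array([x.min() - 0.5, (x.max() - x.min()) + 1.0])  # assumed start
res = minimize(neg_w, x0, args=(x,), method="Nelder-Mead")
xi_hat, lam_hat = res.x
y = np.log((x - xi_hat) / (xi_hat + lam_hat - x))   # y ~ N(-gamma/delta, 1/delta^2)
delta_hat = 1.0 / y.std(ddof=1)
gamma_hat = -y.mean() * delta_hat
print(xi_hat, lam_hat, gamma_hat, delta_hat)
```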

11.
We develop a theoretical model based on efficient bargaining, where both log outside productivity and log productivity in the current job follow a random walk. This setting allows the application of real option theory. We derive the efficient worker-firm separation rule. We show that wage data from completed job spells are uninformative about the true tenure profile. The model is estimated on the Panel Study of Income Dynamics. It fits the observed distribution of job tenures well. Selection of favorable random walks can account for the concavity in tenure profiles. About 80% of the estimated wage returns to tenure is due to selectivity in the realized outside productivities.

12.
The standard deviation of the average run length (SDARL) is an important metric for studying the performance of control charts with estimated in-control parameters. Only a few studies in the literature, however, have considered this measure when evaluating control chart performance. The current study compares the in-control performance of three phase II simple linear profile monitoring approaches, namely those of Kang and Albin (2000), Kim et al. (2003), and Mahmoud et al. (2010). The comparison is performed under the assumption of estimated parameters using the SDARL metric. In general, the simulation results show that the method of Kim et al. (2003) has better overall statistical performance than the competing methods in terms of SDARL values. Some approaches recommended solely on the basis of the usual average run length properties can have poor SDARL performance.
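A minimal sketch of how an SDARL value is obtained by simulation, shown for a Shewhart X-bar chart with estimated parameters rather than for the linear-profile methods compared in the article: for each simulated Phase I data set the conditional in-control ARL given the estimated limits is computed exactly, and the standard deviation is taken across Phase I realizations. The values of m, n and L are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def sdarl_xbar(m=25, n=5, L=3.0, reps=5000, rng=None):
    """Mean ARL and SDARL of an X-bar chart whose limits are estimated
    from m Phase I subgroups of size n. True in-control model: N(0, 1)."""
    rng = rng or np.random.default_rng(5)
    arls = np.empty(reps)
    for r in range(reps):
        phase1 = rng.standard_normal((m, n))
        mu_hat = phase1.mean()
        # Pooled within-subgroup SD as the sigma estimate (one common choice).
        sigma_hat = np.sqrt(phase1.var(axis=1, ddof=1).mean())
        ucl = mu_hat + L * sigma_hat / np.sqrt(n)
        lcl = mu_hat - L * sigma_hat / np.sqrt(n)
        se = 1.0 / np.sqrt(n)           # true SD of Xbar under N(0, 1)
        p = norm.sf(ucl / se) + norm.cdf(lcl / se)  # conditional signal prob.
        arls[r] = 1.0 / p               # conditional in-control ARL
    return arls.mean(), arls.std(ddof=1)

print("ARL mean, SDARL:", sdarl_xbar())
```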

13.
Exact simultaneous confidence bands (SCBs) for a polynomial regression model are available only in some special situations. In this paper, simultaneous confidence levels for both hyperbolic and constant width bands for a polynomial function over a given interval are expressed as multidimensional integrals. The dimension of these integrals is equal to the degree of the polynomial. Hence the values can be calculated quickly and accurately via numerical quadrature provided that the degree of the polynomial is small (e.g. 2 or 3). This allows the construction of exact SCBs for quadratic and cubic regression functions over any given interval and for any given design matrix. Quadratic and cubic regressions are frequently used to characterise dose response relationships in addition to many other applications. Comparison between the hyperbolic and constant width bands under both the average width and minimum volume confidence set criteria shows that the constant width band can be much less efficient than the hyperbolic band. For hyperbolic bands, comparison between the exact critical constant and conservative or approximate critical constants indicates that the exact critical constant can be substantially smaller than the conservative or approximate critical constants. Numerical examples from a dose response study are used to illustrate the methods.
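The paper evaluates the simultaneous confidence level as a low-dimensional integral via numerical quadrature; a simulation-based stand-in for the same critical constant conveys the idea. For a quadratic fit, the hyperbolic band ŷ(x) ± c·σ̂·√(xᵀ(XᵀX)⁻¹x) is exact over an interval when c is the (1 − α) quantile of the supremum over the interval of |xᵀZ|/(s·√(xᵀ(XᵀX)⁻¹x)), with Z ~ N(0, (XᵀX)⁻¹) and s an independent √(χ²ᵥ/ν). The design, interval and grid below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Quadratic design on assumed dose levels, replicated (illustrative).
doses = np.repeat(np.linspace(0.0, 10.0, 6), 4)
X = np.column_stack([np.ones_like(doses), doses, doses ** 2])
XtX_inv = np.linalg.inv(X.T @ X)
nu = X.shape[0] - X.shape[1]                   # error degrees of freedom

# Grid over the interval on which the band is required.
xg = np.linspace(0.0, 10.0, 201)
G = np.column_stack([np.ones_like(xg), xg, xg ** 2])
denom = np.sqrt(np.einsum("ij,jk,ik->i", G, XtX_inv, G))  # sqrt(x'(X'X)^-1 x)

# Simulate sup_x |x'Z| / (s * sqrt(x'(X'X)^-1 x)).
chol = np.linalg.cholesky(XtX_inv)
reps = 20_000
sups = np.empty(reps)
for r in range(reps):
    Z = chol @ rng.standard_normal(3)          # Z ~ N(0, (X'X)^-1)
    s = np.sqrt(rng.chisquare(nu) / nu)        # sigma_hat / sigma
    sups[r] = np.max(np.abs(G @ Z) / denom) / s

c_exact = np.quantile(sups, 0.95)
print(f"simulated critical constant for a 95% hyperbolic band: {c_exact:.3f}")
```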

14.
What is the optimal target range for China's inflation rate?
白仲林, 赵亮. 《统计研究》 2011, 28(6): 6-10
Abstract: This paper first proposes a two-stage pooled least squares (2SPOLS) estimation method for dynamic panel threshold regression models. Then, based on panel data for 29 Chinese provinces, municipalities and autonomous regions over 1978-2008, an empirical analysis of the relationship between inflation and economic growth in China finds that, to a certain extent, the effect of the inflation rate on the growth rate exhibits a 'double-threshold effect', with threshold values of 3.2% and 15.7%. When the inflation rate lies in (0%, 3.2%], mild inflation exerts a 'Tobin effect' on economic growth. When the inflation rate exceeds 3.2%, inflation exerts an 'anti-Tobin effect' that hampers growth; in particular, once the inflation rate rises above 15.7%, severe inflation seriously impedes the 'soft expansion' of the economy. The optimal target range for China's inflation rate is therefore (0%, 3.2%].

15.
Using nonparametric kernel smoothing, yield-loss distributions are fitted to the historical per-unit-area yields of three crops (rice, maize and soybean) in Liaoning Province, Heilongjiang Province and Dalian; for comparison, the regional crop yield distributions are also fitted with the traditional normal density. On the basis of the fitted loss distributions, pure premium rates for regional crop yield insurance are calculated at different coverage levels. The calculations show that the pure premium rates derived under the normal density are all lower than those derived under the nonparametric kernel density: the normal approach underestimates crop yield risk. At coverage levels between 70% and 80%, the pure premium rates from both the parametric and nonparametric methods are below the current rates of the policy-oriented agricultural insurance program. In addition, where data permit, the rating region should be chosen appropriately so that risk is fully identified and premiums are computed more accurately.
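A minimal sketch of the rate calculation under one common definition of the pure premium rate, E[max(coverage·μ − Y, 0)]/(coverage·μ), with the yield density estimated by a Gaussian kernel (scipy's default bandwidth); the study's actual yield series, detrending and kernel choices are not reproduced, and the synthetic yields below are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Stand-in for a region's historical detrended per-unit-area yields (kg/mu);
# the real series in the study are rice/maize/soybean yields, not shown here.
yields = rng.normal(450.0, 60.0, size=30) - rng.gamma(2.0, 15.0, size=30)

def pure_premium_rate(yields, coverage=0.8, n_draws=200_000):
    """Pure premium rate = E[max(coverage*mu - Y, 0)] / (coverage*mu),
    with the yield density of Y estimated by a Gaussian kernel density."""
    mu = yields.mean()
    guarantee = coverage * mu
    kde = gaussian_kde(yields)                # bandwidth: Scott's rule (default)
    draws = kde.resample(n_draws).ravel()     # Monte Carlo from the fitted density
    expected_loss = np.maximum(guarantee - draws, 0.0).mean()
    return expected_loss / guarantee

for cov in (0.7, 0.75, 0.8):
    print(f"coverage {cov:.0%}: pure rate = {pure_premium_rate(yields, cov):.4f}")
```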

16.
In a study comparing the effects of two treatments, the propensity score is the probability of assignment to one treatment conditional on a subject's measured baseline covariates. Propensity-score matching is increasingly being used to estimate the effects of exposures using observational data. In the most common implementation of propensity-score matching, pairs of treated and untreated subjects are formed whose propensity scores differ by at most a pre-specified amount (the caliper width). There has been little research into the optimal caliper width. We conducted an extensive series of Monte Carlo simulations to determine the optimal caliper width for estimating differences in means (for continuous outcomes) and risk differences (for binary outcomes). When estimating differences in means or risk differences, we recommend that researchers match on the logit of the propensity score using calipers of width equal to 0.2 of the standard deviation of the logit of the propensity score. When at least some of the covariates were continuous, then either this value, or one close to it, minimized the mean square error of the resultant estimated treatment effect. It also eliminated at least 98% of the bias in the crude estimator, and it resulted in confidence intervals with approximately the correct coverage rates. Furthermore, the empirical type I error rate was approximately correct. When all of the covariates were binary, the choice of caliper width had a much smaller impact on the performance of estimation of risk differences and differences in means.
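A minimal sketch of the recommended implementation: match on the logit of the estimated propensity score with a caliper equal to 0.2 times the standard deviation of that logit, here with greedy 1:1 nearest-neighbour matching without replacement. The simulated covariates, treatment model and greedy matching order are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 2000
X = rng.standard_normal((n, 3))                       # baseline covariates
p_treat = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.3 * X[:, 1])))
z = rng.binomial(1, p_treat)                          # treatment indicator

ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1 - ps))
caliper = 0.2 * logit_ps.std(ddof=1)                  # recommended caliper width

treated = np.flatnonzero(z == 1)
controls = np.flatnonzero(z == 0)
used = np.zeros(n, dtype=bool)
pairs = []
for t in treated:                                     # greedy 1:1, no replacement
    d = np.abs(logit_ps[controls] - logit_ps[t])
    d[used[controls]] = np.inf                        # skip already-matched controls
    j = d.argmin()
    if d[j] <= caliper:
        pairs.append((t, controls[j]))
        used[controls[j]] = True

print(f"{len(pairs)} matched pairs out of {len(treated)} treated subjects")
```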

17.
A simulation study was conducted to assess how well the sample size needed to achieve a stipulated margin of error can be estimated prior to sampling. Our concern focused particularly on performance when sampling from a very skewed distribution, a common feature of many biological, economic, and other populations. We examined two approaches for estimating sample size: the commonly used strategy aimed at controlling the average magnitude of the stipulated margin of error, and a previously proposed strategy that controls the tolerance probability with which the stipulated margin of error is exceeded. The simulation revealed that (1) skewness does not much affect the average estimated sample size but can greatly extend the range of estimated sample sizes; and (2) skewness reduces the effectiveness of Kupper and Hafner's sample size estimator, though less through skewness directly than through the common practice of estimating the population variance via a pilot sample from the skewed population. Nonetheless, the simulations suggest that estimating sample size to control the probability with which the desired margin of error is achieved is a worthwhile alternative to the usual sample size formula, which controls only the average width of the confidence interval.
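A minimal sketch of the first strategy, the usual formula n = (z·σ̂/d)² aimed at the average margin of error, with σ̂ taken from a pilot sample of the skewed population, followed by a simulation check of how often the achieved margin actually meets the target; Kupper and Hafner's tolerance-probability estimator is not reproduced here. The population, pilot size and target margin are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
population = rng.lognormal(0.0, 1.5, size=1_000_000)  # very skewed (assumption)

d = 0.25                  # stipulated margin of error for the mean
alpha = 0.05
z = norm.ppf(1 - alpha / 2)

# Usual strategy: n = (z * sigma / d)^2, sigma estimated from a pilot sample.
pilot = rng.choice(population, size=30, replace=False)
n_hat = int(np.ceil((z * pilot.std(ddof=1) / d) ** 2))
print("estimated n:", n_hat)

# Simulation check: how often does the achieved margin meet the target?
reps, hits = 2000, 0
for _ in range(reps):
    s = rng.choice(population, size=n_hat, replace=False)
    margin = z * s.std(ddof=1) / np.sqrt(n_hat)
    hits += margin <= d
print("P(achieved margin <= target):", hits / reps)
```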

18.
In an epidemiological study the regression slope between a response and predictor variable is underestimated when the predictor variable is measured imprecisely. Repeat measurements of the predictor in individuals in a subset of the study or in a separate study can be used to estimate a multiplicative factor to correct for this 'regression dilution bias'. In applied statistics publications various methods have been used to estimate this correction factor. Here we compare six different estimation methods and explain how they fall into two categories, namely regression and correlation-based methods. We provide new asymptotic variance formulae for the optimal correction factors in each category, when these are estimated from the repeat measurements subset alone, and show analytically and by simulation that the correlation method of choice gives uniformly lower variance. The simulations also show that, when the correction factor is not much greater than 1, this correlation method gives a correction factor which is closer to the true value than that from the best regression method on up to 80% of occasions. We also provide a variance formula for a modified correlation method which uses the standard deviation of the predictor variable in the main study; this shows further improved performance provided that the correction factor is not too extreme. A confidence interval for a corrected regression slope in an epidemiological study should reflect the imprecision of both the uncorrected slope and the estimated correction factor. We provide formulae for this and show that, particularly when the correction factor is large and the size of the subset of repeat measures is small, the effect of allowing for imprecision in the estimated correction factor can be substantial.
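A minimal sketch of one correlation-type correction of the kind compared in the paper: the reliability ratio λ = σ²_between/(σ²_between + σ²_within) is estimated from the repeat-measurements subset by one-way analysis of variance, and the corrected slope is the naive slope divided by λ̂. This illustrates the general mechanism only, not any one of the paper's six estimators or its variance formulae; the simulated data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
n, k = 500, 2                       # subjects in repeats substudy, 2 measurements
true_x = rng.normal(0.0, 1.0, n)
sigma_u = 0.8                       # measurement error SD (assumption)
W = true_x[:, None] + rng.normal(0.0, sigma_u, (n, k))   # repeat measurements

# One-way ANOVA components of variance from the repeats.
ms_within = W.var(axis=1, ddof=1).mean()                 # within-person MS
ms_between = k * W.mean(axis=1).var(ddof=1)              # between-person MS
var_between = (ms_between - ms_within) / k
lam_hat = var_between / (var_between + ms_within)        # reliability ratio

# Naive slope from a main study with an error-prone predictor, then correct it.
x_obs = true_x + rng.normal(0.0, sigma_u, n)
y = 2.0 * true_x + rng.normal(0.0, 1.0, n)               # true slope = 2
beta_naive = np.cov(x_obs, y)[0, 1] / x_obs.var(ddof=1)  # attenuated slope
print("naive:", beta_naive, "corrected:", beta_naive / lam_hat)
```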

19.
唐运舒. 《统计研究》 2007, 24(5): 41-47
This paper introduces the crediting interest rate of pension individual accounts and the wage growth rate, and measures how the policy of 'fully funding and downsizing' individual accounts affects participants' pension levels under different initial contribution wages and contribution periods. The analysis shows that: (1) after the policy adjustment, the pension benefit structure better reflects the intrinsic economic link between contribution accumulation and pension levels than before; (2) after the adjustment, individual pension levels are generally lower than before, and pensions exceed the pre-adjustment level only for long contribution periods and high initial contribution wages; (3) the adjustment affects different participants differently: it increases the financial pressure on participants with low initial contribution wages and widens the gap between the retirement pensions of male and female workers, which is unhelpful for addressing the current problems of gender equality and wealth polarization.

20.
A new process monitoring scheme is proposed by using the Storey procedure for controlling the positive false discovery rate in multiple testing. For the 2-span control scheme, it is shown numerically that the proposed method performs better than the X-bar chart in terms of the average run length. Simulations are carried out to evaluate the performance of the proposed scheme in terms of the average run length and the conditional expected delay, and the results are compared with those of existing monitoring schemes, including the X-bar chart. The false discovery rate is also estimated and compared with the target control level.
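A minimal sketch of the Storey ingredients the scheme builds on: the proportion of true nulls is estimated as π̂0(λ) = #{pᵢ > λ}/((1 − λ)m), and q-values are derived from the ordered p-values. The monitoring scheme itself is not reproduced; the tuning constant λ and the simulated p-values are assumptions.

```python
import numpy as np

def storey_qvalues(pvals, lam=0.5):
    """Storey's pi0 estimate and q-values for a vector of p-values."""
    p = np.asarray(pvals)
    m = p.size
    pi0 = min(1.0, (p > lam).sum() / ((1.0 - lam) * m))
    order = np.argsort(p)
    ranked = p[order]
    # pFDR estimate at each ordered p-value, then enforce monotonicity.
    q = pi0 * m * ranked / np.arange(1, m + 1)
    q = np.minimum.accumulate(q[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return pi0, out

rng = np.random.default_rng(11)
p_null = rng.uniform(size=90)                 # in-control observations
p_alt = rng.beta(0.1, 1.0, size=10)           # shifted observations
pi0, qv = storey_qvalues(np.concatenate([p_null, p_alt]))
print("pi0_hat:", pi0, "rejections at q<=0.05:", (qv <= 0.05).sum())
```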
