Similar Documents
1.
When assessing how ongoing developments in fisheries management may change fishing activity, an important tool is the evaluation of Total Factor Productivity (TFP) change over a period, encompassing efficiency, scale, and technology changes. The Malmquist index, based on distance functions evaluated with Data Envelopment Analysis (DEA), is often employed to estimate TFP changes, and DEA is gaining attention more generally for evaluating efficiency and capacity in fisheries. One main criticism of DEA is that it lacks a statistical foundation, i.e. that it is not possible to draw inferences about DEA scores or related parameters. The bootstrap method for estimating confidence intervals of deterministic parameters can, however, be applied to DEA scores. The present paper applies this method to assess TFP changes between 1987 and 1999 for the fleet of Danish seiners operating in the North Sea and the Skagerrak.
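A minimal sketch of the two ingredients the abstract combines: an input-oriented, constant-returns DEA score computed as a linear program, and a percentile bootstrap over peer units. All data here are simulated placeholders, and the plain resampling shown is only a naive stand-in for the smoothed bootstrap of Simar and Wilson that this setting properly requires.

    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(x, y, o):
        # Input-oriented CRS score of unit o:
        #   min theta  s.t.  X'lam <= theta * x_o,  Y'lam >= y_o,  lam >= 0.
        n, m = x.shape                          # units, inputs
        s = y.shape[1]                          # outputs
        c = np.r_[1.0, np.zeros(n)]             # variables: (theta, lam)
        A_ub = np.r_[np.c_[-x[o], x.T],         # X'lam - theta * x_o <= 0
                     np.c_[np.zeros(s), -y.T]]  # -Y'lam <= -y_o
        b_ub = np.r_[np.zeros(m), -y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0.0, None)] * n)
        return res.fun                          # theta in (0, 1]

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 5, size=(25, 2))         # toy fleet: 2 inputs, 1 output
    y = (x.sum(axis=1) * rng.uniform(0.5, 1.0, 25))[:, None]
    theta_hat = dea_efficiency(x, y, o=0)

    scores = []                                 # naive percentile bootstrap
    for _ in range(200):
        idx = np.r_[0, rng.integers(0, 25, 24)] # keep unit 0, resample peers
        scores.append(dea_efficiency(x[idx], y[idx], o=0))
    print(theta_hat, np.quantile(scores, [0.025, 0.975]))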

2.
In this article, we develop four explicit asymptotic two-sided confidence intervals for the difference between two Poisson rates via a hybrid method. The basic idea is to estimate, or recover, the variances of the two Poisson rate estimates, which are required for constructing the confidence interval for the rate difference, from the confidence limits for the two individual rates. The building blocks of the approach are therefore reliable confidence limits for the individual rates, and four interval estimators with explicit solutions and good coverage are employed: the continuity-corrected normal, Rao score, Freeman–Tukey, and Jeffreys confidence intervals. Using simulation studies, we examine the performance of the four hybrid confidence intervals and compare them with three existing ones: the non-informative prior Bayes confidence interval, the t confidence interval based on Satterthwaite's degrees of freedom, and the Bayes confidence interval based on Student's t confidence coefficient. Simulation results show that the hybrid Freeman–Tukey and hybrid Jeffreys confidence intervals can be highly recommended, as they outperform the others in terms of coverage probability and width; the other methods tend to be too conservative and produce wider intervals. The application of these confidence intervals is illustrated with three real data sets.
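The hybrid construction can be sketched as follows, using the Jeffreys limits as the individual building blocks; the square-root combination is the MOVER-type recovery of the two variances from the individual limits. The counts and exposures in the usage line are hypothetical.

    import numpy as np
    from scipy.stats import gamma

    def jeffreys_poisson_ci(x, t, alpha=0.05):
        # Jeffreys interval: Gamma(x + 1/2) quantiles for the mean, scaled by t.
        return (gamma.ppf(alpha / 2, x + 0.5) / t,
                gamma.ppf(1 - alpha / 2, x + 0.5) / t)

    def hybrid_diff_ci(x1, t1, x2, t2, alpha=0.05):
        # Recover each rate's variance from its own limits, then combine.
        r1, r2 = x1 / t1, x2 / t2
        l1, u1 = jeffreys_poisson_ci(x1, t1, alpha)
        l2, u2 = jeffreys_poisson_ci(x2, t2, alpha)
        d = r1 - r2
        return (d - np.sqrt((r1 - l1) ** 2 + (u2 - r2) ** 2),
                d + np.sqrt((u1 - r1) ** 2 + (r2 - l2) ** 2))

    print(hybrid_diff_ci(x1=30, t1=1000.0, x2=15, t2=800.0))  # hypothetical data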

3.
Diagnostic techniques are proposed for assessing the influence of individual cases on confidence intervals in nonlinear regression. The proposed technique applies the method of profile t-plots to the case-deletion model. The effect of the geometry of the statistical model on the influence measures is assessed, and an algorithm for computing case-deleted confidence intervals is described. This algorithm provides a direct method for constructing a simple diagnostic measure based on the ratio of the lengths of confidence intervals. The generalization of these methods to multiresponse models is discussed.
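A rough illustration of the case-deletion diagnostic based on interval-length ratios. For simplicity this sketch uses linearization (Wald) intervals from scipy's curve_fit rather than the paper's profile-t intervals, and the exponential mean function, the simulated data, and the 1.5-fold flagging threshold are all assumptions.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import t as t_dist

    def model(x, a, b):
        return a * np.exp(-b * x)                 # hypothetical mean function

    def ci_lengths(x, y, alpha=0.05):
        popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.5])
        tq = t_dist.ppf(1 - alpha / 2, x.size - popt.size)
        return 2 * tq * np.sqrt(np.diag(pcov))    # Wald interval lengths

    rng = np.random.default_rng(1)
    x = np.linspace(0.1, 5, 30)
    y = model(x, 2.0, 0.7) + rng.normal(0, 0.05, x.size)
    y[7] += 0.4                                   # plant one influential case

    full = ci_lengths(x, y)
    for i in range(x.size):
        keep = np.arange(x.size) != i
        ratio = ci_lengths(x[keep], y[keep]) / full
        if np.any(np.abs(np.log(ratio)) > np.log(1.5)):   # flag 1.5-fold changes
            print(f"case {i}: interval-length ratios {np.round(ratio, 2)}")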

4.
For the analysis of survey-weighted categorical data, one recommended method of analysis is a log-rate model. For each cell in a contingency table, the survey weights are averaged across subjects and incorporated into an offset for a loglinear model; supposedly, one can then proceed with the analysis of the unweighted observed cell counts. We provide theoretical and simulation-based evidence to show that the log-rate analysis is not an effective statistical analysis method and should not be used in general. The root of the problem is its failure to properly account for variability in the individual weights within cells of a contingency table. This results in goodness-of-fit tests with higher-than-nominal error rates and confidence intervals for odds ratios with lower-than-nominal coverage.
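For concreteness, the log-rate construction under critique looks like this in a standard GLM fit: the unweighted cell counts are modeled as Poisson with the log of the cell-mean weight as an offset, so within-cell weight variability never enters the likelihood. The 2x2 table below is hypothetical.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical 2x2 table: factors a, b; cell counts; mean survey weight per cell.
    a = np.array([0, 0, 1, 1])
    b = np.array([0, 1, 0, 1])
    count = np.array([45, 30, 28, 52])
    wbar = np.array([1.8, 2.3, 2.0, 1.6])     # average individual weight in each cell

    X = sm.add_constant(np.column_stack([a, b]))
    fit = sm.GLM(count, X, family=sm.families.Poisson(),
                 offset=np.log(wbar)).fit()   # the log-rate offset construction
    print(fit.params, fit.bse)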

5.
In this paper, hypothesis testing and interval estimation for intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. Tests and confidence intervals for the intraclass correlation coefficients are developed for unbalanced data. One approach is based on the generalized p-value and generalized confidence interval; the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, simulation results comparing the modified large-sample approach with the generalized approach are reported; they indicate that the modified large-sample approach performs better in terms of coverage probability and expected length of the confidence interval.
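A sketch of the generalized-confidence-interval idea in its simplest setting, a balanced one-way random effects model rather than the paper's unbalanced two-way model with interaction: chi-square pivots for the two mean squares yield generalized pivotal quantities for the variance components, and quantiles of the induced ICC draws give the interval.

    import numpy as np

    def icc_gci(y, alpha=0.05, draws=100_000, seed=0):
        a, n = y.shape                        # a groups, n replicates each
        ybar = y.mean(axis=1)
        msb = n * np.sum((ybar - y.mean()) ** 2) / (a - 1)
        msw = np.sum((y - ybar[:, None]) ** 2) / (a * (n - 1))
        rng = np.random.default_rng(seed)
        w1 = rng.chisquare(a - 1, draws)      # pivot for the between mean square
        w2 = rng.chisquare(a * (n - 1), draws)
        s2e = a * (n - 1) * msw / w2          # GPQ for the error variance
        s2a = np.maximum(((a - 1) * msb / w1 - s2e) / n, 0)
        rho = s2a / (s2a + s2e)               # induced GPQ for the ICC
        return np.quantile(rho, [alpha / 2, 1 - alpha / 2])

    rng = np.random.default_rng(42)
    y = rng.normal(0, 1, (10, 1)) + rng.normal(0, 1, (10, 5))  # true ICC = 0.5
    print(icc_gci(y))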

6.
A bootstrap-based method to construct 1−α simultaneous confidence intervals for relative effects in the one-way layout is presented. This procedure takes the stochastic correlation between the test statistics into account and results in narrower simultaneous confidence intervals than the Bonferroni correction. Instead of using the bootstrap distribution of a maximum statistic, the coverage of the confidence intervals for the individual comparisons is adjusted iteratively until the overall confidence level is reached. Empirical coverage and power estimates of the proposed procedure for many-to-one comparisons are presented and compared with asymptotic procedures based on the multivariate normal distribution.
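A toy version of the iterative calibration: percentile intervals for many-to-one comparisons are computed at a common local level, which is widened or narrowed by bisection until the estimated joint coverage of the bootstrap draws reaches 1−α. The group data are placeholders, and a real application would bootstrap the relative effects the paper studies rather than the raw mean differences used here.

    import numpy as np

    def many_to_one_cis(samples, level=0.95, B=4000, seed=0):
        # Bootstrap each comparison-to-control difference jointly, then bisect
        # a common local alpha until the draws' joint coverage hits `level`.
        rng = np.random.default_rng(seed)
        k = len(samples) - 1
        boot = np.empty((B, k))
        for b in range(B):
            ref = rng.choice(samples[0], samples[0].size).mean()
            boot[b] = [rng.choice(s, s.size).mean() - ref for s in samples[1:]]
        lo_a, hi_a = 0.0, 1.0 - level
        for _ in range(40):
            a = (lo_a + hi_a) / 2
            L = np.quantile(boot, a / 2, axis=0)
            U = np.quantile(boot, 1 - a / 2, axis=0)
            joint = np.mean(np.all((boot >= L) & (boot <= U), axis=1))
            lo_a, hi_a = (a, hi_a) if joint >= level else (lo_a, a)
        return np.c_[np.quantile(boot, lo_a / 2, axis=0),
                     np.quantile(boot, 1 - lo_a / 2, axis=0)]

    rng = np.random.default_rng(1)
    groups = [rng.normal(m, 1, 40) for m in (0.0, 0.2, 0.8, 1.2)]
    print(many_to_one_cis(groups))   # one (lower, upper) row per comparison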

7.
In this paper, we consider a statistical model for the drug concentration–time profiles obtained in a pharmacokinetic (PK) study when the drug is administered orally. In the proposed statistical PK model, the subject-specific concentration–time curve is described by the one-compartment PK model with first-order absorption and elimination. Moreover, a multivariate generalized gamma distribution is developed for the joint distribution of the drug concentrations measured repeatedly from the same subject. We then construct confidence intervals for the subject-specific exposure parameters, which provide further insight into individual exposure to the drug under study. The proposed statistical PK model and the associated inference are applied to a real data set for illustration. A simulation study investigates the coverage probability and expected length of the proposed confidence intervals. Finally, we conclude with a discussion of the application of the proposed procedures.
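The subject-specific curve referred to above is the standard one-compartment model with first-order absorption and elimination, C(t) = F·D·ka / (V·(ka − ke)) · (exp(−ke·t) − exp(−ka·t)), from which the usual exposure summaries follow in closed form. The parameter values below are hypothetical.

    import numpy as np

    def conc(t, dose, ka, ke, V, F=1.0):
        # One-compartment model, first-order absorption and elimination.
        return F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    ka, ke, V, dose = 1.5, 0.2, 30.0, 100.0   # hypothetical parameter values
    tmax = np.log(ka / ke) / (ka - ke)        # time of the peak (ka != ke)
    cmax = conc(tmax, dose, ka, ke, V)
    auc = dose / (V * ke)                     # total exposure for F = 1
    print(f"Tmax={tmax:.2f}, Cmax={cmax:.2f}, AUC={auc:.1f}")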

8.
This paper considers the statistical analysis of a competing risks model under Type-I progressive hybrid censoring with Weibull-distributed lifetimes. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters, and use the bootstrap method to construct confidence intervals. Based on a noninformative prior, a sampling algorithm using the acceptance–rejection method is presented to obtain the Bayes estimates, and the Monte Carlo method is employed to construct the highest posterior density credible intervals. Simulation results are provided to show the effectiveness of all the methods discussed, and one data set is analyzed.
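The acceptance–rejection step can be sketched generically: draw from a heavy-tailed proposal and accept with probability proportional to the ratio of target to envelope. The Gamma(3) 'posterior', the t proposal, and the grid-based envelope constant are all illustrative choices, not the paper's.

    import numpy as np
    from scipy import stats

    def ar_sample(logpost, n, proposal, log_M, rng):
        # Accept x ~ proposal whenever u < post(x) / (M * g(x)).
        out = []
        while len(out) < n:
            x = proposal.rvs(random_state=rng)
            if np.log(rng.uniform()) < logpost(x) - log_M - proposal.logpdf(x):
                out.append(x)
        return np.array(out)

    rng = np.random.default_rng(0)
    target = stats.gamma(3)                       # stand-in 'posterior'
    prop = stats.t(df=3, loc=2, scale=2)          # heavy-tailed proposal
    grid = np.linspace(0.01, 30, 3000)
    log_M = np.max(target.logpdf(grid) - prop.logpdf(grid))  # numeric envelope
    draws = ar_sample(target.logpdf, 2000, prop, log_M, rng)
    print(draws.mean())                           # posterior mean, about 3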

9.
Eunju Hwang, Statistics, 2017, 51(4): 844–861
This paper studies the applicability of the stationary bootstrap to realized covariations of high-frequency asynchronous financial data. The stationary bootstrap, a block bootstrap with random block lengths, is applied to estimate integrated covariations. A bootstrap realized covariance, bootstrap realized regression coefficient, and bootstrap realized correlation coefficient are proposed, and the validity of stationary bootstrapping for them is established in both large and finite samples. Consistency of the bootstrap distributions is established, which yields valid stationary bootstrap confidence intervals. These intervals do not require a consistent estimator of the nuisance parameter arising from nonsynchronous, unequally spaced sampling, whereas intervals based on normal asymptotic theory do. A Monte Carlo comparison reveals that the proposed stationary bootstrap confidence intervals have better coverage probabilities than those based on the normal approximation.
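The resampling scheme itself, blocks with geometric lengths wrapped circularly (Politis and Romano), is short enough to sketch; the toy series and the bootstrapped statistic (a simple mean rather than a realized covariation) are placeholders.

    import numpy as np

    def stationary_bootstrap_indices(n, p, rng):
        # Blocks of Geometric(p) length (mean 1/p), wrapping circularly.
        idx = np.empty(n, dtype=int)
        t = 0
        while t < n:
            start = rng.integers(n)
            length = min(rng.geometric(p), n - t)
            idx[t:t + length] = (start + np.arange(length)) % n
            t += length
        return idx

    rng = np.random.default_rng(0)
    x = rng.standard_t(df=5, size=500) * 0.01          # toy return series
    boot = [x[stationary_bootstrap_indices(500, p=0.1, rng=rng)].mean()
            for _ in range(2000)]
    print(np.quantile(boot, [0.025, 0.975]))           # percentile CI for the mean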

10.
Asymptotic approaches are traditionally used to calculate confidence intervals for the intraclass correlation coefficient in a clustered binary study. When the sample size is small to medium, or the correlation or response rate is near its boundary, asymptotic intervals often have unsatisfactory coverage. We propose using the importance sampling method to construct profile confidence limits for the intraclass correlation coefficient. Importance sampling is a simulation-based approach to reducing the variance of an estimated parameter. Four existing asymptotic limits are used as statistical quantities for sample-space ordering in the importance sampling method. Simulation studies evaluate the coverage and interval width of the proposed accurate intervals. The results indicate that the accurate intervals based on the asymptotic limits of Fleiss and of Cuzick are generally shorter than the others in many cases, while the accurate intervals based on the Zou and Donner asymptotic limits outperform the others when the correlation and response rate are close to their boundaries.
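The variance-reduction idea behind the proposal is standard importance sampling; a minimal example, unrelated to the paper's intraclass-correlation setting, estimates a normal tail probability by shifting the sampling distribution into the tail and reweighting.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000
    naive = np.mean(rng.normal(size=n) > 4)            # almost always 0 here

    g = stats.norm(loc=4)                              # proposal shifted to the tail
    y = g.rvs(size=n, random_state=rng)
    w = np.exp(stats.norm.logpdf(y) - g.logpdf(y))     # likelihood-ratio weights
    is_est = np.mean((y > 4) * w)
    print(naive, is_est, stats.norm.sf(4))             # true value is about 3.2e-05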

11.
The conventional confidence interval for the intraclass correlation coefficient assumes equal tail probabilities. In general, the equal-tail probability interval is biased, and other interval procedures should be considered. Unbiased confidence intervals for the intraclass correlation coefficient are readily available; both the equal-tail probability and unbiased intervals have exact coverage, as they are constructed using the pivotal quantity method. In this article, confidence intervals for the intraclass correlation coefficient are built under balanced and unbalanced one-way random effects models, with expected interval length serving as the comparison criterion. The unbiased confidence interval outperforms the equal-tail probability interval when the intraclass correlation coefficient is small, and the reverse holds when it is large.
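In the balanced one-way case, the equal-tail pivotal interval mentioned above has a closed form from the F statistic; a sketch follows (the unbiased interval replaces the equal α/2 tail split with cutoffs chosen to remove the bias).

    import numpy as np
    from scipy.stats import f as f_dist

    def icc_equal_tail_ci(y, alpha=0.05):
        a, n = y.shape                                 # a groups, n replicates each
        ybar = y.mean(axis=1)
        msb = n * np.sum((ybar - y.mean()) ** 2) / (a - 1)
        msw = np.sum((y - ybar[:, None]) ** 2) / (a * (n - 1))
        F = msb / msw
        fu = f_dist.ppf(1 - alpha / 2, a - 1, a * (n - 1))
        fl = f_dist.ppf(alpha / 2, a - 1, a * (n - 1))
        low = (F / fu - 1) / (F / fu - 1 + n)          # equal-tail pivotal limits
        high = (F / fl - 1) / (F / fl - 1 + n)
        return max(low, 0.0), min(high, 1.0)

    rng = np.random.default_rng(3)
    y = rng.normal(0, 1, (15, 1)) + rng.normal(0, 1, (15, 4))  # true ICC = 0.5
    print(icc_equal_tail_ci(y))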

12.
Confidence intervals provide a way to determine plausible values for a population parameter. They are omnipresent in research articles involving statistical analyses. Appropriately, a key statistical literacy learning objective is the ability to interpret and understand confidence intervals in a wide range of settings. As instructors, we devote a considerable amount of time and effort to ensure that students master this topic in introductory courses and beyond. Yet, studies continue to find that confidence intervals are commonly misinterpreted and that even experts have trouble calibrating their individual confidence levels. In this article, we present a 10-min trivia game-based activity that addresses these misconceptions by exposing students to confidence intervals from a personal perspective. We describe how the activity can be integrated into a statistics course as a one-time activity or with repetition at intervals throughout a course, discuss results of using the activity in class, and present possible extensions. Supplementary materials for this article are available online.

13.
Proportion differences are often used to estimate and test treatment effects in clinical trials with binary outcomes. To adjust for other covariates or for intra-subject correlation among repeated measures, logistic regression or longitudinal models such as generalized estimating equations or generalized linear mixed models may be used. However, these models are typically based on the logit link, which yields parameter estimates and comparisons on the log-odds-ratio scale rather than the proportion-difference scale. A two-step method has been proposed in the literature to approximate confidence intervals for the proportion difference using a concept of effective sample sizes, but its performance was not investigated in the original paper. Here, we examine the properties of the two-step method and propose an adjustment to the effective sample size formula based on Bayesian information theory. Simulations evaluate the performance and show that the modified effective sample size improves the coverage of the confidence intervals.
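One plausible reading of the two-step method, with hypothetical model outputs: each model-based standard error is converted into an effective sample size, Wilson limits are computed as if the data were independent binomials of those sizes, and the limits are combined Newcombe/MOVER-style. The paper's exact recipe and its Bayesian adjustment may differ in detail.

    import numpy as np
    from scipy.stats import norm

    def wilson(p, n, alpha):
        z = norm.ppf(1 - alpha / 2)
        mid = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        return mid - half, mid + half

    def diff_ci_ess(p1, se1, p2, se2, alpha=0.05):
        # Effective sample size: the n making a plain binomial variance match
        # the model-based variance of each adjusted proportion estimate.
        n1 = p1 * (1 - p1) / se1**2
        n2 = p2 * (1 - p2) / se2**2
        l1, u1 = wilson(p1, n1, alpha)
        l2, u2 = wilson(p2, n2, alpha)
        d = p1 - p2                         # Newcombe/MOVER combination
        return (d - np.sqrt((p1 - l1)**2 + (u2 - p2)**2),
                d + np.sqrt((u1 - p1)**2 + (p2 - l2)**2))

    print(diff_ci_ess(p1=0.62, se1=0.045, p2=0.48, se2=0.050))  # hypothetical fit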

14.
In this paper we develop a Bayesian approach to detecting unit roots in autoregressive panel data models. Our method is based on comparing stationary autoregressive models, with and without individual deterministic trends, to their counterpart models with a unit autoregressive root, allowing for cross-sectional dependence among the error terms of the panel units. Simulation experiments are conducted to assess the performance of the suggested inferential procedure and to investigate whether the Bayesian model comparison approach can distinguish unit root models from stationary autoregressive models under cross-sectional dependence. The approach is applied to real exchange rate series for a panel of the G7 countries and to a panel of US nominal interest rate data.

15.
Nonstationary panel data analysis: an overview of some recent developments
This paper overviews some recent developments in panel data asymptotics, concentrating on the nonstationary panel case, and gives a new result for models with individual effects. Underlying the recent theory are asymptotics for multi-indexed processes in which both indexes may pass to infinity. We review some of the new limit theory that has been developed, show how it can be applied, and give a new interpretation of individual effects in nonstationary panel data. Fundamental to the interpretation of much of the asymptotics is the concept of a panel regression coefficient, which measures the long-run average relation across a section of the panel; this concept is analogous to the statistical interpretation of the coefficient in a classical regression relation. A variety of nonstationary panel data models are discussed, and the asymptotic properties of estimators in these models are reviewed, along with some recent developments in panel unit root tests and stationary dynamic panel regression models.

16.
This paper considers the statistical analysis of masked data in a series system with Burr-XII distributed components. Based on a progressively Type-I interval-censored sample, the maximum likelihood estimates of the parameters are obtained using the expectation–maximization algorithm, and the associated approximate confidence intervals are derived. In addition, a Gibbs sampling procedure using importance sampling is applied to obtain the Bayesian estimates of the parameters, and the Monte Carlo method is employed to construct the credible intervals. Finally, a simulation study illustrates the efficiency of the methods under different removal schemes and masking probabilities.
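Stripped of the masking and progressive removals, the interval-censored likelihood at the core of the estimation can be sketched for a single Burr-XII component; the inspection windows and failure counts below are invented.

    import numpy as np
    from scipy.optimize import minimize

    def burr_cdf(x, c, k):
        return 1.0 - (1.0 + x**c) ** (-k)    # Burr-XII distribution function

    def negloglik(theta, lo, hi):
        c, k = np.exp(theta)                 # optimize on the log scale
        p = burr_cdf(hi, c, k) - burr_cdf(lo, c, k)
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    # Invented inspection windows and failure counts per window.
    edges_lo = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    edges_hi = np.array([0.5, 1.0, 1.5, 2.0, np.inf])
    counts = np.array([12, 18, 9, 6, 5])
    lo = np.repeat(edges_lo, counts)
    hi = np.repeat(edges_hi, counts)
    fit = minimize(negloglik, x0=np.zeros(2), args=(lo, hi))
    print(np.exp(fit.x))                     # MLEs of (c, k)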

17.
We consider the problem of simultaneously estimating Poisson rate differences via the Hsu and Berger stepwise confidence interval method (termed HBM), where comparisons to a common reference group are performed. We discuss continuity-corrected confidence intervals (CIs) and investigate the performance of the HBM with a moment-based CI and with Wald and pooled CIs, both uncorrected and corrected for continuity. Using simulations, we compare nine individual CIs in terms of coverage probability, and the HBM with the nine intervals in terms of family-wise error rate (FWER) and overall and local power. The simulations show that these statistical properties depend strongly on the parameter settings.
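As a point of reference for the individual intervals compared above, here is a Wald interval for the rate difference with one common form of continuity correction; whether this matches the paper's exact correction is an assumption, and the data are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def poisson_diff_wald(x1, t1, x2, t2, alpha=0.05, cc=True):
        # Wald CI for lambda1 - lambda2 with an optional continuity
        # correction of half a count per unit of exposure on each arm.
        d = x1 / t1 - x2 / t2
        se = np.sqrt(x1 / t1**2 + x2 / t2**2)
        corr = (1 / t1 + 1 / t2) / 2 if cc else 0.0
        z = norm.ppf(1 - alpha / 2)
        return d - z * se - corr, d + z * se + corr

    print(poisson_diff_wald(x1=30, t1=1000.0, x2=15, t2=800.0))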

18.
The core problem in agricultural insurance pricing is agricultural risk zoning. To capture the dynamic development of the individual indicators used in zoning, an adaptive affinity propagation clustering method, refined from standard affinity propagation, is applied to optimize the data: the optimal and geometric cluster centers are obtained from the silhouette coefficient together with the availability and responsibility messages, and the task is recast as clustering a new data set. Cotton is chosen as a representative crop for an empirical analysis, in which production, sales, income, and fiscal indicators are computed to carry out a cotton risk zoning case study and derive the optimal zoning. The results show that, for data with dynamic characteristics, the model is effective, practical, and interpretable.
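A rough analogue of the adaptive procedure using scikit-learn: affinity propagation is run over a range of preference values and the zoning with the best silhouette coefficient is kept. The four 'indicators' are simulated stand-ins for the production, sales, income, and fiscal variables.

    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Simulated county-level indicators: production, sales, income, fiscal.
    X = np.vstack([rng.normal(m, 0.5, size=(40, 4)) for m in (0, 3, 6)])

    best = None
    for pref in np.linspace(-50, -5, 10):       # scan the preference parameter
        ap = AffinityPropagation(preference=pref, damping=0.9,
                                 random_state=0).fit(X)
        if len(ap.cluster_centers_indices_) < 2:
            continue                            # silhouette needs >= 2 zones
        score = silhouette_score(X, ap.labels_)
        if best is None or score > best[0]:
            best = (score, len(ap.cluster_centers_indices_))
    print(f"silhouette={best[0]:.2f}, zones={best[1]}")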

19.
This article shows how to use any correlation coefficient to produce an estimate of location and scale. It is part of a broader system, called a correlation estimation system (CES), that uses correlation coefficients as the starting point for estimation. The method is illustrated using the well-known normal distribution: any correlation coefficient can be used to fit a simple linear regression line to bivariate data, and the slope and intercept then serve as estimates of the standard deviation and location. Because a robust correlation produces robust estimates, this CES can be recommended as a tool for everyday data analysis. Simulations indicate that the median obtained with this method, using a robust correlation coefficient, is nearly as efficient as the mean for well-behaved data and much better when a few errant data points are present. Hypothesis testing and confidence intervals are discussed for the scale parameter; both normal and Cauchy distributions are covered.
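A sketch of the CES fit for the normal case: sorted data are paired with normal scores, the slope that makes the residuals uncorrelated with the scores (here under Spearman correlation, one robust choice) estimates the standard deviation, and the residual median estimates location. The Blom plotting positions and the bracketing interval are implementation choices, not necessarily the article's.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm, spearmanr

    def ces_location_scale(y):
        # Pair sorted data with normal scores; the slope that zeroes the
        # residual-vs-score correlation estimates sigma, the residual
        # median estimates location.
        y = np.sort(np.asarray(y, dtype=float))
        n = y.size
        x = norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))  # Blom scores
        g = lambda b: spearmanr(y - b * x, x)[0]
        b = brentq(g, 1e-6, 10 * y.std())
        return np.median(y - b * x), b       # (location, scale)

    rng = np.random.default_rng(0)
    data = rng.normal(10, 2, 200)
    data[:3] = 60                            # a few errant points
    print(ces_location_scale(data))          # stays roughly near (10, 2)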
