Similar Documents (20 results retrieved)
1.
In connection with assessing how an ongoing development in fisheries management may change fishing activity, evaluation of Total Factor Productivity (TFP) change over a period, including efficiency, scale and technology changes, is an important tool. The Malmquist index, based on distance functions evaluated with Data Envelopment Analysis (DEA), is often employed to estimate TFP changes. DEA is generally gaining attention for evaluating efficiency and capacity in fisheries. One main criticism of DEA is that it does not have any statistical foundation, i.e. that it is not possible to make inference about DEA scores or related parameters. The bootstrap method for estimating confidence intervals of deterministic parameters can however be applied to estimate confidence intervals for DEA scores. This method is applied in the present paper for assessing TFP changes between 1987 and 1999 for the fleet of Danish seiners operating in the North Sea and the Skagerrak.
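As a rough illustration of the bootstrap idea described above, the sketch below resamples vessels and recomputes a DEA-type efficiency score to obtain a percentile confidence interval. The data, the single-input/single-output constant-returns frontier, and the naive resampling scheme are all simplifying assumptions for illustration; the DEA literature typically uses a full linear-programming DEA and refinements such as the smoothed bootstrap of Simar and Wilson.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vessel data: one input (days at sea) and one output (catch in tonnes).
x = rng.uniform(50, 200, size=40)          # input
y = 0.8 * x * rng.uniform(0.5, 1.0, 40)    # output

def dea_crs_efficiency(x, y):
    """Output-oriented CRS efficiency for the one-input/one-output case:
    each unit's output/input ratio relative to the best observed ratio."""
    ratio = y / x
    return ratio / ratio.max()

def bootstrap_ci(x, y, unit, n_boot=2000, alpha=0.05):
    """Naive percentile bootstrap CI for one unit's efficiency score
    (the generic resampling recipe, not the smoothed DEA bootstrap)."""
    n = len(x)
    scores = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        # keep the unit of interest in every pseudo-sample so its score is defined
        frontier = np.append(y[idx] / x[idx], y[unit] / x[unit])
        scores[b] = (y[unit] / x[unit]) / frontier.max()
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])

eff = dea_crs_efficiency(x, y)
lo, hi = bootstrap_ci(x, y, unit=0)
print(f"unit 0 efficiency {eff[0]:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```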

2.
We study methods to estimate regression and variance parameters for over-dispersed and correlated count data from highly stratified surveys. Our application involves counts of fish catches from stratified research surveys and we propose a novel model in fisheries science to address changes in survey protocols. A challenge with this model is the large number of nuisance parameters which leads to computational issues and biased statistical inferences. We use a computationally efficient profile generalized estimating equation method and compare it to marginal maximum likelihood (MLE) and restricted MLE (REML) methods. We use REML to address bias and inaccurate confidence intervals because of many nuisance parameters. The marginal MLE and REML approaches involve intractable integrals and we used a new R package that is designed for estimating complex nonlinear models that may include random effects. We conclude from simulation analyses that the REML method provides more reliable statistical inferences among the three methods we investigated.
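The abstract's concern is extra-Poisson variation in survey counts. A minimal, hypothetical sketch of that issue: fit a Poisson log-linear model by maximum likelihood, then estimate an overdispersion factor from Pearson residuals (the quasi-Poisson correction). This is far simpler than the profile GEE, marginal MLE and REML methods the paper compares, and the data and covariate are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Hypothetical stratified-survey counts with extra-Poisson variation
n = 300
depth = rng.uniform(0, 1, n)                 # one covariate
X = np.column_stack([np.ones(n), depth])
beta_true = np.array([1.0, 0.8])
mu = np.exp(X @ beta_true)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # mean mu, variance > mu

def neg_poisson_loglik(beta):
    eta = X @ beta
    return -(y * eta - np.exp(eta)).sum()

fit = minimize(neg_poisson_loglik, x0=np.zeros(2), method="BFGS")
beta_hat = fit.x
mu_hat = np.exp(X @ beta_hat)

# Quasi-likelihood overdispersion estimate from Pearson residuals
phi_hat = ((y - mu_hat) ** 2 / mu_hat).sum() / (n - X.shape[1])

# Naive Poisson standard errors inflated by sqrt(phi): the simplest correction
# for overdispersion; the GEE/REML machinery in the paper goes much further.
info = (X * mu_hat[:, None]).T @ X
se_naive = np.sqrt(np.diag(np.linalg.inv(info)))
print("beta_hat:", beta_hat, " phi_hat:", phi_hat)
print("quasi-Poisson SEs:", np.sqrt(phi_hat) * se_naive)
```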

3.
The asymptotic results pertaining to the distribution of the log-likelihood ratio allow for the creation of a confidence region, which is a general extension of the confidence interval. Two- and three-dimensional regions can be displayed visually to describe the plausible region of the parameters of interest simultaneously. While most advanced statistical textbooks on inference discuss these asymptotic confidence regions, there is no exploration of how to numerically compute these regions for graphical purposes. This article demonstrates the application of a simple trigonometric transformation to compute two- and three-dimensional confidence regions; we transform the Cartesian coordinates of the parameters to create what we call the radial profile log-likelihood. The method is applicable to any distribution with a defined likelihood function, so it is not limited to specific data distributions or model paradigms. We describe the method along with the algorithm, follow with an example of our method, and end with an examination of computation time. Supplementary materials for this article are available online.
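A sketch of the radial idea under simple assumptions (an i.i.d. normal sample with unknown mean and standard deviation): from the MLE, search along each angular direction for the distance at which the log-likelihood drop reaches the chi-squared cutoff, and collect those points as the boundary of the joint confidence region. The model, data, and search radius are hypothetical; the paper's radial profile log-likelihood is the general version of this construction.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=2.0, size=50)   # hypothetical sample

def loglik(mu, sigma):
    return norm.logpdf(data, mu, sigma).sum()

# MLE for (mu, sigma)
mu_hat = data.mean()
sigma_hat = data.std(ddof=0)
ll_max = loglik(mu_hat, sigma_hat)
crit = chi2.ppf(0.95, df=2) / 2.0     # LR cutoff: ll_max - ll <= chi2_{2,0.95} / 2

# Radial search: for each angle, find the distance from the MLE at which the
# log-likelihood drop hits the cutoff; these points trace the 95% region boundary.
boundary = []
for theta in np.linspace(0, 2 * np.pi, 200, endpoint=False):
    direction = np.array([np.cos(theta), np.sin(theta)])

    def drop(r):
        mu, sigma = np.array([mu_hat, sigma_hat]) + r * direction
        if sigma <= 0:
            return 1e6                 # outside the parameter space
        return (ll_max - loglik(mu, sigma)) - crit

    r_star = brentq(drop, 1e-8, 5.0)   # assumes the boundary lies within radius 5
    boundary.append([mu_hat, sigma_hat] + r_star * direction)

boundary = np.array(boundary)
print("region spans mu in [%.2f, %.2f], sigma in [%.2f, %.2f]"
      % (boundary[:, 0].min(), boundary[:, 0].max(),
         boundary[:, 1].min(), boundary[:, 1].max()))
```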

4.
The role of statistics in quality and productivity improvement depends on certain philosophical issues that the author believes have been inadequately addressed. Three such issues are as follows: (1) what is the role of statistics in the process of investigation and discovery; (2) how can we extrapolate results from the particular to the general; and (3) how can we evaluate possible management changes so that they truly benefit an organization? Therefore, statistical methods appropriate to investigation and discovery are discussed as distinct from those appropriate to the testing of an already discovered solution. It is shown how the manner in which the tentative solution has been arrived at determines the assurance with which experimental conclusions can be extrapolated to the application in mind. Whether or not statistical methods and training can have any impact depends on the system of management. A vector representation which can help predict the consequences of changes in management strategy is discussed. This can help to realign policies so that members of an organization can better work together for the benefit of the organization.

5.
Issues that are central to the understanding and management of the HIV epidemic have generated numerous statistical challenges. This paper considers questions concerning the incubation period, the effects of treatments, prediction of AIDS cases, the choice of surrogate end points for the assessment of treatments and design of strategies for screening blood samples. These issues give rise to a broad range of intriguing problems for statisticians. We describe some of these problems, how they have been tackled so far and what remains to be done. The discussion touches on topical statistical methods such as smoothing, bootstrapping, interval censoring and the ill-posed inverse problem, as well as asking fundamental questions for frequentist statistics.

6.
State-space models (SSMs) are now popular tools in fisheries science for providing management advice when faced with noisy survey and commercial fishery data. Such models are often fitted within a Bayesian framework requiring both the specification of prior distributions for model parameters and simulation-based approaches for inference. Here we present a frequentist framework as a viable alternative and recommend using the Laplace approximation with automatic differentiation, as implemented in the R package Template Model Builder, for fast fitting and reliable inference. Additionally we highlight some identifiability issues associated with SSMs that fisheries scientists should be aware of and demonstrate how our modelling strategy surmounts these problems. Using the Bay of Fundy sea scallop fishery we show that our implementation yields more conservative advice than that of the reference model. The Canadian Journal of Statistics 47: 27–45; 2019 © 2018 Statistical Society of Canada
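For intuition, here is a minimal frequentist state-space fit in the same spirit: a linear-Gaussian local-level model whose marginal likelihood is computed exactly by the Kalman filter and maximized numerically (for linear-Gaussian SSMs the Laplace approximation used by Template Model Builder is exact). The model and simulated data are illustrative only and are much simpler than the scallop assessment model in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulate a local-level (random walk + noise) state-space model:
#   state:  b_t = b_{t-1} + eta_t,  eta_t ~ N(0, q)
#   obs:    y_t = b_t + eps_t,      eps_t ~ N(0, r)
T, q_true, r_true = 100, 0.05, 0.5
states = np.cumsum(rng.normal(0, np.sqrt(q_true), T)) + 2.0
y = states + rng.normal(0, np.sqrt(r_true), T)

def neg_marginal_loglik(log_params):
    """Exact marginal likelihood via the Kalman filter."""
    q, r = np.exp(log_params)
    b, p = y[0], 10.0                      # rough diffuse initialisation
    ll = 0.0
    for t in range(1, T):
        p_pred = p + q                     # predict
        f = p_pred + r                     # innovation variance
        v = y[t] - b                       # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p_pred / f                     # update
        b = b + k * v
        p = (1 - k) * p_pred
    return -ll

fit = minimize(neg_marginal_loglik, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
q_hat, r_hat = np.exp(fit.x)
print(f"estimated process variance {q_hat:.3f}, observation variance {r_hat:.3f}")
```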

7.
We adapt existing statistical modeling techniques for social networks to study consumption data observed in trophic food webs. These data describe the feeding volume (non-negative) among organisms grouped into nodes, called trophic species, that form the food web. Model complexity arises due to the extensive amount of zeros in the data, as each node in the web is predator/prey to only a small number of other trophic species. Many of the zeros are regarded as structural (non-random) in the context of feeding behavior. The presence of basal prey and top predator nodes (those who never consume and those who are never consumed, with probability 1) creates additional complexity to the statistical modeling. We develop a special statistical social network model to account for such network features. The model is applied to two empirical food webs; focus is on the web for which the population size of seals is of concern to various commercial fisheries.

8.
In Stein's 1959 example, for any sample with n sufficiently large, there is a confidence set embedded simultaneously within two regular confidence belts—one with coverage frequency smaller than an arbitrary positive ϵ, the other with coverage frequency larger than 1 − ϵ. Thus, Stein's example may be seen as an extreme case of mutually conflicting confidence statements, illustrating a possibility anticipated and denounced by Fisher.

9.
In this article we present a technique for implementing large-scale optimal portfolio selection. We use high-frequency daily data to capture valuable statistical information in asset returns. We describe several statistical issues involved in quantitative approaches to portfolio selection. Our methodology applies to large-scale portfolio-selection problems in which the number of possible holdings is large relative to the estimation period provided by historical data. We illustrate our approach on an equity database that consists of stocks from the Standard and Poor's index, and we compare our portfolios to this benchmark index. Our methodology differs from the usual quadratic programming approach to portfolio selection in three ways: (1) We employ informative priors on the expected returns and variance-covariance matrices, (2) we use daily data for estimation purposes, with upper and lower holding limits for individual securities, and (3) we use a dynamic asset-allocation approach that is based on reestimating and then rebalancing the portfolio weights on a prespecified time window. The key inputs to the optimization process are the predictive distributions of expected returns and the predictive variance-covariance matrix. We describe the statistical issues involved in modeling these inputs for high-dimensional portfolio problems in which our data frequency is daily. In our application, we find that our optimal portfolio outperforms the underlying benchmark.

10.
Bioequivalence (BE) is required for approving a generic drug. The two one-sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration-time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross-over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference-scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross-over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
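A compact sketch of the TOST/90% confidence interval machinery the abstract refers to, using hypothetical within-subject log(AUC) differences rather than a fitted cross-over model: two one-sided t tests against the usual 0.80–1.25 limits, together with the equivalent 90% CI check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical within-subject log(AUC) differences (test - reference) from a
# 2x2 cross-over; in practice these come from a fitted cross-over model.
n = 24
d = rng.normal(loc=0.05, scale=0.25, size=n)    # log-scale differences

mean_d, se_d = d.mean(), d.std(ddof=1) / np.sqrt(n)
theta_L, theta_U = np.log(0.80), np.log(1.25)   # standard ABE limits

# TOST = two one-sided t tests at level 0.05 each ...
t_lower = (mean_d - theta_L) / se_d     # H0: difference <= log(0.80)
t_upper = (mean_d - theta_U) / se_d     # H0: difference >= log(1.25)
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)

# ... which is equivalent to checking that the 90% CI lies inside the limits.
ci = mean_d + np.array([-1, 1]) * stats.t.ppf(0.95, df=n - 1) * se_d
print("90% CI on log scale:", ci, " GMR CI:", np.exp(ci))
print("ABE concluded:", max(p_lower, p_upper) < 0.05)
```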

11.
I describe how developments over the past 25 years in computing, funding, personnel, purpose, and training have affected academic statistical consulting centers and discuss how these developments and trends point to a range of potential futures. At one extreme, academic statistical consulting centers fail to adapt to competition from other disciplines in an increasingly fragmented market for statistical consulting and spiral downward toward irrelevancy and extinction. At the other extreme, purpose-driven academic statistical consulting centers constantly increase their impact in a virtuous cycle, leading the way toward the profession of statistics having greater positive impact on society. I conclude with actions to take to assure a robust future and increased impact for academic statistical consulting centers.

12.
The issue of normalization arises whenever two different values for a vector of unknown parameters imply the identical economic model. A normalization implies not just a rule for selecting which among equivalent points to call the maximum likelihood estimate (MLE), but also governs the topography of the set of points that go into a small-sample confidence interval associated with that MLE. A poor normalization can lead to multimodal distributions, disjoint confidence intervals, and very misleading characterizations of the true statistical uncertainty. This paper introduces an identification principle as a framework upon which a normalization should be imposed, according to which the boundaries of the allowable parameter space should correspond to loci along which the model is locally unidentified. We illustrate these issues with examples taken from mixture models, structural vector autoregressions, and cointegration models.

13.
This article introduces principles of learning based on research in cognitive science that help explain how learning works. We adapt these principles to the teaching of statistical practice and illustrate the application of these principles to the curricular design of a new master's degree program in applied statistics. We emphasize how these principles can be used to improve instruction not only at the course level but also at the program level.

14.
We consider the problem of statistical inference on the parameters of the three-parameter power function distribution based on a full unordered sample of observations or a type II censored ordered sample of observations. The inference philosophy used is the theory of structural inference. We state inference procedures which yield inferential statements about the three unknown parameters. A numerical example is given to illustrate these procedures. It is seen that within the context of this example the inference procedures of this paper do not encounter certain difficulties associated with classical maximum likelihood based procedures. Indeed it has been our numerical experience that this behavior is typical within the context of that subclass of the three-parameter power function distribution to which this example belongs.

15.
Confidence intervals provide a way to determine plausible values for a population parameter. They are omnipresent in research articles involving statistical analyses. Appropriately, a key statistical literacy learning objective is the ability to interpret and understand confidence intervals in a wide range of settings. As instructors, we devote a considerable amount of time and effort to ensure that students master this topic in introductory courses and beyond. Yet, studies continue to find that confidence intervals are commonly misinterpreted and that even experts have trouble calibrating their individual confidence levels. In this article, we present a 10-min trivia game-based activity that addresses these misconceptions by exposing students to confidence intervals from a personal perspective. We describe how the activity can be integrated into a statistics course as a one-time activity or with repetition at intervals throughout a course, discuss results of using the activity in class, and present possible extensions. Supplementary materials for this article are available online.
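The misconception the activity targets is the repeated-sampling meaning of a confidence level. A short simulation (hypothetical normal data, not part of the trivia activity itself) makes the point: roughly 95% of intervals constructed this way cover the true mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
mu_true, sigma, n, reps = 10.0, 3.0, 25, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(mu_true, sigma, n)
    half = stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half <= mu_true <= sample.mean() + half)

print(f"empirical coverage: {covered / reps:.3f}")   # close to 0.95
```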

16.
This paper is concerned with testing and dating structural breaks in the dependence structure of multivariate time series. We consider a cumulative sum (CUSUM) type test for constant copula-based dependence measures, such as Spearman's rank correlation and quantile dependencies. The asymptotic null distribution is not known in closed form and critical values are estimated by an i.i.d. bootstrap procedure. We analyze size and power properties in a simulation study under different dependence measure settings, such as skewed and fat-tailed distributions. To date breakpoints and to decide whether two estimated break locations belong to the same break event, we propose a pivot confidence interval procedure. Finally, we apply the test to the historical data of 10 large financial firms during the last financial crisis from 2002 to mid-2013.
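A simplified sketch of the ingredients: a CUSUM-type scan over candidate breakpoints of the difference in Spearman's rho before and after the split, calibrated by an i.i.d. bootstrap of whole observation pairs. The data, the particular CUSUM functional, the trimming, and the bootstrap scheme are all assumptions for illustration and are not the paper's exact test or its pivot confidence interval procedure for dating breaks.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)

# Hypothetical bivariate returns with a dependence break at the midpoint
n = 200
z = rng.normal(size=(n, 2))
z[n // 2:, 1] = 0.7 * z[n // 2:, 0] + np.sqrt(1 - 0.7 ** 2) * z[n // 2:, 1]

def cusum_stat(xy, trim=20):
    """Max over candidate breakpoints of the weighted difference between
    Spearman's rho before and after the split (a simplified CUSUM functional)."""
    m = len(xy)
    stats_k = []
    for k in range(trim, m - trim):
        r1 = spearmanr(xy[:k, 0], xy[:k, 1])[0]
        r2 = spearmanr(xy[k:, 0], xy[k:, 1])[0]
        w = np.sqrt(k * (m - k) / m)        # CUSUM-type weighting
        stats_k.append(w * abs(r1 - r2))
    return max(stats_k), trim + int(np.argmax(stats_k))

obs_stat, k_hat = cusum_stat(z)

# i.i.d. bootstrap of whole pairs: resampling destroys any change point while
# keeping cross-sectional dependence, giving a rough null reference distribution.
boot = [cusum_stat(z[rng.integers(0, n, n)])[0] for _ in range(100)]
p_value = np.mean(np.array(boot) >= obs_stat)
print(f"statistic {obs_stat:.3f} at k = {k_hat}, bootstrap p-value {p_value:.3f}")
```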

17.
Relative potency estimations in both multiple parallel-line and slope-ratio assays involve construction of simultaneous confidence intervals for ratios of linear combinations of general linear model parameters. The key problem here is that of determining multiplicity adjusted percentage points of a multivariate t-distribution, the correlation matrix R of which depends on the unknown relative potency parameters. Several methods have been proposed in the literature on how to deal with R. In this article, we introduce a method based on an estimate of R (also called the plug-in approach) and compare it with various methods including conservative procedures based on probability inequalities. Attention is restricted to parallel-line assays though the theory is applicable for any ratios of coefficients in the general linear model. Extension of the plug-in method to linear mixed effect models is also discussed. The methods will be compared with respect to their simultaneous coverage probabilities via Monte Carlo simulations. We also evaluate the methods in terms of confidence interval width through application to data from a multiple parallel-line assay.

18.
Many mathematical models involve input parameters, which are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest (output of the model). One of the statistical tools used to quantify the influence of each input variable on the output is the Sobol sensitivity index. We consider the statistical estimation of this index from a finite sample of model outputs. We study asymptotic and non-asymptotic properties of two estimators of Sobol indices. These properties are applied to significance tests and estimation by confidence intervals.
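To make the estimation problem concrete, the sketch below uses a standard pick-freeze Monte Carlo estimator of first-order Sobol indices on a toy linear model whose true indices are known. The model, sample size, and this particular estimator variant are assumptions for illustration; the paper studies the asymptotic and non-asymptotic properties of such estimators rather than this specific implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    """Toy model with known first-order Sobol indices: for Y = x1 + 2*x2 + 3*x3
    with independent standard-normal inputs, S = (1, 4, 9) / 14."""
    return x[:, 0] + 2 * x[:, 1] + 3 * x[:, 2]

def first_order_sobol(model, d, n=100_000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices
    (one common variant; others differ in how the mean term is estimated)."""
    A = rng.normal(size=(n, d))
    B = rng.normal(size=(n, d))
    yA, yB = model(A), model(B)
    var_y = yA.var()
    S = np.empty(d)
    for i in range(d):
        Ci = B.copy()
        Ci[:, i] = A[:, i]          # "freeze" input i from A, resample the rest
        S[i] = (np.mean(yA * model(Ci)) - yA.mean() * yB.mean()) / var_y
    return S

print(first_order_sobol(model, d=3))   # roughly [0.07, 0.29, 0.64]
```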

19.
We revisit the question about optimal performance of goodness-of-fit tests based on sample spacings. We reveal the importance of centering of the test-statistic and of the sample size when choosing a suitable test-statistic from a family of statistics based on power transformations of sample spacings. In particular, we find that a test-statistic based on empirical estimation of the Hellinger distance between hypothetical and data-supported distribution does possess some optimality properties for moderate sample sizes. These findings confirm earlier statements about the robust behaviour of the test-statistic based on the Hellinger distance and are in contrast to findings about the asymptotic behaviour (as the sample size approaches infinity) of statistics such as Moran's and/or Greenwood's statistic. We include simulation results that support our findings.
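A sketch of a spacings-based test in this spirit: transform the data to (0,1) by the hypothesized CDF, form the sample spacings, compute a Hellinger-type statistic from sqrt((n+1) D_i), and calibrate it by Monte Carlo under the null. The exact centring and scaling in the paper may differ; the null distribution, however, depends only on the sample size, so the Monte Carlo p-value is exact for whatever statistic one chooses.

```python
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(17)

def spacings(u):
    """Spacings of a (0,1) sample obtained by the probability integral transform."""
    return np.diff(np.concatenate(([0.0], np.sort(u), [1.0])))

def hellinger_spacing_stat(u):
    """A Hellinger-type member of the power-spacings family (the paper's exact
    centring and scaling may differ): deviations of sqrt((n+1) D_i) from 1."""
    d = spacings(u)
    return np.sum((np.sqrt((len(u) + 1) * d) - 1.0) ** 2)

# Test H0: data ~ Exponential(rate 1) for a hypothetical sample
x = rng.gamma(shape=1.3, scale=1.0, size=50)      # mild departure from H0
u = expon.cdf(x)                                  # PIT under the null
observed = hellinger_spacing_stat(u)

# Monte Carlo calibration: under H0 the transformed sample is uniform, so the
# null distribution of the statistic depends only on the sample size.
null_stats = [hellinger_spacing_stat(rng.uniform(size=50)) for _ in range(5000)]
p_value = np.mean(np.array(null_stats) >= observed)
print(f"statistic {observed:.3f}, Monte Carlo p-value {p_value:.3f}")
```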

20.
An example is given of a uniformly most accurate unbiased confidence belt which yields absurd confidence statements with 100% occurrence. In several known examples, as well as in the 100%-occurrence counterexample, an optimal confidence belt provides absurd statements because it is inclusion-inconsistent with either a null or an all-inclusive belt or both. It is concluded that confidence-theory optimality criteria alone are inadequate for practice, and that a consistency criterion is required. An approach based upon inclusion consistency of belts [Cγ(x) ⊂ Cγ′(x), for some x, implies γ ≤ γ′ for confidence coefficients] is suggested for exact interval estimation in continuous parametric models. Belt inclusion consistency, the existence of a proper-pivotal vector [a pivotal vector T(X, θ) such that the effective range of T(x, ·) is independent of x], and the existence of a confidence distribution are proven mutually equivalent. This consistent approach being restrictive, it is shown, using Neyman's anomalous 1954 example, how to determine whether any given parametric function can be estimated consistently and exactly or whether a consistent nonexact solution must be attempted.
