11.
This article considers the uncertainty of a proportion based on a stratified random sample of a small population. Using the hypergeometric distribution, a Clopper–Pearson type upper confidence bound is presented. Another frequentist approach, which uses the estimated variance of the proportion estimator, is also considered, as is a Bayesian alternative. These methods are demonstrated with an illustrative example. Some aspects of planning, that is, the impact of specified strata sample sizes on uncertainty, are studied through a simulation study.
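As an illustration of the exact construction, the sketch below computes a Clopper–Pearson type upper bound for a single simple random sample from a finite population (one stratum), using only scipy's hypergeometric distribution. The function name, search loop and example numbers are ours, not the article's; the article's stratified version is more involved.

```python
# A minimal sketch, assuming a simple random sample of size n from a finite
# population of size N_pop containing an unknown number M of "successes".
from scipy.stats import hypergeom

def hypergeom_upper_bound(x, n, N_pop, alpha=0.05):
    """Largest number of population successes M not rejected by the exact
    one-sided test, i.e. the largest M with P(X <= x | M) > alpha."""
    upper = x                       # M is at least the observed count
    for M in range(x, N_pop - (n - x) + 1):
        # scipy's hypergeom(M, n, N): M = population size,
        # n = successes in the population, N = sample size
        if hypergeom(N_pop, M, n).cdf(x) > alpha:
            upper = M
        else:
            break                   # the cdf is decreasing in M
    return upper, upper / N_pop

# Example: 3 successes in a sample of 25 from a population of 100
M_up, p_up = hypergeom_upper_bound(x=3, n=25, N_pop=100)
print(M_up, round(p_up, 3))
```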
12.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one‐sided p‐values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood has all nuisance parameters removed, and is appropriate for meta‐analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
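A minimal sketch of the basic idea, for the textbook case of a normal mean with the exact t pivot: the confidence distribution's quantiles span equal-tailed confidence intervals, and its value at a hypothesized parameter is a one-sided p-value. This toy construction illustrates the definition only, not the paper's exponential-family or bootstrap machinery.

```python
import numpy as np
from scipy.stats import t

def confidence_distribution(y):
    """Confidence distribution C(mu) and its quantile function for a normal
    mean, based on the exact t pivot (mu - xbar) / (s / sqrt(n))."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    xbar, se = y.mean(), y.std(ddof=1) / np.sqrt(n)
    cdf = lambda mu: t.cdf((mu - xbar) / se, df=n - 1)    # cumulative confidence
    quantile = lambda q: xbar + se * t.ppf(q, df=n - 1)   # confidence quantile
    return cdf, quantile

rng = np.random.default_rng(0)
C, Q = confidence_distribution(rng.normal(1.0, 2.0, size=20))
print("95% CI:", (round(Q(0.025), 3), round(Q(0.975), 3)))  # spanned by quantiles
print("one-sided p-value for H0: mu <= 0:", round(C(0.0), 4))
```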
13.
Recently, Ong and Mukerjee [Probability matching priors for two-sided tolerance intervals in balanced one-way and two-way nested random effects models. Statistics. 2011;45:403–411] developed two-sided Bayesian tolerance intervals, with approximate frequentist validity, for a future observation in balanced one-way and two-way nested random effects models. These were obtained using probability matching priors (PMP). On the other hand, Krishnamoorthy and Lian [Closed-form approximate tolerance intervals for some general linear models and comparison studies. J Stat Comput Simul. 2012;82:547–563] studied closed-form approximate tolerance intervals obtained by the modified large-sample (MLS) approach. We compare the performance of these two approaches for normal as well as non-normal error distributions. Monte Carlo simulation is used to evaluate the resulting tolerance intervals with regard to achieved confidence levels and expected widths. It turns out that the PMP tolerance intervals are less conservative for data with a large number of classes and a small number of observations per class, while the MLS procedure is preferable for smaller sample sizes.
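The comparison criterion can be sketched in code: for a balanced one-way random effects model, simulate data, form a candidate tolerance interval, and estimate the probability that its content for a future observation reaches the nominal proportion β. The interval used below (mean ± normal quantile × total SD) is a deliberately naive placeholder, not the PMP or MLS interval, so the harness is the point, not the numbers.

```python
# A minimal Monte Carlo sketch for the balanced one-way random effects model
# y_ij = mu + a_i + e_ij with a_i ~ N(0, sa^2), e_ij ~ N(0, se^2).
import numpy as np
from scipy.stats import norm

def achieved_confidence(interval_fn, mu, sa, se, I, J, beta=0.90,
                        reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    total_sd = np.sqrt(sa**2 + se**2)
    hits = 0
    for _ in range(reps):
        y = mu + rng.normal(0, sa, size=(I, 1)) + rng.normal(0, se, size=(I, J))
        lo, hi = interval_fn(y)
        # content of [lo, hi] for a future observation ~ N(mu, sa^2 + se^2)
        content = norm.cdf((hi - mu) / total_sd) - norm.cdf((lo - mu) / total_sd)
        hits += content >= beta
    return hits / reps

def naive_interval(y, beta=0.90):
    m, s = y.mean(), y.std(ddof=1)       # ignores the variance components
    k = norm.ppf((1 + beta) / 2)
    return m - k * s, m + k * s

print(achieved_confidence(naive_interval, mu=0.0, sa=1.0, se=1.0, I=10, J=5))
```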
14.
In statistical practice, inferences on standardized regression coefficients are often required, but they are complicated by the fact that the standardized coefficients are nonlinear functions of the parameters, so standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals for the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inference on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult, if not impossible, from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
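A minimal sketch of such a posterior simulation under the standard noninformative prior p(β, σ²) ∝ 1/σ², for which exact independent draws are available (σ² from a scaled inverse chi-square, β from a conditional normal), so the samples have no autocorrelation. The standardization b_j·sd(x_j)/sd(y) is one common convention; the paper's algorithms, priors and decision procedures are richer than this.

```python
import numpy as np

def standardized_coef_posterior(X, y, draws=5000, seed=0):
    """Independent posterior draws of standardized coefficients under the
    noninformative prior p(beta, sigma^2) ~ 1/sigma^2 (no intercept here)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    sse = np.sum((y - X @ beta_hat) ** 2)
    # sigma^2 | y ~ SSE / chi^2_{n-p};  beta | sigma^2, y ~ N(beta_hat, sigma^2 (X'X)^{-1})
    sigma2 = sse / rng.chisquare(n - p, size=draws)
    L = np.linalg.cholesky(XtX_inv)
    z = rng.standard_normal((draws, p))
    betas = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)
    scale = X.std(axis=0, ddof=1) / y.std(ddof=1)   # sd(x_j) / sd(y)
    return betas * scale

# Toy data with collinear predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2)); X[:, 1] += 0.9 * X[:, 0]
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100)
post = standardized_coef_posterior(X, y)
print(np.percentile(post, [2.5, 97.5], axis=0))     # 95% credible intervals
```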
15.
This paper investigates the focused information criterion and plug-in average for vector autoregressive models with local-to-zero misspecification. These methods have the advantage of focusing on a quantity of interest rather than aiming at overall model fit. Any (sufficiently regular) function of the parameters can be used as a quantity of interest. We determine the asymptotic properties and elaborate on the role of the locally misspecified parameters. In particular, we show that the inability to consistently estimate locally misspecified parameters translates into suboptimal selection and averaging. We apply this framework to impulse response analysis. A Monte Carlo simulation study supports our claims.
16.
Studies of perceived risk measured on continuous [0,100] scales, which can be transformed to the unit interval, were recently introduced in psychometrics, but zeros and ones are commonly observed. Motivated by this, we introduce a full set of inferential tools that allows for augmented and limited data modeling. We consider parameter estimation, residual analysis, influence diagnostics and model selection for zero-and/or-one augmented beta rectangular (ZOABR) regression models and their nested models, based on a new parameterization of the beta rectangular distribution. Unlike other alternatives, we perform maximum-likelihood estimation using a combination of the EM algorithm (for the continuous part) and the Fisher scoring algorithm (for the discrete part). We also consider link functions other than the usual logistic link for modeling the response mean. Using randomized quantile residuals, (local) influence diagnostics and model selection tools, we identify the ZOABR regression model as the best one. Extensive simulation studies indicate that all the developed tools work properly. Finally, we discuss the use of this type of model for psychometric data; applications of the developed methods go beyond psychometric data, since they can be useful whenever the response variable is bounded, whether or not the limits are included.
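For concreteness, a sketch of the density and a sampler for a zero-and/or-one augmented beta rectangular distribution, writing the beta rectangular part as a uniform/beta mixture with weight θ and the beta part in the mean-precision form (μ, φ). The paper's own parameterization and regression structure differ from this plain version.

```python
# Point masses p0 at 0 and p1 at 1; on (0,1), a mixture of a uniform density
# (weight theta) and a Beta(mu*phi, (1-mu)*phi) density (weight 1 - theta).
import numpy as np
from scipy.stats import beta

def zoabr_pdf(x, p0, p1, theta, mu, phi):
    """Density/mass of the ZOABR distribution at points x in [0, 1]."""
    a, b = mu * phi, (1 - mu) * phi                        # beta shapes
    cont = theta * 1.0 + (1 - theta) * beta.pdf(x, a, b)   # beta rectangular part
    dens = (1 - p0 - p1) * cont
    dens = np.where(x == 0.0, p0, dens)
    dens = np.where(x == 1.0, p1, dens)
    return dens

def zoabr_rvs(n, p0, p1, theta, mu, phi, seed=0):
    rng = np.random.default_rng(seed)
    a, b = mu * phi, (1 - mu) * phi
    u = rng.uniform(size=n)
    cont = np.where(u < theta, rng.uniform(size=n),
                    beta.rvs(a, b, size=n, random_state=rng))
    comp = rng.choice(3, size=n, p=[p0, p1, 1 - p0 - p1])  # 0, 1 or continuous
    return np.where(comp == 0, 0.0, np.where(comp == 1, 1.0, cont))

x = zoabr_rvs(10000, p0=0.05, p1=0.10, theta=0.3, mu=0.6, phi=8.0)
print((x == 0).mean(), (x == 1).mean(), x[(x > 0) & (x < 1)].mean())
```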
17.
A large‐sample approximation of the posterior distribution of partially identified structural parameters is derived for models that can be indexed by an identifiable finite‐dimensional reduced‐form parameter vector. It is used to analyze the differences between Bayesian credible sets and frequentist confidence sets. We define a plug‐in estimator of the identified set and show that asymptotically Bayesian highest‐posterior‐density sets exclude parts of the estimated identified set, whereas it is well known that frequentist confidence sets extend beyond the boundaries of the estimated identified set. We recommend reporting estimates of the identified set and information about the conditional prior along with Bayesian credible sets. A numerical illustration for a two‐player entry game is provided.
18.
Configural Frequency Analysis (CFA) asks whether a cell in a cross-classification contains more or fewer cases than expected with respect to some base model. This base model is specified such that cells with more cases than expected (also called types) can be interpreted from a substantive perspective. The same applies to cells with fewer cases than expected (antitypes). This article gives an introduction to both frequentist and Bayesian approaches to CFA. Specification of base models, testing, and protection are discussed. In an example, Prediction CFA and two-sample CFA are illustrated. The discussion focuses on the differences between CFA and modelling.
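A minimal frequentist CFA sketch for a two-way table with independence as the base model: expected counts come from the margins, each cell gets an exact binomial test, and Bonferroni adjustment provides the protection mentioned above; cells significantly above expectation are flagged as types, those below as antitypes. Base model, test and protection scheme are all choices, and the article's treatment is more general.

```python
import numpy as np
from scipy.stats import binomtest

def cfa(table, alpha=0.05):
    """Per-cell CFA under an independence base model with Bonferroni protection."""
    table = np.asarray(table)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    alpha_adj = alpha / table.size               # Bonferroni protection
    results = []
    for (i, j), obs in np.ndenumerate(table):
        p_cell = expected[i, j] / n              # cell probability under base model
        pval = binomtest(int(obs), int(n), p_cell).pvalue
        label = "-"
        if pval < alpha_adj:
            label = "type" if obs > expected[i, j] else "antitype"
        results.append(((i, j), int(obs), round(expected[i, j], 1), pval, label))
    return results

for row in cfa([[30, 5], [10, 55]]):
    print(row)
```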
19.
Panel count data arise in many fields, and a number of estimation procedures have been developed, along with two procedures for variable selection. In this paper, we discuss model selection and parameter estimation together. For the former, a focused information criterion (FIC) is presented, and for the latter, a frequentist model average (FMA) estimation procedure is developed. A main advantage of the FIC, and its difference from existing model selection methods, is that it emphasizes the accuracy of the estimation of the parameters of interest, rather than of all parameters. Further efficiency gains can be achieved by the FMA estimation procedure since, unlike existing methods, it takes into account the variability introduced at the model selection stage. Asymptotic properties of the proposed estimators are established, and a simulation study suggests that the proposed methods work well in practical situations. An illustrative example is also provided.
20.
Consider a linear regression model with independent normally distributed errors. Suppose that the scalar parameter of interest is a specified linear combination of the components of the regression parameter vector. Also suppose that we have uncertain prior information that a parameter vector, consisting of specified distinct linear combinations of these components, takes a given value. Part of our evaluation of a frequentist confidence interval for the parameter of interest is the scaled expected length, defined to be the expected length of this confidence interval divided by the expected length of the standard confidence interval for this parameter with the same confidence coefficient. We say that a confidence interval for the parameter of interest utilizes this uncertain prior information if (a) its scaled expected length is substantially less than one when the prior information is correct, (b) the maximum value of the scaled expected length is not too large, and (c) the interval reverts to the standard confidence interval, with the same confidence coefficient, when the data happen to strongly contradict the prior information. We present a new confidence interval for a scalar parameter of interest, with specified confidence coefficient, that utilizes this uncertain prior information. A factorial experiment with one replicate is used to illustrate the application of this new confidence interval.
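The evaluation criterion itself is easy to simulate: estimate the expected length of a candidate interval and divide by the expected length of the standard t-interval with the same confidence coefficient. In the sketch below the candidate is a crude pretest interval invented purely to exercise the harness; it does not maintain coverage and is not the interval proposed in the paper.

```python
# A minimal Monte Carlo sketch of the scaled expected length criterion for a
# normal mean, with uncertain prior information that mu = 0.
import numpy as np
from scipy import stats

def scaled_expected_length(interval_fn, mu, sigma=1.0, n=10, level=0.95,
                           reps=20000, seed=2):
    rng = np.random.default_rng(seed)
    tcrit = stats.t.ppf((1 + level) / 2, df=n - 1)
    len_candidate = len_standard = 0.0
    for _ in range(reps):
        y = rng.normal(mu, sigma, size=n)
        se = y.std(ddof=1) / np.sqrt(n)
        lo, hi = interval_fn(y, level)
        len_candidate += hi - lo
        len_standard += 2 * tcrit * se           # standard t-interval length
    return len_candidate / len_standard

def pretest_interval(y, level, mu0=0.0):
    """Toy candidate: shrink the t-interval when a pretest favors mu = mu0."""
    n = len(y)
    se = y.std(ddof=1) / np.sqrt(n)
    half = stats.t.ppf((1 + level) / 2, df=n - 1) * se
    if abs((y.mean() - mu0) / se) < 1.0:         # prior information looks correct
        half *= 0.7
    return y.mean() - half, y.mean() + half

for mu in (0.0, 0.5, 2.0):                       # ratio < 1 near the prior value
    print(mu, round(scaled_expected_length(pretest_interval, mu), 3))
```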