11.
In this article, we deal with a two-parameter exponentiated half-logistic distribution. We consider the estimation of the unknown parameters, the associated reliability function and the hazard rate function under progressive Type II censoring. Maximum likelihood estimates (MLEs) are proposed for the unknown quantities. Bayes estimates are derived with respect to the squared error, LINEX and entropy loss functions. Approximate explicit expressions for all Bayes estimates are obtained using the Lindley method. We also use an importance sampling scheme to compute the Bayes estimates. Markov chain Monte Carlo samples are further used to produce credible intervals for the unknown parameters. Asymptotic confidence intervals are constructed using the normality property of the MLEs. For comparison purposes, bootstrap-p and bootstrap-t confidence intervals are also constructed. A comprehensive numerical study is performed to compare the proposed estimates. Finally, a real-life data set is analysed to illustrate the proposed methods of estimation.
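A minimal sketch of the maximum likelihood step only, assuming the usual exponentiated half-logistic parametrization F(x) = [(1 − e^{−λx})/(1 + e^{−λx})]^α and a hypothetical progressively Type II censored sample with removal counts R_i; the data, starting values and optimizer choice are illustrative, not the authors'.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical progressively Type II censored sample: observed failure
# times x and the number of surviving units R[i] removed at each failure.
x = np.array([0.12, 0.35, 0.48, 0.71, 0.95, 1.20, 1.64, 2.10])
R = np.array([2, 0, 1, 0, 2, 0, 1, 3])

def neg_loglik(theta, x, R):
    """Negative log-likelihood under progressive Type II censoring:
    sum log f(x_i) + sum R_i log(1 - F(x_i)), with F the exponentiated
    half-logistic CDF [(1 - e^{-lam x})/(1 + e^{-lam x})]^alpha."""
    alpha, lam = np.exp(theta)          # log-parametrization keeps both > 0
    t = np.exp(-lam * x)
    G = (1.0 - t) / (1.0 + t)           # half-logistic CDF
    logf = (np.log(alpha) + np.log(2.0 * lam) - lam * x
            - 2.0 * np.log1p(t) + (alpha - 1.0) * np.log(G))
    logS = np.log1p(-G ** alpha)        # log-survival of exponentiated CDF
    return -(logf.sum() + (R * logS).sum())

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x, R),
               method="Nelder-Mead")
alpha_hat, lam_hat = np.exp(res.x)
print(f"MLEs: alpha = {alpha_hat:.3f}, lambda = {lam_hat:.3f}")
```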
12.
In a multilevel model for complex survey data, the weight-inflated estimators of variance components can be biased. We propose a resampling method to correct this bias. The performance of the bias-corrected estimators is studied through simulations using populations generated from a simple random effects model. The simulations show that, without lowering the precision, the proposed procedure can reduce the bias of the estimators, especially for designs that are both informative and have small cluster sizes. Application of these resampling procedures to data from an artificial workplace survey provides further evidence for the empirical value of this method. The Canadian Journal of Statistics 40: 150–171; 2012 © 2012 Statistical Society of Canada
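A sketch of the generic resampling bias correction behind this idea, assuming a simple balanced one-way random effects population and using a plain cluster bootstrap as a stand-in for the authors' design-based procedure; the ANOVA-type variance component estimator and all data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-way random effects data: y_ij = u_i + e_ij.
k, n = 30, 5                                  # clusters, units per cluster
u = rng.normal(0.0, np.sqrt(2.0), size=k)     # true sigma_u^2 = 2
y = u[:, None] + rng.normal(0.0, 1.0, size=(k, n))

def sigma_u2(y):
    """ANOVA (method-of-moments) estimator of the between-cluster variance."""
    n = y.shape[1]
    msb = n * y.mean(axis=1).var(ddof=1)      # between mean square
    msw = y.var(axis=1, ddof=1).mean()        # within mean square
    return (msb - msw) / n

theta_hat = sigma_u2(y)

# Bootstrap bias correction: resample whole clusters with replacement,
# then set theta_bc = 2 * theta_hat - mean(theta*).
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, k, size=k)
    boot[b] = sigma_u2(y[idx])
theta_bc = 2.0 * theta_hat - boot.mean()
print(f"plug-in: {theta_hat:.3f}, bias-corrected: {theta_bc:.3f}")
```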
13.
Wild Bootstrapping in Finite Populations with Auxiliary Information
Consider a finite population u, which can be viewed as a realization of a super-population model. A simple ratio model (linear regression without intercept) with heteroscedastic errors is supposed to have generated u. A random sample is drawn without replacement from u. In this set-up, a two-stage wild bootstrap resampling scheme, as well as several other useful forms of bootstrapping in finite populations, will be considered. Some asymptotic results for various bootstrap approximations for normalized and Studentized versions of the well-known ratio and regression estimators are given. Bootstrap-based confidence intervals for the population total and for the regression parameter of the underlying ratio model are also discussed.
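A single-stage sketch of the wild bootstrap for the ratio model y_i = βx_i + ε_i with heteroscedastic errors, using Rademacher multipliers; the paper's two-stage, finite-population version additionally mimics the without-replacement sampling design, which is omitted here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sample following a ratio model with variance growing in x.
x = rng.uniform(1.0, 10.0, size=80)
y = 2.5 * x + rng.normal(0.0, 0.5 * np.sqrt(x))

beta_hat = y.sum() / x.sum()                # ratio estimator of beta
resid = y - beta_hat * x

# Wild bootstrap: perturb each residual with an independent mean-zero,
# unit-variance multiplier (Rademacher signs) so the resampled errors
# preserve the observation-specific variances.
B = 2000
beta_star = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=x.size)
    y_star = beta_hat * x + resid * v
    beta_star[b] = y_star.sum() / x.sum()

lo, hi = np.percentile(beta_star, [2.5, 97.5])
print(f"beta_hat = {beta_hat:.3f}, 95% wild-bootstrap CI: ({lo:.3f}, {hi:.3f})")
```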
14.
Nonparametric bootstrapping for hierarchical data is relatively underdeveloped and not straightforward: it certainly does not make sense to use simple nonparametric resampling, which treats all observations as independent. We provide several resampling strategies for hierarchical data and prove that nonparametric bootstrapping at the highest level (sampling the highest-level units with replacement, while sampling all lower-level units without replacement within each selected unit) is better than bootstrapping at lower levels; we also analyze real data and perform simulation studies.
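A minimal sketch of the recommended strategy: resample the highest-level units with replacement and keep every lower-level observation inside a selected unit intact. The two-level structure and the statistic (a grand mean) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-level data: a list of clusters, each an array of
# lowest-level observations of varying size.
clusters = [rng.normal(m, 1.0, size=rng.integers(4, 9))
            for m in rng.normal(0.0, 2.0, size=25)]

def highest_level_bootstrap(clusters, stat, B=2000, rng=rng):
    """Resample clusters (highest level) with replacement; within each
    selected cluster keep all observations, i.e. sample them without
    replacement."""
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(clusters), size=len(clusters))
        sample = np.concatenate([clusters[i] for i in idx])
        out[b] = stat(sample)
    return out

boot = highest_level_bootstrap(clusters, np.mean)
print(f"SE of the grand mean (cluster bootstrap): {boot.std(ddof=1):.3f}")
```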
15.
The bootstrap principle is justified for robust M-estimates in regression. (A short proof justifying bootstrapping the empirical process is also given.)
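A sketch of pairs bootstrapping a Huber M-estimator in linear regression, using statsmodels' RLM; the synthetic heavy-tailed data and the percentile interval are illustrative choices, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Synthetic regression data with heavy-tailed (t_3) errors.
n = 100
X = sm.add_constant(rng.uniform(0.0, 5.0, size=n))
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)

# Huber M-estimate on the original sample.
beta_hat = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params

# Pairs bootstrap: resample (x_i, y_i) jointly and refit the M-estimator.
B = 500
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = sm.RLM(y[idx], X[idx], M=sm.robust.norms.HuberT()).fit().params

lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"slope = {beta_hat[1]:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```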
16.
Within a DEA framework, this paper measures the eco-efficiency of agriculture in the Xinjiang Production and Construction Corps (新疆兵团) under specific environmental pressures, and uses truncated regression with bootstrap estimation to analyze the factors influencing agricultural eco-efficiency. The results show that the eco-efficiency of the Corps' agriculture is very low; under the given environmental pressures the differences in eco-efficiency between regimental farms are small, and ecological inefficiency in management is closely related to technical inefficiency. Excessive resource consumption and excessive pollutant discharge are the main causes of the low eco-efficiency; therefore, relying on agricultural scientific and technological progress, improving the skills of the agricultural workforce, and transforming the mode of agricultural development are important ways to raise the eco-efficiency of the Corps' agriculture.
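A sketch of the second-stage analysis only, bootstrapping the regression of precomputed first-stage efficiency scores on environmental covariates; the paper's truncated-regression step (in the Simar–Wilson spirit) is approximated here by a plain OLS fit, and the scores, covariates and variable names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical first-stage DEA efficiency scores in (0, 1] and two
# environmental covariates for 60 farms.
m = 60
Z = np.column_stack([np.ones(m), rng.uniform(0, 1, m), rng.uniform(0, 1, m)])
eff = np.clip(0.3 + 0.4 * Z[:, 1] - 0.2 * Z[:, 2]
              + rng.normal(0.0, 0.1, m), 1e-3, 1.0)

def fit(Z, eff):
    """OLS stand-in for the truncated regression of efficiency on covariates."""
    return np.linalg.lstsq(Z, eff, rcond=None)[0]

beta_hat = fit(Z, eff)

# Bootstrap the second-stage coefficients to get percentile intervals.
B = 1000
boot = np.empty((B, Z.shape[1]))
for b in range(B):
    idx = rng.integers(0, m, size=m)
    boot[b] = fit(Z[idx], eff[idx])

for j, name in enumerate(["const", "z1", "z2"]):
    lo, hi = np.percentile(boot[:, j], [2.5, 97.5])
    print(f"{name}: {beta_hat[j]:+.3f}  95% CI ({lo:+.3f}, {hi:+.3f})")
```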
17.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one‐sided p‐values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters, and is appropriate for meta‐analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
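A sketch of an exact confidence distribution for a normal mean based on the t-pivot, illustrating the opening claim that confidence intervals are spanned by quantiles of the confidence distribution; the data are synthetic and the construction is the textbook one, not the paper's bootstrap inversion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, size=20)
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

# Confidence distribution for mu from the pivot (xbar - mu)/(s/sqrt(n)):
# C(mu) = F_{t, n-1}( sqrt(n) (mu - xbar) / s ).
def cd(mu):
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

# Its quantiles span confidence intervals: the 2.5% and 97.5% quantiles
# of C are exactly the endpoints of the two-sided 95% t-interval.
q = xbar + s / np.sqrt(n) * stats.t.ppf([0.025, 0.975], df=n - 1)
print(f"95% interval from CD quantiles: ({q[0]:.3f}, {q[1]:.3f})")
print(f"C at the interval endpoints: {cd(q[0]):.3f}, {cd(q[1]):.3f}")
```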
18.
Multivariate control charts are powerful and simple visual tools for monitoring the quality of a process. This multivariate monitoring is carried out by considering simultaneously several correlated quality characteristics and by determining whether these characteristics are in control or out of control. In this paper, we propose a robust methodology using multivariate quality control charts for subgroups based on generalized Birnbaum–Saunders distributions and an adapted Hotelling statistic. This methodology is constructed for Phases I and II of control charts. We estimate the corresponding parameters with the maximum likelihood method and use parametric bootstrapping to obtain the distribution of the adapted Hotelling statistic. In addition, we consider the Mahalanobis distance to detect multivariate outliers and use it to assess the adequacy of the distributional assumption. A Monte Carlo simulation study is conducted to evaluate the proposed methodology and to compare it with a standard methodology. This study reports the good performance of our methodology. An illustration with real-world air quality data of Santiago, Chile, is provided. This illustration shows that the methodology is useful for alerting early episodes of extreme air pollution, thus preventing adverse effects on human health.
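A sketch of the Phase I logic with a multivariate normal stand-in for the generalized Birnbaum–Saunders family (which has no SciPy implementation): parameters are estimated from the pooled data and the control limit is a parametric-bootstrap quantile of the maximum subgroup Hotelling statistic. All data and the 99% limit are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical Phase I data: m subgroups of size n on p correlated variables.
m, n, p = 25, 5, 3
mu0 = np.zeros(p)
S0 = np.array([[1.0, 0.4, 0.2], [0.4, 1.0, 0.3], [0.2, 0.3, 1.0]])
data = rng.multivariate_normal(mu0, S0, size=(m, n))

def t2(data):
    """Hotelling T^2 of each subgroup mean against the pooled estimates."""
    flat = data.reshape(-1, data.shape[-1])
    mu, Sinv = flat.mean(axis=0), np.linalg.inv(np.cov(flat.T))
    d = data.mean(axis=1) - mu
    return data.shape[1] * np.einsum("ij,jk,ik->i", d, Sinv, d)

flat = data.reshape(-1, p)
mu_hat, S_hat = flat.mean(axis=0), np.cov(flat.T)

# Parametric bootstrap of the maximum subgroup statistic gives the UCL.
B = 2000
max_t2 = np.empty(B)
for b in range(B):
    sim = rng.multivariate_normal(mu_hat, S_hat, size=(m, n))
    max_t2[b] = t2(sim).max()
ucl = np.quantile(max_t2, 0.99)
print(f"bootstrap UCL: {ucl:.2f}; "
      f"out-of-control subgroups: {(t2(data) > ucl).sum()}")
```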
19.
Identifying mediators in variable chains as part of a causal mediation analysis can shed light on issues of causation, assessment, and intervention. However, coefficients and effect sizes in a causal mediation analysis are nearly always small. This can lead those less familiar with the approach to reject the results of causal mediation analysis. The current paper highlights five factors that contribute to small path coefficients in mediation research: loss of information when measuring relationships across time, controlling for prior levels of a predicted variable, adding control variables to the analysis, ignoring measurement error in one’s variables, and using multiple mediators. It is argued that these issues are best handled by increasing the statistical power of the analysis, identifying the optimal temporal interval between variables, using bootstrapped confidence intervals to analyze the results, and finding alternate ways of assessing the meaningfulness of the indirect effect.
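A sketch of the bootstrapped confidence interval recommended here, for the indirect effect a·b in the classic single-mediator model; the synthetic data, effect sizes and percentile method are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic single-mediator data: x -> m -> y with a small indirect effect.
n = 200
x = rng.normal(size=n)
m = 0.3 * x + rng.normal(size=n)               # a-path
y = 0.35 * m + 0.1 * x + rng.normal(size=n)    # b-path plus direct effect

def indirect(x, m, y):
    """a*b from the two mediation regressions (with intercepts)."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

ab_hat = indirect(x, m, y)

# Percentile bootstrap CI for the indirect effect.
B = 5000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {ab_hat:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```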
20.
Spatial econometric models estimated on big geo-located point data face at least two problems: limited computational capability and inefficient forecasting for new out-of-sample geo-points. This is because the spatial weights matrix W is defined for in-sample observations only, and because of the computational complexity. Machine learning models suffer from the same problem when kriging is used for predictions, so the problem remains unsolved. This paper presents a novel methodology for estimating spatial models on big data and predicting at new locations. The approach uses the bootstrap and tessellation to calibrate both the model and the space. The best bootstrapped model is selected with the PAM (Partitioning Around Medoids) algorithm by classifying the regression coefficients jointly, in a non-independent manner. Voronoi polygons for the geo-points used in the best model allow for a representative division of space. New out-of-sample points are assigned to tessellation tiles and linked to the spatial weights matrix in place of an original point, which makes it feasible to use the calibrated spatial models as a forecasting tool for new locations. There is no trade-off between forecast quality and computational efficiency in this approach. An empirical example illustrates a model for business locations and firms' profitability.
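A sketch of the out-of-sample step only: because Voronoi tile membership coincides with nearest-seed membership, a k-d tree built on the calibration points assigns each new location to a tile, whose row of the fitted spatial weights matrix can then be reused. The points, the distance cutoff and the row-standardized W are all hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)

# Calibration geo-points (the Voronoi seeds) used to fit the spatial model,
# and a hypothetical row-standardized spatial weights matrix W for them.
seeds = rng.uniform(0, 100, size=(500, 2))
d = np.linalg.norm(seeds[:, None, :] - seeds[None, :, :], axis=-1)
W = (d < 10) & (d > 0)                       # neighbours within a 10-unit radius
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1)

# Assigning a new point to its Voronoi tile = querying the nearest seed.
tree = cKDTree(seeds)
new_points = rng.uniform(0, 100, size=(5, 2))
_, tile = tree.query(new_points)

# Each new location inherits the weight row of its tile's seed, so the
# calibrated spatial model can be evaluated there without refitting.
W_new = W[tile]
print("assigned tiles:", tile)
```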