163 results found.
111.
112.
Sensitivity analysis is an essential tool in the development of robust models for engineering, physical sciences, economics and policy-making, but typically requires running the model a large number of times in order to estimate sensitivity measures. While statistical emulators allow sensitivity analysis even on complex models, they only perform well with a moderately low number of model inputs: in higher dimensional problems they tend to require a restrictively high number of model runs unless the model is relatively linear. Therefore, an open question is how to tackle sensitivity problems in higher dimensionalities, at very low sample sizes. This article examines the relative performance of four sampling-based measures which can be used in such high-dimensional nonlinear problems. The measures tested are the Sobol' total sensitivity indices, the absolute mean of elementary effects, a derivative-based global sensitivity measure, and a modified derivative-based measure. Performance is assessed in a ‘screening’ context, by assessing the ability of each measure to identify influential and non-influential inputs on a wide variety of test functions at different dimensionalities. The results show that the best-performing measure in the screening context is dependent on the model or function, but derivative-based measures have a significant potential at low sample sizes that is currently not widely recognised.
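The "absolute mean of elementary effects" screening measure mentioned above can be sketched as follows. This is a minimal illustration only: the one-step radial design, step size, trajectory count, and toy test function are assumptions for demonstration, not the article's actual experimental setup.

```python
import numpy as np

def elementary_effects_mu_star(f, dim, n_traj=100, delta=0.1, seed=0):
    """Estimate mu* (mean absolute elementary effect) for each input of f.

    Uses a simple one-at-a-time perturbation from random base points in
    [0, 1]^dim; costs n_traj * (dim + 1) model evaluations.
    """
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, dim))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=dim)  # random base point
        fx = f(x)
        for i in range(dim):
            x_step = x.copy()
            x_step[i] += delta                        # perturb one input
            effects[t, i] = (f(x_step) - fx) / delta  # finite difference
    return np.abs(effects).mean(axis=0)               # mu* per input

# Toy model: only the first two of five inputs are influential.
f = lambda x: 2.0 * x[0] + 0.5 * x[1] ** 2
mu_star = elementary_effects_mu_star(f, dim=5)
```

Inputs with mu* near zero are screened out as non-influential; here the last three inputs get exactly zero while the first two do not.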
113.
Control charts distinguish between the random and assignable causes of variation in a process. A real process may be affected by many characteristics and several assignable causes. Therefore, the economic statistical design of a multiple control chart under a Burr XII shock model with multiple assignable causes is an appropriate candidate model. In this paper, we develop a cost model based on the optimization of the average cost per unit of time. The cost models under a single matched assignable cause and under multiple assignable causes were compared using the same cost and time parameters. In addition, a sensitivity analysis was presented in which changes in the loss-cost and design parameters were evaluated against changes in the cost, time, and Burr XII distribution parameters.
114.
115.
Academic support for female university students is an important way to improve the quality of their education. Starting from the different academic situations of female students and adopting a contingency perspective, the Hawthorne effect can be used to raise the grades of those with academic difficulties, the symbiosis effect can be applied to improve the overall competence of academically outstanding students, and participation-change theory can be used to improve the academic quality of female students more broadly. Academic support work can thereby be carried out in a targeted way, continually meeting female students' needs for academic development, growth, and success.
116.
The literature describing operations research in the community is somewhat of a puzzle. On the one hand, several authors have denigrated the use of traditional operations approaches in addressing community problems, yet several studies document successful applications. Arguing that the operations research mindset is itself a great strength, we will review several examples where operations research methods have been employed creatively to the benefit of the community and beyond.
117.
This paper introduces W-tests for assessing homogeneity in mixtures of discrete probability distributions. A W-test statistic depends on the data solely through parameter estimators and, if a penalized maximum likelihood estimation framework is used, has a tractable asymptotic distribution under the null hypothesis of homogeneity. The large-sample critical values are quantiles of a chi-square distribution multiplied by an estimable constant for which we provide an explicit formula. In particular, the estimation of large-sample critical values does not involve simulation experiments or random field theory. We demonstrate that W-tests are generally competitive with a benchmark test in terms of power to detect heterogeneity. Moreover, in many situations, the large-sample critical values can be used even with small to moderate sample sizes. The main implementation issue (selection of an underlying measure) is thoroughly addressed, and we explain why W-tests are well-suited to problems involving large and online data sets. Application of a W-test is illustrated with an epidemiological data set.
118.
Logistic functions are used in different applications, including biological growth studies and assay data analysis. Locally D-optimal designs for logistic models with three and four parameters are investigated. It is shown that these designs are minimally supported. Efficiencies are computed for equally spaced and uniform designs.
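For reference, the three- and four-parameter logistic functions discussed above can be sketched as follows. The parameterization (asymptotes, slope, midpoint) is a common convention in assay analysis and is an assumption here; the paper's exact parameterization may differ, and the parameter values are illustrative.

```python
import numpy as np

def logistic4(x, lower, upper, slope, midpoint):
    """Four-parameter logistic: lower/upper asymptotes, slope, midpoint."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (x - midpoint)))

def logistic3(x, upper, slope, midpoint):
    """Three-parameter logistic: lower asymptote fixed at zero."""
    return logistic4(x, 0.0, upper, slope, midpoint)

# The curve rises monotonically from the lower to the upper asymptote,
# passing through their midpoint at x = midpoint.
x = np.linspace(-5.0, 5.0, 11)
y = logistic3(x, upper=1.0, slope=1.0, midpoint=0.0)
```

A locally D-optimal design would place a small number of design points (equal to the number of parameters, when the design is minimally supported) chosen for an assumed parameter value.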
119.
This article develops test statistics for the homogeneity of the means of several treatment groups of count data in the presence of over-dispersion or under-dispersion when there is no likelihood available. The C(α) or score type tests based on models that are specified by only the first two moments of the counts are obtained using quasi-likelihood, extended quasi-likelihood, and double extended quasi-likelihood. Monte Carlo simulations are then used to study the behavior of these C(α) statistics relative to the C(α) statistic based on a parametric model, namely the negative binomial model, in terms of size, power, and robustness to departures from the data distribution as well as from dispersion homogeneity. These simulations demonstrate that the C(α) statistic based on the double extended quasi-likelihood holds the nominal size at the 5% level well in all data situations, shows some edge in power over the other statistics, and, in particular, performs much better than the commonly used statistic based on the quasi-likelihood. This C(α) statistic also shows robustness to moderate heterogeneity due to dispersion. Finally, applications to ecological, toxicological and biological data are given.
120.
Prediction intervals concern the results of a future sample from a previously sampled population. In this article, we develop procedures for prediction intervals that contain all of a fixed number of future observations for general balanced linear random models. Two methods based on the concept of a generalized pivotal quantity (GPQ) and one based on ANOVA estimators are presented. A simulation study using the balanced one-way random model is conducted to evaluate the proposed methods. One of the two GPQ-based methods and the ANOVA-based method are shown to be computationally more efficient while maintaining simulated coverage probabilities close to the nominal confidence level; hence, they are recommended for practical use. An example illustrates the applicability of the recommended methods.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号