131.
Multiple imputation (MI) is now a reference solution for handling missing data. The default method for MI is the Multivariate Normal Imputation (MNI) algorithm, which is based on the multivariate normal distribution. In the presence of longitudinal ordinal missing data, where the Gaussian assumption is no longer valid, application of the MNI method is questionable. This simulation study compares the performance of MNI and an ordinal imputation regression model for incomplete longitudinal ordinal data, in situations covering various numbers of categories of the ordinal outcome, time occasions, sample sizes, rates of missingness, and both well-balanced and skewed data.
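The pooling step common to all MI variants can be illustrated with a toy run. The sketch below uses plain NumPy and a simplified stochastic-regression imputation (not the MNI algorithm, and without a Bayesian draw of the regression parameters, so not fully "proper" MI); the data-generating model and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous outcome with ~30% of values missing completely at random.
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
miss = rng.random(n) < 0.3

def impute_once(rng):
    """Regress y on x over the observed cases, then draw the missing y
    from the fitted conditional normal distribution."""
    Xo = np.column_stack([np.ones((~miss).sum()), x[~miss]])
    b, *_ = np.linalg.lstsq(Xo, y[~miss], rcond=None)
    sigma = (y[~miss] - Xo @ b).std(ddof=2)
    out = y.copy()
    out[miss] = b[0] + b[1] * x[miss] + rng.normal(scale=sigma, size=miss.sum())
    return out

# Create M completed data sets, then pool the mean of y with Rubin's rules.
M = 20
ests, wvars = [], []
for _ in range(M):
    yc = impute_once(rng)
    ests.append(yc.mean())
    wvars.append(yc.var(ddof=1) / n)

qbar = float(np.mean(ests))            # pooled point estimate
ubar = float(np.mean(wvars))           # within-imputation variance
b_var = float(np.var(ests, ddof=1))    # between-imputation variance
total_var = ubar + (1 + 1 / M) * b_var # Rubin's total variance
```

The between-imputation term is what distinguishes MI from single imputation: it propagates the uncertainty due to the missing values into the final variance estimate.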
132.
A controlled donor imputation system for a one-number census
Summary. The 2001 UK census was a one-number census. An integral part of such a process has been the creation of a transparent census database that has been adjusted for the underenumeration in the 2001 census. The methodology for creating this database is based on a controlled donor imputation system that imputes individuals and households estimated to have been missed in the census. This paper describes this methodology and provides results from a statistical assessment of its performance using data that realistically simulate the census process.
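The mechanics of a controlled donor system can be sketched in a few lines: donors are drawn from the recipient's imputation class, and a cap limits how often any one donor record is reused. Everything below (class labels, the income variable, the `MAX_USES` cap) is invented; the actual census methodology is far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy household file: 'cls' is the imputation class and 'inc' has gaps.
n = 60
cls = rng.integers(0, 3, size=n)
inc = rng.normal(loc=30.0 + 10.0 * cls, scale=2.0)
missing = rng.random(n) < 0.2
inc[missing] = np.nan

MAX_USES = 2                      # "controlled": cap reuse of any one donor
uses = np.zeros(n, dtype=int)
imputed = inc.copy()
for i in np.flatnonzero(missing):
    # Donor pool: responding records in the same class, not yet overused.
    pool = np.flatnonzero(~missing & (cls == cls[i]) & (uses < MAX_USES))
    if pool.size == 0:            # relax the cap if the pool is exhausted
        pool = np.flatnonzero(~missing & (cls == cls[i]))
    donor = rng.choice(pool)
    uses[donor] += 1
    imputed[i] = inc[donor]
```

Capping donor reuse prevents a single record from dominating the imputed distribution, which matters when whole households are imputed rather than single items.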
133.
For fractional factorial experiments with non-normal responses, where the screening experiment involves a large number of factors, a Bayesian variable and model selection method based on generalized linear models (GLMs) is proposed. First, empirical Bayes priors are chosen to address the uncertainty in the model parameters. Second, a binary indicator is attached to each variable in the linear predictor of the GLM, and a conversion relationship is established between the variable indicators and a model indicator. Next, the posterior probabilities of the variable indicators and the model indicator are used to identify significant factors and to select the best model. Finally, a real industrial case illustrates that the method effectively identifies the significant factors in fractional factorial experiments with non-normal responses.
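In the two-level setting, the conversion between the vector of binary variable indicators (gamma_1, ..., gamma_p) and a single model indicator is just binary encoding, and marginal inclusion probabilities fall out of the model posterior. A minimal sketch (the three-factor posterior probabilities below are made up for illustration):

```python
import numpy as np
from itertools import product

def gammas_to_model(gammas):
    """Encode inclusion indicators as an integer model index."""
    return int(sum(g << j for j, g in enumerate(gammas)))

def model_to_gammas(m, p):
    """Decode a model index back into p inclusion indicators."""
    return [(m >> j) & 1 for j in range(p)]

# Toy posterior over the 2^3 models of a 3-factor screening problem.
models = list(product([0, 1], repeat=3))
post = np.array([0.05, 0.1, 0.05, 0.3, 0.05, 0.05, 0.1, 0.3])

# Marginal posterior inclusion probability of each variable.
incl = np.zeros(3)
for pr, g in zip(post, models):
    incl += pr * np.array(g)
```

Factors with high marginal inclusion probability are flagged as significant, while the model index with the largest posterior probability gives the best model.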
134.
With the response variable missing at random, this paper studies robust estimation of a semiparametric model based on quantile regression. First, using a B-spline basis approximation, estimation of the model's nonparametric function is reduced to estimation of the spline coefficient vector. Second, under random missingness of the response, a new imputation method is proposed that multiply imputes the missing responses. Third, based on the imputed data sets, a new quantile objective function is constructed, yielding robust estimates of the nonparametric function and of the parameter vector. Finally, an efficient algorithm for computing the multiple-imputation estimators is given. Simulation studies verify the effectiveness and robustness of the proposed method.
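The building block of any such quantile objective is the check (pinball) loss. The sketch below fits only a simple linear median regression by direct minimisation; the B-spline approximation and the multiple-imputation step are omitted, and the data-generating model is invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy linear model with heavy-tailed noise, where a quantile objective is
# more robust than least squares.
n, tau = 300, 0.5
x = rng.uniform(-1, 1, size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])

def check_loss(beta):
    """Pinball/check loss: tau*u for u >= 0 and (tau - 1)*u for u < 0."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

fit = minimize(check_loss, x0=np.zeros(2), method="Nelder-Mead")
beta_hat = fit.x
```

Replacing the columns of `X` with B-spline basis evaluations turns this into the semiparametric version: the same loss, minimised over spline coefficients.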
135.
We present a systematic approach to the practical and comprehensive handling of missing data, motivated by our experiences of analyzing longitudinal survey data. We consider the Health 2000 and 2011 Surveys (BRIF8901), where increased non-response and non-participation from 2000 to 2011 was a major issue. The model assumptions involved in the complex sampling design, repeated measurements design, non-participation mechanisms and associations are presented graphically using methodology previously defined as a causal model with design, i.e. a functional causal model extended with the study design. This tool forces the statistician to make the study design and the missing-data mechanism explicit. Using the systematic approach, the sampling probabilities and the participation probabilities can be considered separately. This is beneficial when the performance of missing-data methods is to be compared. Using data from the Health 2000 and 2011 Surveys and from national registries, it was found that multiple imputation removed almost all differences between full-sample and estimated prevalences. Inverse probability weighting removed more than half of the differences and the doubly robust method 60%. These findings are encouraging, since decreasing participation rates are a major problem in population surveys worldwide.
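The inverse-probability-weighting idea can be shown on a toy prevalence estimate where the participation probabilities are known by construction (in the survey setting they must themselves be estimated; the age-driven mechanism and all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy survey: participation falls with age while the outcome rises with it,
# so the naive respondent mean is biased downward; weighting respondents by
# the inverse of their participation probability corrects this.
n = 5000
age = rng.uniform(20, 80, size=n)
y = (age > 50).astype(float)                      # e.g. prevalence of a condition
p_part = np.clip(1.2 - 0.01 * age, 0.05, 0.95)    # participation probability
r = rng.random(n) < p_part                        # observed participation

naive = y[r].mean()
ipw = np.sum(y[r] / p_part[r]) / np.sum(1.0 / p_part[r])  # Hajek-type IPW mean
full = y.mean()                                   # full-sample benchmark
```

The same weights could be combined with an outcome regression to form a doubly robust estimator, the third method compared in the paper.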
136.
Item non-response in surveys occurs when some, but not all, variables are missing. Unadjusted estimators tend to exhibit some bias, called the non-response bias, if the respondents differ from the non-respondents with respect to the study variables. In this paper, we focus on item non-response, which is usually treated by some form of single imputation. We examine the properties of doubly robust imputation procedures, which are those that lead to an estimator that remains consistent if either the outcome variable or the non-response mechanism is adequately modelled. We establish the double robustness property of the imputed estimator of the finite population distribution function under random hot-deck imputation within classes. We also discuss the links between our approach and that of Chambers and Dunstan. The results of a simulation study support our findings.
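A minimal version of random hot-deck within classes, checked on the empirical distribution function, looks as follows. The class structure, the gamma outcomes and the 70% response rate are invented, and the paper's finite-population and double-robustness machinery is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setting: the outcome distribution differs across imputation classes
# and response is random *within* each class, so class-wise random hot-deck
# keeps the imputed distribution function close to the full-data one.
n = 2000
cls = rng.integers(0, 4, size=n)
y = rng.gamma(shape=2.0 + cls, scale=1.0)
resp = rng.random(n) < 0.7

y_imp = y.copy()
for c in range(4):
    donors = np.flatnonzero(resp & (cls == c))
    recips = np.flatnonzero(~resp & (cls == c))
    y_imp[recips] = y[rng.choice(donors, size=recips.size)]  # with replacement

def edf(values, t):
    """Empirical distribution function evaluated at t."""
    return float(np.mean(values <= t))

t0 = float(np.median(y))
```

Because donors are drawn from the correct class-conditional distribution, the imputed estimator of the distribution function stays consistent even though individual imputed values are "wrong".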
137.
In multiple imputation (MI), the resulting estimates are consistent if the imputation model is correct. To specify the imputation model, it is recommended to combine two sets of variables: those that are related to the incomplete variable and those that are related to the missingness mechanism. Several possibilities exist, but it is not clear how they perform in practice. We present the method that simply groups all variables together into the imputation model, along with four other methods based on propensity scores, two of which are new and have not been used in the context of MI. The performance of the methods is investigated by a simulation study under different missing-at-random mechanisms for different types of variables. We conclude that all methods, except for one method based on the propensity scores, perform well. It turns out that as long as the relevant variables are taken into the imputation model, the form of the imputation model has only a minor effect on the quality of the imputations.
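One of the compared strategies, adding an estimated response propensity to the imputation model, can be sketched with a hand-rolled logistic fit. The MAR mechanism, the variable roles and all coefficients below are invented, and a deterministic regression imputation stands in for proper MI.

```python
import numpy as np

rng = np.random.default_rng(5)

def logit_fit(X, r, iters=25):
    """Plain Newton (IRLS) logistic regression of the response indicator."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (r - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

n = 1000
x1 = rng.normal(size=n)                      # predicts the incomplete y
x2 = rng.normal(size=n)                      # drives the missingness only
y = 2.0 * x1 + rng.normal(size=n)
r = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + x2)))  # MAR response

X = np.column_stack([np.ones(n), x1, x2])
ps = 1.0 / (1.0 + np.exp(-X @ logit_fit(X, r.astype(float))))

# Imputation model combining an outcome predictor (x1) with the estimated
# response propensity -- one of the variable-selection strategies compared.
Z = np.column_stack([np.ones(r.sum()), x1[r], ps[r]])
coef, *_ = np.linalg.lstsq(Z, y[r], rcond=None)
y_imp = y.copy()
y_imp[~r] = np.column_stack([np.ones((~r).sum()), x1[~r], ps[~r]]) @ coef
```

The propensity score condenses all missingness-related covariates into a single column, which is what makes it attractive when the imputation model would otherwise have to carry many variables.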
138.
A cure rate model is a survival model incorporating the cure rate, under the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer, and the cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient: it contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model, and examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, even though the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using the breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.
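The ordinary AUC the abstract refers to is the Mann-Whitney statistic; the imputation-based extension for censored cure status is not reproduced here. A toy fully observed version (the cure prevalence and score distributions are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

def auc(score, label):
    """Ordinary AUC as the Mann-Whitney probability that a random positive
    (uncured) case scores above a random negative (cured) case; ties 1/2."""
    pos, neg = score[label == 1], score[label == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy fully observed cure status: ~40% uncured, with prognostic scores
# shifted upward by one standard deviation for the uncured group.
n = 500
label = (rng.random(n) < 0.4).astype(int)
score = rng.normal(loc=label.astype(float), scale=1.0)
```

Censoring breaks this computation because the 0/1 label is unknown for censored patients; the paper's imputation-based AUCs fill in those labels before applying the same statistic.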
139.
The standard approach to constructing nonparametric tolerance intervals is to use the appropriate order statistics, provided a minimum sample size requirement is met. However, it is well known that this traditional approach is conservative with respect to the nominal level. One way to improve the coverage probabilities is to use interpolation; however, neither the extension to two-sided tolerance intervals nor the case where the minimum sample size requirement is not met has been studied. In this paper, an approach using linear interpolation is proposed for improving coverage probabilities in the two-sided setting. When the minimum sample size requirement is not met, coverage probabilities are shown to improve by using linear extrapolation. A discussion of the effect on coverage probabilities and expected lengths when transforming the data is also presented. The applicability of this approach is demonstrated using three real data sets.
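The classical order-statistic construction underlying this can be stated in one formula: the coverage of the interval (X_(r), X_(s)) is distribution-free, with a Beta(s - r, n - s + r + 1) law for any continuous parent. A SciPy sketch of the resulting confidence computation (the interpolation/extrapolation refinement described in the abstract is not included):

```python
from scipy.stats import beta

def two_sided_confidence(n, r, s, p):
    """Confidence that the interval (X_(r), X_(s)) covers at least a
    proportion p of the population: the coverage of an order-statistic
    interval is Beta(s - r, n - s + r + 1), regardless of the parent."""
    return float(beta.sf(p, s - r, n - s + r + 1))

# Classical two-sided interval from the sample extremes, n = 100, p = 0.95.
conf_extremes = two_sided_confidence(100, 1, 100, 0.95)
```

Because the achieved confidence can only jump as r and s move through integer values, it typically overshoots the nominal level, which is exactly the conservatism that interpolating between adjacent order statistics addresses.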
140.
Time trend resistant fractional factorial experiments have often been based on regular fractionated designs, for which several algorithms exist that sequence the runs in the minimum number of factor-level changes (i.e. minimum cost) such that main effects and/or two-factor interactions are orthogonal to, and free from aliasing with, a time trend that may be present in the sequentially generated responses. On the other hand, only one algorithm exists for sequencing runs of the more economical non-regular fractional factorial experiments, namely that of Angelopoulos et al. [1: P. Angelopoulos, H. Evangelaras, and C. Koukouvinos, Run orders for efficient two-level experimental plans with minimum factor level changes robust to time trends, J. Statist. Plann. Inference 139 (2009), pp. 3718-3724]. This research studies sequential factorial experimentation under non-regular fractionated designs and constructs a catalog of 8 minimum-cost linear-trend-free 12-run designs (of resolution III) in 4 up to 11 two-level factors by applying the interactions-main effects assignment technique of Cheng and Jacroux [3: C.S. Cheng and M. Jacroux, The construction of trend-free run orders of two-level factorial designs, J. Amer. Statist. Assoc. 83 (1988), pp. 1152-1158] to the standard 12-run Plackett-Burman design, where factor-level changes between runs are minimal and main effects are orthogonal to the linear time trend. These eight 12-run designs are non-orthogonal but more economical than the linear-trend-free designs of Angelopoulos et al. [1], in that they accommodate a larger number of two-level factors in a smaller number of experimental runs. These non-regular designs are also more economical than many regular trend-free designs. The following are provided for each proposed systematic design:

(1) The run order in the minimum number of factor-level changes.

(2) The total number of factor-level changes between the 12 runs (i.e. the cost).

(3) The closed-form least-squares contrast estimates for all main effects, as well as their closed-form variance-covariance structure.

In addition, combined designs of each of these 8 designs, generated by either complete or partial foldover, allow for the estimation of two-factor interactions involving one of the factors (i.e. the most influential one).
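The two quantities reported for each cataloged design, the cost and the linear-trend orthogonality of the main effects, are easy to compute for any run order. A sketch using the standard 12-run Plackett-Burman design in its textbook cyclic order (this is not one of the paper's eight minimum-cost trend-free orders; it only illustrates the metrics):

```python
import numpy as np

# Standard 12-run Plackett-Burman design: the 11 cyclic shifts of the usual
# generator row plus a closing row of -1s.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
pb12 = np.vstack([np.roll(gen, k) for k in range(11)]
                 + [-np.ones(11, dtype=int)])

def cost(design):
    """Total number of factor-level changes between consecutive runs."""
    return int((np.diff(design, axis=0) != 0).sum())

def trend_dot(design):
    """Inner product of each main-effect column with the centred linear
    time trend; a zero entry means that column is linear-trend-free."""
    t = np.arange(design.shape[0], dtype=float)
    return design.T @ (t - t.mean())
```

Re-ordering the rows of `pb12` changes both `cost` and `trend_dot`, which is precisely the search space the run-order algorithms optimise over.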