211.
Multiple imputation has emerged as a widely used model-based approach for dealing with incomplete data in many application areas. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing-data settings that include a mix of continuous and discrete variables, correct specification of the imputation model can be a daunting task owing to the lack of flexible models for the joint distribution of variables of different natures. This complication, along with the accessibility of software packages that carry out multiple imputation under the assumption of joint multivariate normality, appears to encourage applied researchers to pragmatically treat discrete variables as continuous for imputation purposes and subsequently round the imputed values to the nearest observed category. In this article, I introduce a distance-based rounding approach for ordinal variables in the presence of continuous ones. The first step of the proposed rounding process creates indicator variables that correspond to the ordinal levels; all variables are then jointly imputed under the assumption of multivariate normality. The imputed values are finally converted to the ordinal scale based on their Euclidean distances to a set of indicators, with minimal distance corresponding to the closest match. Using simulated data sets, I compare the performance of this technique with that of crude rounding via commonly accepted accuracy and precision measures.
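The final conversion step of entry 211 can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the one-hot indicator coding and the name `distance_round` are assumptions made for the example.

```python
import math

# Map each ordinal level to its indicator vector (one-hot here, as an
# illustrative choice for a three-level ordinal variable).
LEVELS = {1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0), 3: (0.0, 0.0, 1.0)}

def distance_round(imputed_row, level_indicators=LEVELS):
    """Return the ordinal level whose indicator vector has the smallest
    Euclidean distance to the continuously imputed indicator values."""
    best_level, best_dist = None, math.inf
    for level, target in level_indicators.items():
        d = math.dist(imputed_row, target)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_level, best_dist = level, d
    return best_level
```

For example, an imputed indicator row of `(0.8, 0.3, -0.1)` lies closest to the indicator for level 1 and is therefore mapped to that category, whereas crude rounding would treat each imputed value in isolation.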
212.
By employing all the observed information and an optimal augmentation term, we propose an augmented inverse probability weighted fractional imputation (AFI) method to handle covariates missing at random in quantile regression. Through simulation studies, we compare its estimation accuracy and efficiency, computational efficiency, and robustness with those of the existing complete-case analysis, inverse probability weighting, multiple imputation, and fractional imputation for quantile regression models with missing covariates. We also discuss the influence of the number of imputation replicates in AFI. Finally, we apply the methodology to part of the National Health and Nutrition Examination Survey data.
213.
In longitudinal studies, data are collected on the same set of units on more than one occasion. Mixed Poisson and continuous longitudinal data are very common in medical studies. In such studies, some intended measurements may be unavailable for various reasons, resulting in missing data. When the probability of missingness is related to the missing values themselves, the missingness mechanism is termed nonrandom. We develop the stochastic expectation-maximization (SEM) algorithm and the parametric fractional imputation (PFI) method to handle nonrandom missingness in mixed discrete and continuous longitudinal data, assuming different covariance structures for the continuous outcome. The proposed techniques are evaluated using simulation studies and applied to the Interstitial Cystitis Data Base (ICDB) data.
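As a rough illustration of the alternation that SEM performs, here is a toy version for a univariate normal sample. It assumes ignorable missingness and a single continuous outcome purely for brevity; the setting of entry 213 (nonrandom missingness, mixed Poisson/continuous longitudinal outcomes, structured covariances) is far richer, and the name `sem_normal` is mine.

```python
import random
import statistics

def sem_normal(observed, n_missing, n_iter=200, burn_in=100, seed=0):
    """Toy stochastic EM: alternate an S-step (simulate the missing values
    from the currently fitted model) with an M-step (complete-data maximum
    likelihood update), then average the post-burn-in parameter trace."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(observed), statistics.stdev(observed)
    trace = []
    for t in range(n_iter):
        # S-step: draw the missing values given the current parameters.
        imputed = [rng.gauss(mu, sigma) for _ in range(n_missing)]
        full = list(observed) + imputed
        # M-step: complete-data MLE of the mean and standard deviation.
        mu, sigma = statistics.mean(full), statistics.pstdev(full)
        if t >= burn_in:
            trace.append(mu)
    return sum(trace) / len(trace), sigma
```

Unlike deterministic EM, the parameter sequence does not converge to a point but fluctuates around its stationary value, which is why the post-burn-in trace is averaged.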
214.
There are two generations of Gibbs sampling methods for semiparametric models involving the Dirichlet process. The first generation suffered from a severe drawback: the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators.
215.
Adaptive Spatial Sampling of Contaminated Soil
Cox, Louis Anthony. Risk Analysis, 1999, 19(6): 1059-1069
Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach that uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations around which to search, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule, using the data collected so far to impute multiple probable values of the unknown soil concentration distributions during each simulation run. This optimized adaptive spatial sampling technique has been applied to real data, using as evaluation criteria the error probabilities for wrongly cleaning or wrongly failing to clean each location (compared with the action that would be taken under perfect information). It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost, and it identifies strategies that are undominated with respect to all of these criteria.
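The search loop of entry 215 can be sketched as follows. This is an illustrative skeleton under simplifying assumptions (an integer-grid 4-neighbourhood, one sample per location, a fixed number of rounds as the stopping rule); all names and default values are mine, not the paper's.

```python
import random

def neighbors(loc, locations):
    """4-neighbourhood on an integer grid (an illustrative choice)."""
    x, y = loc
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)} & set(locations)

def adaptive_sample(measure, locations, n_init=5, n_top=2, n_rounds=3,
                    threshold=1.0):
    """Initialize at randomly selected locations, then repeatedly search
    around the highest observed concentrations; finally flag every sampled
    location at or above the remediation action threshold.
    measure(loc) returns the observed soil concentration at loc."""
    observed = {loc: measure(loc) for loc in random.sample(locations, n_init)}
    for _ in range(n_rounds):  # stopping rule: fixed number of rounds
        hot = sorted(observed, key=observed.get, reverse=True)[:n_top]
        for loc in hot:
            for nb in neighbors(loc, locations):
                if nb not in observed:
                    observed[nb] = measure(nb)
    return {loc for loc, c in observed.items() if c >= threshold}
```

The arguments `n_init`, `n_top`, the stopping rule, and `threshold` correspond to the decision parameters the paper optimizes by simulating each rule's error probabilities and expected cost.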
216.
In the past, many clinical trials have withdrawn subjects from the study when they prematurely stopped their randomised treatment and have therefore only collected ‘on‐treatment’ data. Thus, analyses addressing a treatment policy estimand have been restricted to imputing missing data under assumptions drawn from these data only. Many confirmatory trials now continue to collect data from subjects even after they have prematurely discontinued study treatment, as this event is irrelevant for the purposes of a treatment policy estimand. However, despite efforts to keep subjects in a trial, some will still choose to withdraw. Recent publications on sensitivity analyses of recurrent event data have focused on the reference‐based imputation methods commonly applied to continuous outcomes, where imputation of the missing data for one treatment arm is based on the observed outcomes in another arm. The availability of data from subjects who have prematurely discontinued treatment but remained in the study now raises the opportunity to use these ‘off‐treatment’ data to impute the missing data for subjects who withdraw, potentially allowing more plausible assumptions for the missing post‐study‐withdrawal data than reference‐based approaches. In this paper, we introduce a new imputation method for recurrent event data in which the missing post‐study‐withdrawal event rate for a particular subject is assumed to reflect the rate observed from subjects during the off‐treatment period. The method is illustrated in a trial in chronic obstructive pulmonary disease (COPD) where the primary endpoint was the rate of exacerbations, analysed using a negative binomial model.
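The core idea of entry 216 (impute a withdrawn subject's unobserved events at the event rate observed during off-treatment follow-up) can be sketched as below. This is a simplified stand-in, not the paper's exact procedure: the paper analyses exacerbation rates with a negative binomial model, whereas this sketch pools a single off-treatment rate and draws from a Poisson to stay dependency-free; all names are illustrative.

```python
import math
import random

def impute_post_withdrawal_events(off_events, off_time, missing_time, rng=None):
    """Draw an imputed event count for a subject's missing follow-up from
    a Poisson whose rate is the pooled off-treatment event rate.

    off_events:   events observed from each subject while off treatment.
    off_time:     the corresponding off-treatment follow-up times.
    missing_time: length of the withdrawn subject's unobserved follow-up.
    """
    rng = rng or random.Random()
    rate = sum(off_events) / sum(off_time)   # pooled events per unit time
    mean = rate * missing_time               # expected events in the gap
    # Knuth's algorithm for sampling a Poisson(mean) variate.
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1
```

In a multiple-imputation workflow, this draw would be repeated across imputed data sets, the recurrent-event analysis rerun on each, and the results combined by Rubin's rules.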
217.
This study investigates the formation of endogamous and exogamous marriages among immigrants and their descendants in the United Kingdom. We apply event history analysis to data from the Understanding Society study and use multiple imputation to determine the type of marriage for individuals with missing information on the origin of their spouse. The analysis shows, first, significant differences among immigrants and their descendants in the likelihood of marrying within and outside their ethnic groups. While immigrants from European countries have relatively high exogamous marriage rates, South Asians exhibit a high likelihood of marrying a partner from their own ethnic group; Caribbean people hold an intermediate position. Second, the descendants of immigrants have lower endogamous and higher exogamous marriage rates than their parents; however, for some ethnic groups, particularly South Asians, the differences across generations are small, suggesting that changes in marriage patterns have been slower than expected.
218.
Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning, and conducting studies in order to minimize their impact. We also go beyond the coverage of these two documents: we provide a more detailed review of the basic concepts of missing data and frequently used terminology, give examples of typical missing data mechanisms, and discuss technical details and literature for several frequently used statistical methods and their associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to one clinical program at the protocol design, data analysis plan, and other stages of a clinical trial.
219.
The question of liability attribution is a core issue in tort law and has generated considerable theoretical debate. This article analyzes the existing theories of tort liability attribution principles in China and argues that they suffer from jurisprudential and conceptual flaws. The author presents his own views and proposes that China's tort law establish its attribution principles as a unification of fault liability and strict liability.
220.
Drawing on economic growth theory, this study extracts quality factors to construct a multiple-indicator multiple-cause (MIMIC) structural equation model of "technological progress, capital growth, and quality improvement", and uses a Monte Carlo-Bayesian imputation method to estimate the quality improvement rate and its contribution to economic growth over 1998-2017. The findings are threefold. First, innovation drives quality improvement; quality improvement increases product heterogeneity, thereby forming high-quality trade relationships and significantly boosting foreign trade, but it also increases energy consumption, so the energy structure urgently needs to be transformed. Second, the mean quality improvement rate is 5.11% and shows a step-wise upward trend; a favorable international trade environment facilitates factor mobility and factor-quality upgrading, yielding higher factor income, while national macro-control measures play an important role in safeguarding the quality of factor supply, raising allocation efficiency, and balancing the supply-demand structure. Third, the contribution rate of quality improvement to economic growth peaks at 16.82% with a mean of 8.84%, lower than the contributions of technological progress and capital growth, but the three interact and complement one another in their trends; quality improvement has become an important explicit expression of high-quality development.