Search results: 282 articles in total.
221.
Logistic regression plays an important role in many fields. In practice, missing covariates are often encountered in applied work, particularly in the biomedical sciences. Ibrahim (1990) proposed a method for handling missing covariates in the generalized linear model (GLM) setting. It is well known that logistic regression estimates based on small or medium-sized samples with missing data are biased. Assuming the data are missing at random, in this paper we reduce this bias in two ways: first, we derive a closed-form bias expression following Cox and Snell (1968); second, we apply a likelihood-based modification similar to Firth (1993). We show analytically that the Firth-type likelihood modification of Ibrahim's method yields second-order bias reduction. The proposed methods are simple to apply on top of an existing method and require no extra analytical work beyond a small change to the objective function. We carry out extensive simulation studies comparing the methods, and the simulation results are further supported by a real-world data set.
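The Firth-type correction mentioned above can be sketched for the fully observed case. Below is a minimal illustration (not the authors' missing-covariate method): Newton–Raphson on Firth's adjusted score, in which the ordinary score is shifted using the leverages of the weighted hat matrix. The toy data are hypothetical and deliberately perfectly separated, a case where the ordinary MLE diverges but the penalized estimate stays finite.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior bias reduction).
    X should include an intercept column; returns the coefficient vector."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        W = pi * (1.0 - pi)                       # IRLS weights
        XtWX_inv = np.linalg.inv(X.T @ (X * W[:, None]))  # inverse Fisher information
        A = X * np.sqrt(W)[:, None]
        h = np.einsum('ij,jk,ik->i', A, XtWX_inv, A)      # hat-matrix leverages
        # Firth-adjusted score: usual score plus the leverage correction
        U = X.T @ (y - pi + h * (0.5 - pi))
        step = XtWX_inv @ U
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Hypothetical, perfectly separated data: the ordinary MLE does not exist.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
X = np.column_stack([np.ones(4), x])
beta_hat = firth_logistic(X, y)   # finite despite the separation
```

By symmetry of this toy data the intercept estimate is zero and the slope settles at a finite positive value, whereas unpenalized Newton–Raphson would push the slope to infinity.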
222.
Summary. The paper discusses the estimation of an unknown population size n. Suppose that an identification mechanism can identify n_obs cases. The Horvitz–Thompson estimator of n adjusts this number by the inverse of 1 − p0, where p0 is the probability of not identifying a case. When repeated counts of identifying the same case are available, the counting distribution can be used to estimate p0 and solve the problem. Frequently, the Poisson distribution is used and, more recently, mixtures of Poisson distributions. Maximum likelihood estimation is discussed by means of the EM algorithm. For truncated Poisson mixtures, a nested EM algorithm is suggested and illustrated for several application cases. The algorithmic principles are used to show an inequality, stating that the Horvitz–Thompson estimator of n under the mixed Poisson model is always at least as large as the estimator under a homogeneous Poisson model. In turn, if the homogeneous Poisson model is misspecified, it will, potentially strongly, underestimate the true population size. Examples from various areas illustrate this finding.
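The homogeneous-Poisson special case of this Horvitz–Thompson adjustment can be sketched directly: fit a zero-truncated Poisson to the repeated-identification counts by a fixed-point iteration on the score equation, estimate p0 = exp(−λ̂), and inflate n_obs. The counts below are hypothetical, and none of the paper's mixture or nested-EM machinery is reproduced here.

```python
import numpy as np

def horvitz_thompson_ztp(counts, n_iter=200, tol=1e-10):
    """Population-size estimate from repeated-identification counts (all >= 1),
    assuming a homogeneous Poisson counting distribution truncated at zero.
    The ZTP MLE solves lambda / (1 - exp(-lambda)) = mean(counts), which
    requires mean(counts) > 1."""
    counts = np.asarray(counts, dtype=float)
    xbar = counts.mean()
    lam = xbar  # starting value
    for _ in range(n_iter):
        lam_new = xbar * (1.0 - np.exp(-lam))  # fixed point of the ZTP score
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    p0 = np.exp(-lam)                    # estimated P(case never identified)
    n_hat = len(counts) / (1.0 - p0)     # Horvitz–Thompson adjustment
    return lam, n_hat

# Hypothetical counts: 100 observed cases identified 1-4 times each.
counts = [1] * 60 + [2] * 25 + [3] * 10 + [4] * 5
lam, n_hat = horvitz_thompson_ztp(counts)  # n_hat exceeds the 100 observed cases
```

With these counts the mean is 1.6, λ̂ converges to about 1.03, and the estimated total population is roughly 156, illustrating how the unseen zero-count cases are added back in.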
223.
Analysis of familial aggregation in the presence of varying family sizes
Summary. Family studies are frequently undertaken as the first step in the search for genetic and/or environmental determinants of disease. Significant familial aggregation of disease is suggestive of a genetic aetiology for the disease and may lead to more focused genetic analysis. Of course, it may also be due to shared environmental factors. Many methods have been proposed in the literature for the analysis of family studies. One model that is appealing for the simplicity of its computation and the conditional interpretation of its parameters is the quadratic exponential model. However, a limiting factor in its application is that it is not reproducible, meaning that all families must be of the same size. To increase the applicability of this model, we propose a hybrid approach in which analysis is based on the assumption of the quadratic exponential model for a selected family size and combines a missing-data approach for smaller families with a marginalization approach for larger families. We apply our approach to a family study of colorectal cancer that was sponsored by the Cancer Genetics Network of the National Institutes of Health. We investigate the properties of our approach in simulation studies. Our approach applies more generally to clustered binary data.
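For a single family of fixed size, the quadratic exponential model's likelihood can be evaluated exactly by enumerating all binary outcome vectors. A minimal sketch of the exchangeable version follows; the two parameter names (a main-effect term and a pairwise-association term) are illustrative, not the paper's notation.

```python
import itertools
import numpy as np

def quad_exp_loglik(y, theta, gamma):
    """Exact log-likelihood of one family's binary outcome vector y under the
    exchangeable quadratic exponential model
        P(y) proportional to exp(theta * sum(y) + gamma * sum_{i<j} y_i y_j).
    For binary y, sum_{i<j} y_i y_j = s*(s-1)/2 where s = sum(y)."""
    y = np.asarray(y)
    n = len(y)

    def potential(v):
        s = v.sum()
        return theta * s + gamma * (s * (s - 1) / 2.0)

    # Normalizing constant: brute-force sum over all 2^n binary vectors
    # (feasible only for modest family sizes, which is the point of the model).
    log_terms = [potential(np.array(v)) for v in itertools.product([0, 1], repeat=n)]
    logZ = np.log(np.sum(np.exp(log_terms)))
    return potential(y) - logZ
```

When gamma = 0 the model reduces to independent Bernoulli outcomes with logit theta, and a positive gamma raises the probability of concordant families, which is the familial-aggregation effect the abstract refers to.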
224.
We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed-form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not "testable" from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have "passed" the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.
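For a single binary missing covariate, the closed-form E-step reduces to a weight over the covariate's two possible values, with the non-ignorable missingness model contributing one of the factors. A hedged sketch under assumed logistic forms follows; all parameter names are illustrative and do not reproduce the paper's sequence-of-conditionals notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def e_step_weights(y, alpha, beta, phi):
    """E-step weights for one subject whose binary covariate x is missing.
    Assumed models (illustrative):
        outcome:     P(y = 1 | x)       = sigmoid(beta0 + beta1 * x)
        covariate:   P(x = 1)           = alpha
        missingness: P(missing | x, y)  = sigmoid(phi0 + phi1 * x + phi2 * y)
    Returns (w0, w1), the posterior probabilities of x = 0 and x = 1 given
    the observed y and the fact that x is missing."""
    beta0, beta1 = beta
    phi0, phi1, phi2 = phi
    terms = []
    for x in (0, 1):
        p_y = sigmoid(beta0 + beta1 * x)
        p_y = p_y if y == 1 else 1.0 - p_y
        p_x = alpha if x == 1 else 1.0 - alpha
        p_m = sigmoid(phi0 + phi1 * x + phi2 * y)  # non-ignorable factor
        terms.append(p_y * p_x * p_m)
    total = terms[0] + terms[1]
    return terms[0] / total, terms[1] / total
```

Note that when phi1 = 0 the missingness factor no longer depends on x and cancels from the weights, which is exactly the ignorable (MAR-given-y) special case.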
225.
This article extends the standard regression discontinuity (RD) design to allow for sample selection or missing outcomes. We deal with both treatment endogeneity and sample selection. Identification in this article does not require any exclusion restrictions in the selection equation, nor does it require specifying any selection mechanism. The results can therefore be applied broadly, regardless of how sample selection is incurred. Identification instead relies on smoothness conditions. Smoothness conditions are empirically plausible, have readily testable implications, and are typically assumed even in the standard RD design. We first provide identification of the "extensive margin" and "intensive margin" effects. Then, based on these identification results and principal stratification, sharp bounds are constructed for the treatment effects in the group of individuals that may be of particular policy interest, that is, the always-participating compliers. These results are applied to evaluate the impacts of academic probation on college completion and final GPAs. Our analysis reveals striking gender differences at the extensive versus the intensive margin in response to this negative signal on performance.
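The standard sharp-RD ingredient that the smoothness conditions feed into is a local fit on each side of the cutoff, with the effect read off as the jump in fitted values at the cutoff. A minimal sketch using a rectangular kernel and local linear regression follows; the bandwidth and data are illustrative, and none of the paper's selection-correction machinery is included.

```python
import numpy as np

def rd_sharp_estimate(x, y, cutoff=0.0, h=1.0):
    """Sharp RD effect: local linear fit (rectangular kernel) on each side of
    the cutoff within bandwidth h; the effect is the difference of the two
    intercepts, i.e. the jump in the fitted values at the cutoff."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    intercepts = []
    for side in (x < cutoff, x >= cutoff):
        mask = side & (np.abs(x - cutoff) <= h)
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        intercepts.append(coef[0])  # fitted value at the cutoff
    return intercepts[1] - intercepts[0]

# Hypothetical noiseless data with a jump of exactly 2 at the cutoff.
x = np.linspace(-1.0, 1.0, 201)
y = 1.0 * x + 2.0 * (x >= 0)
tau = rd_sharp_estimate(x, y, cutoff=0.0, h=1.0)
```

Because the toy data are exactly linear on each side, the estimator recovers the jump of 2 exactly; with real data, bandwidth choice and the sample-selection issues the abstract addresses become the hard part.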
226.
We apply some log-linear modelling methods, which have been proposed for treating non-ignorable non-response, to some data on voting intention from the British General Election Survey. We find that, although some non-ignorable non-response models fit the data very well, they may generate implausible point estimates and predictions. Some explanation is provided for the extreme behaviour of the maximum likelihood estimates for the most parsimonious model. We conclude that point estimates for such models must be treated with great caution. To allow for the uncertainty about the non-response mechanism we explore the use of profile likelihood inference and find the likelihood surfaces to be very flat and the interval estimates to be very wide. To reduce the width of these intervals we propose constraining confidence regions to values where the parameters governing the non-response mechanism are plausible and study the effect of such constraints on inference. We find that the widths of these intervals are reduced but remain wide.
227.
The aim of this study is to determine the effect of informative priors for variables with missing values and to compare Bayesian Cox regression with standard Cox regression. First, simulated data sets with different sample sizes and different missing rates were generated, and each data set was analysed by Cox regression and by Bayesian Cox regression with informative priors. Second, a lung cancer data set was analysed as a real-data example. The results show that using informative priors for variables with missing values solved the missing-data problem.
228.
229.
Young people who go missing face significant risks and vulnerabilities, yet there has been limited research looking at their longer-term criminal-justice-related outcomes. The aim of this study was to explore the criminal justice and mental health-related trajectories of a random sample of 215 young people reported missing for the first time in 2005, followed up for a decade. Two thirds (64.7%) of the sample had accumulated an offence history and 68.4% a victimisation history. More than a third were reported missing multiple times; these youth were characteristically different from single-episode missing persons with respect to police contacts and mental health-related vulnerability. The results highlight a significant level of mental health concern among a population that police are not adequately equipped to respond to. Further research is needed to better understand motivations for going missing and the extent of the risks and vulnerabilities these young people face while missing and upon return.
230.
There has been growing interest in partial identification of probability distributions and parameters. This paper considers statistical inference on parameters that are partially identified because data are incompletely observed, due to nonresponse or censoring, for instance. A method based on likelihood ratios is proposed for constructing confidence sets for partially identified parameters. The method can be used to estimate a proportion or a mean in the presence of missing data, without assuming missing-at-random or modeling the missing-data mechanism. It can also be used to estimate a survival probability with censored data without assuming independent censoring or modeling the censoring mechanism. A version of the verification bias problem is studied as well.
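Without assuming anything about the missing-data mechanism, a proportion is only partially identified; the worst-case identification region is simple to compute and gives a feel for the set that confidence procedures such as the paper's likelihood-ratio method must cover. A minimal sketch with hypothetical data:

```python
import numpy as np

def proportion_bounds(y_obs, n_missing):
    """Worst-case identification bounds on a population proportion when
    n_missing binary outcomes are unobserved and nothing is assumed about
    the missing-data mechanism."""
    y_obs = np.asarray(y_obs, float)
    n = len(y_obs) + n_missing
    s = y_obs.sum()
    lower = s / n                  # every missing outcome equals 0
    upper = (s + n_missing) / n    # every missing outcome equals 1
    return lower, upper

# Hypothetical sample: 5 observed outcomes, 5 missing.
lo, hi = proportion_bounds([1, 1, 0, 0, 1], 5)
```

The width of the interval is exactly the missing fraction n_missing / n, so the bounds shrink to a point only as the missing fraction vanishes; any credible confidence set must be at least this wide.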