151.
Three-mode analysis is a generalization of principal component analysis to three-mode data. While two-mode data consist of cases that are measured on several variables, three-mode data consist of cases that are measured on several variables at several occasions. As with any other statistical technique, the results of three-mode analysis may be influenced by missing data. Three-mode software packages generally use the expectation–maximization (EM) algorithm for dealing with missing data. However, there are situations in which the EM algorithm is expected to break down. Alternatively, multiple imputation may be used for dealing with missing data. In this study we investigated the influence of eight different multiple-imputation methods on the results of three-mode analysis, more specifically, a Tucker2 analysis, and compared the results with those of the EM algorithm. Results of the simulations show that multilevel imputation, with the mode with the most levels nested within cases and the mode with the least levels represented as variables, gives the best results for a Tucker2 analysis. Thus, this may be a good alternative to the EM algorithm for handling missing data in a Tucker2 analysis.
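The EM-style fill-in idea the abstract contrasts with multiple imputation can be sketched in a few lines for the simpler two-mode (PCA) case; the Tucker2 case alternates the same way over a three-way array. This is a minimal illustration, not the packages' actual implementation, and the rank and tolerance are arbitrary choices:

```python
import numpy as np

def em_pca_impute(X, rank=2, n_iter=200, tol=1e-9):
    """EM-style imputation for a component model: alternate between a
    low-rank SVD fit (M-step) and refilling the missing cells with the
    fitted values (E-step), starting from column means."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r fit
        new = np.where(miss, approx, X)                  # refill missing cells only
        if np.max(np.abs(new - filled)) < tol:
            return new
        filled = new
    return filled
```

On exactly low-rank data this recovers a deleted cell; on noisy data it converges to a self-consistent completion, which is the single "best guess" behaviour that multiple imputation replaces with repeated draws.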
152.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
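The tree-based chained-equations engine compared here can be sketched as follows. This is a bare-bones single-chain version with deterministic tree predictions; a proper multiple-imputation run would instead draw from each tree's leaf distributions and repeat the whole procedure to produce several completed datasets. The coding of missing values as -1 and the tree depth are assumptions of this sketch:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cart_chained_impute(X, n_iter=5, rng=None):
    """Chained equations for integer-coded categorical data: each column
    with missing entries (coded -1) is predicted from the other columns
    by a classification tree, cycling over columns for n_iter sweeps."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X).copy()
    miss = X < 0
    # initialize each missing cell with a random observed category
    for j in range(X.shape[1]):
        obs = X[~miss[:, j], j]
        X[miss[:, j], j] = rng.choice(obs, size=int(miss[:, j].sum()))
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            tree = DecisionTreeClassifier(max_depth=3)
            tree.fit(others[~miss[:, j]], X[~miss[:, j], j])
            X[miss[:, j], j] = tree.predict(others[miss[:, j]])
    return X
```

The GLM-based default works the same way with a multinomial regression in place of the tree; the abstract's finding is that the tree version tends to dominate it.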
153.
We consider estimation of a missing value for a stationary autoregressive process of order one with exponential innovations and compare two methods of estimation of the missing value, with respect to Pitman's measure of closeness (PMC).
154.
When analyzing data with missing values, a commonly used method is the inverse probability weighting (IPW) method, which reweights estimating equations with propensity scores. The popularity of the IPW method is due to its simplicity. However, it is often criticized for being inefficient because most of the information from the incomplete observations is not used. Alternatively, the regression method is known to be efficient but is nonrobust to misspecification of the regression function. In this article, we propose a novel way of optimally combining the propensity score function and the regression model. The resulting estimating equation enjoys the properties of robustness against misspecification of either the propensity score or the regression function, as well as being locally semiparametric efficient. We demonstrate analytically situations where our method leads to a more efficient estimator than some of its competitors. In a simulation study, we show the new method compares favorably with its competitors in finite samples. Supplementary materials for this article are available online.
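A classical example of combining the propensity score with a regression model is the augmented IPW (AIPW) estimator of a mean with outcomes missing at random; the abstract's proposal refines this idea for general estimating equations. In this toy sketch the propensity pi(x) and outcome model m(x) are taken as given rather than estimated:

```python
import numpy as np

def aipw_mean(y, r, pi, m):
    """Augmented IPW estimate of E[Y]: the IPW term uses only the
    observed outcomes (r = 1), and the augmentation term plugs the
    outcome model m(x) in for every unit, observed or not."""
    y = np.where(r == 1, y, 0.0)  # y enters only where observed
    return np.mean(r * y / pi - (r - pi) / pi * m)
```

The double robustness the abstract refers to is visible in the algebra: if m(x) equals E[Y|X] the per-unit contribution reduces to m(x) exactly, so the estimator is correct no matter how wrong pi is, and symmetrically if pi is correct the augmentation term has mean zero no matter how wrong m is.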
155.
In this paper, we develop Bayesian methodology and computational algorithms for variable subset selection in Cox proportional hazards models with missing covariate data. A new joint semi-conjugate prior for the piecewise exponential model is proposed in the presence of missing covariates and its properties are examined. The covariates are assumed to be missing at random (MAR). Under this new prior, a version of the Deviance Information Criterion (DIC) is proposed for Bayesian variable subset selection in the presence of missing covariates. Monte Carlo methods are developed for computing the DICs for all possible subset models in the model space. A Bone Marrow Transplant (BMT) dataset is used to illustrate the proposed methodology.
156.
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. Using data from the 1991 Slovenian plebiscite on independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty).
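The region-of-ignorance idea can be illustrated with a proportion estimated under non-response: a sensitivity parameter delta shifts the assumed "yes" rate among the non-respondents away from the observed rate, and sweeping delta over a plausible range traces out an interval of estimates rather than a single point. The grid of deltas here is an illustrative assumption, not part of the paper's method:

```python
import numpy as np

def ignorance_region(y_obs, n_total, deltas):
    """Sensitivity analysis for a proportion under possibly non-ignorable
    non-response: y_obs are the observed 0/1 answers, n_total is the full
    sample size, and each delta shifts the assumed rate among the
    non-respondents. Returns the interval (region of ignorance)."""
    y_obs = np.asarray(y_obs, dtype=float)
    n_obs = y_obs.size
    n_mis = n_total - n_obs
    p_obs = y_obs.mean()
    ests = []
    for d in deltas:
        p_mis = np.clip(p_obs + d, 0.0, 1.0)  # assumed rate for non-respondents
        ests.append((n_obs * p_obs + n_mis * p_mis) / n_total)
    return min(ests), max(ests)
```

Delta = 0 recovers the missing-at-random point estimate; attaching a confidence interval to each point in the region, and taking the union, gives the region of uncertainty the abstract describes.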
157.
The use of surrogate end points has become increasingly common in medical and biological research. This is primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each method. It also discusses the assumptions made by each method and assesses how likely these assumptions are to hold in practice.
158.
Estimating equations which are not necessarily likelihood-based score equations are becoming increasingly popular for estimating regression model parameters. This paper is concerned with estimation based on general estimating equations when true covariate data are missing for all the study subjects, but surrogate or mismeasured covariates are available instead. The method is motivated by the covariate measurement error problem in marginal or partly conditional regression of longitudinal data. We propose to base estimation on the expectation of the complete data estimating equation conditioned on available data. The regression parameters and other nuisance parameters are estimated simultaneously by solving the resulting estimating equations. The expected estimating equation (EEE) estimator is equal to the maximum likelihood estimator if the complete data scores are likelihood scores and conditioning is with respect to all the available data. A pseudo-EEE estimator, which requires less computation, is also investigated. Asymptotic distribution theory is derived. Small sample simulations are conducted when the error process is an order 1 autoregressive model. Regression calibration is extended to this setting and compared with the EEE approach. We demonstrate the methods on data from a longitudinal study of the relationship between childhood growth and adult obesity.
159.
The competing risks model is useful in settings in which individuals/units may die/fail for different reasons. The cause specific hazard rates are taken to be piecewise constant functions. A complication arises when some of the failures are masked within a group of possible causes. Traditionally, statistical inference is performed under the assumption that the failure causes act independently on each item. In this paper we propose an EM-based approach which allows for dependent competing risks and produces estimators for the sub-distribution functions. We also discuss identifiability of parameters if none of the masked items have their cause of failure clarified in a second stage analysis (e.g. autopsy). The procedures proposed are illustrated with two datasets.
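The E/M alternation for masked causes is easy to see in a stripped-down version with two independent causes and constant (rather than piecewise constant) hazards; this is a simplification of the setting above, shown only to make the E-step and M-step concrete:

```python
import numpy as np

def em_masked_rates(times, causes, n_iter=200):
    """EM for two competing exponential risks with masked causes.
    causes[i] is 1, 2, or 0 when the cause is masked (either is possible).
    With constant hazards the posterior cause probability for a masked
    failure is l1/(l1+l2), independent of the failure time.
    Returns the estimated rates (lambda1, lambda2)."""
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    total_time = times.sum()  # total exposure (all items fail; no censoring)
    l1, l2 = 1.0, 1.0
    for _ in range(n_iter):
        # E-step: expected number of failures attributable to each cause
        p1 = l1 / (l1 + l2)
        e1 = (causes == 1).sum() + p1 * (causes == 0).sum()
        e2 = (causes == 2).sum() + (1 - p1) * (causes == 0).sum()
        # M-step: rate = expected failure count / total exposure time
        l1, l2 = e1 / total_time, e2 / total_time
    return l1, l2
```

In this simple case the fixed point allocates the masked failures in proportion to the unmasked cause counts; the paper's contribution is to handle dependent risks and piecewise hazards, where the allocation is no longer this trivial.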
160.
In this paper, we examine a method for analyzing competing risks data where the failure type of interest is missing or incomplete, but where there is an intermediate event, and only patients who experience the intermediate event can die of the cause of interest. In some applications, a method called "log-rank subtraction" has been applied to these problems. However, there has been no systematic study of this methodology. We investigate the statistical properties of the method and further propose a modified method by including a weight function in the construction of the test statistic to correct for potential biases. A class of tests is then proposed for comparing the disease-specific mortality in the two groups. The tests are based on comparing the difference of weighted log-rank scores for the failure type of interest. We derive the asymptotic properties for the modified test procedure. Simulation studies indicate that the tests are unbiased and have reasonable power. The results are also illustrated with data from a breast cancer study.