131.
A two-level regression mixture model is discussed and contrasted with the conventional two-level regression model. Simulated and real data shed light on the modelling alternatives. The real-data analyses investigate gender differences in mathematics achievement from the US National Education Longitudinal Survey. The two-level regression mixture analyses show that unobserved heterogeneity should not be presupposed to exist only at level 2 at the expense of level 1. Both the simulated and the real-data analyses show that level-1 heterogeneity in the form of latent classes can be mistaken for level-2 heterogeneity in the form of the random effects used in conventional two-level regression analysis. Because of this, mixture models have an important role to play in multilevel regression analyses: they allow heterogeneity to be investigated more fully, attributing the different portions of the heterogeneity to the correct levels.
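A stylized simulation sketch of the phenomenon described above, not the paper's design: the group sizes, class probabilities, and coefficients are all assumptions. Data are generated from two level-1 latent classes whose membership clusters within groups; a conventional random-intercept fit then absorbs the discrete mixture into an apparently continuous level-2 variance component.

```python
# Hypothetical simulation: level-1 latent classes surfacing as a level-2
# random-intercept variance. All parameter values are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, n_per = 100, 30
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)

# Level-1 latent classes; membership clusters within groups
# (class probability 0.2 or 0.8 depending on the group).
p_class = np.where(rng.random(n_groups) < 0.5, 0.2, 0.8)[group]
cls = rng.random(x.size) < p_class
y = np.where(cls, 1.0, -1.0) + 0.5 * x + rng.normal(scale=0.5, size=x.size)

# Conventional two-level random-intercept fit: the discrete class
# structure is read as a continuous normal level-2 random effect.
fit = sm.MixedLM(y, sm.add_constant(x), groups=group).fit()
print(fit.cov_re)  # sizeable estimated level-2 variance component
```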
132.
This paper explores the utility of different approaches for modeling longitudinal count data with dropouts, arising from a clinical study of a treatment for actinic keratosis lesions on the face and balding scalp. A feature of these data is that, as the disease improves for subjects on the active arm, their data show larger dispersion than the data for subjects on the vehicle arm, exhibiting over-dispersion relative to the Poisson distribution. After fitting the marginal (or population-averaged) model using generalized estimating equations (GEE), we note that inferences from such a model might be biased because dropouts are treatment related. We then consider a weighted GEE (WGEE) in which each subject's contribution to the analysis is weighted inversely by the subject's probability of dropout. Based on the model findings, we argue that the WGEE might not address the concerns about the impact of dropouts on the efficacy findings when dropouts are treatment related. As an alternative, we consider likelihood-based inference in which random effects are added to the model to allow for heterogeneity across subjects. Finally, we consider a transition model in which, unlike the previous approaches that model the log-link function of the mean response, we model the subject's actual lesion counts. This model is an extension of the Poisson autoregressive model of order 1, where the autoregressive parameter is taken to be a function of treatment as well as other covariates, to induce different dispersions and correlations for the two treatment arms. We conclude with a discussion of model selection. Published in 2009 by John Wiley & Sons, Ltd.
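A minimal sketch of the inverse-probability-of-dropout weighting step, assuming a long-format data frame with hypothetical columns id, visit, count, treat, prev_count, and stayed (an indicator of remaining in the study at the next visit). The paper's actual dropout model is not reproduced; this is only the usual IPW construction fed into a weighted Poisson GEE.

```python
# Hedged WGEE sketch; column names are hypothetical, not from the paper.
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Exchangeable

def wgee_fit(df):
    # Step 1: logistic model for the probability of remaining in the
    # study at each visit, given treatment and the previous count.
    design = sm.add_constant(df[["treat", "prev_count"]])
    stay = sm.Logit(df["stayed"], design).fit(disp=0)
    p_stay = stay.predict(design)

    # Step 2: cumulative product of staying probabilities within each
    # subject, inverted to give each observed record its IPW weight.
    df = df.assign(w=1.0 / p_stay.groupby(df["id"]).cumprod())

    # Step 3: weighted Poisson GEE for the lesion counts (log link,
    # exchangeable working correlation).
    model = sm.GEE(df["count"], sm.add_constant(df[["treat", "visit"]]),
                   groups=df["id"], family=sm.families.Poisson(),
                   cov_struct=Exchangeable(), weights=df["w"])
    return model.fit()
```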
133.
Since Durbin (1954) and Sargan (1958), the instrumental variable (IV) method has been one of the most popular procedures among economists and other social scientists for handling linear models with errors-in-variables. A direct application of this method to nonlinear errors-in-variables models, however, fails to yield consistent estimators.

This article restricts attention to Tobit and Probit models and shows that simple recentering and rescaling of the observed dependent variable may restore consistency of the standard IV estimator if the true dependent variable and the IVs are jointly normally distributed. Although the required condition seems rarely to be satisfied by real data, our Monte Carlo experiment suggests that the proposed estimator may be quite robust to deviations from normality.
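For reference, a sketch of the standard just-identified linear IV estimator that the proposed recentering and rescaling would feed into; the transformation itself is not reproduced here.

```python
# The textbook IV estimator, beta_hat = (Z'X)^{-1} Z'y, for the
# just-identified case (as many instruments as regressors).
import numpy as np

def iv_estimate(y, X, Z):
    # Solve Z'X beta = Z'y rather than forming an explicit inverse.
    return np.linalg.solve(Z.T @ X, Z.T @ y)
```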
134.
The paper introduces a two-pass adaptive cumulative sum (CUSUM) statistic to identify age clusters (age groupings) that contribute significantly to epidemics or unusually high counts. If epidemiologists know that an epidemic is confined to a narrow age group, this information not only makes clear where to target the epidemiological effort but also helps them decide whether to respond. It is much easier to control an epidemic that starts in a narrow age range of the population, such as pre-school children, than one that is not confined demographically or geographically.
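A plain one-sided CUSUM on standardized counts, run once per age group, as a baseline sketch; the paper's two-pass adaptive refinements are not reproduced, and the baseline mean mu0, standard deviation sigma0, and reference value k are assumptions.

```python
# Baseline one-sided CUSUM; mu0, sigma0, and k are assumed inputs.
import numpy as np

def cusum(counts, mu0, sigma0, k=0.5):
    z = (np.asarray(counts, float) - mu0) / sigma0
    s, path = 0.0, []
    for zt in z:
        s = max(0.0, s + zt - k)   # accumulate excess over reference k
        path.append(s)
    return np.array(path)          # alarm when the path crosses a threshold h

# Running one CUSUM per age group flags the age clusters whose
# statistic crosses the decision threshold.
```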
135.
A Bayesian approach based on a direct resampling process is developed for the analysis of the shift-point problem. In many problems it is straightforward to isolate the marginal posterior distribution of the shift-point parameter and the conditional distribution of some of the parameters given the shift point and the other remaining parameters. When this is possible, a direct sampling approach is easily implemented whereby standard random number generators can be used to generate samples from the joint posterior distribution of all the parameters in the model. The technique is illustrated with examples involving one shift for Poisson processes and regression models.
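A sketch of the direct-sampling idea for the single-shift Poisson case, assuming independent Gamma(a, b) priors on the two rates (the prior values are assumptions): the marginal posterior of the shift point is available in closed form, so the shift point and both rates can be drawn directly with standard generators, as the abstract describes.

```python
# Direct sampling for one Poisson shift point: x[0:k] ~ Poisson(lam1),
# x[k:n] ~ Poisson(lam2), with assumed Gamma(a, b) priors on both rates.
import numpy as np
from scipy.special import gammaln

def sample_shift_point(x, n_draws=1000, a=1.0, b=1.0, rng=None):
    rng = rng or np.random.default_rng()
    x = np.asarray(x)
    n = len(x)
    S = np.cumsum(x)
    ks = np.arange(1, n)                    # shift after observation k
    s1, s2 = S[ks - 1], S[-1] - S[ks - 1]   # segment sums
    # Closed-form (log) marginal posterior of k, up to a constant.
    logp = (gammaln(a + s1) - (a + s1) * np.log(b + ks)
            + gammaln(a + s2) - (a + s2) * np.log(b + n - ks))
    p = np.exp(logp - logp.max())
    p /= p.sum()
    k = rng.choice(ks, size=n_draws, p=p)   # marginal draws of k
    s1k = S[k - 1]
    # Conditional Gamma draws for the two rates given k.
    lam1 = rng.gamma(a + s1k, 1.0 / (b + k))
    lam2 = rng.gamma(a + S[-1] - s1k, 1.0 / (b + n - k))
    return k, lam1, lam2
```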
136.
This article considers optimal experimental design for fitting a lower-order polynomial to a higher-order response function when observations may be subject to shifts in means as well as in variances. It is found that Karson, Manson and Hader's (1969) optimum designs provide protection, in some sense, against model inadequacies even when observations are subject to shifts in means and variances.
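A small illustration of the underlying setting only, not the paper's optimality results: when a straight line is fitted to a response that is actually quadratic, the bias at the design points depends on where those points are placed, which is why the choice of design matters.

```python
# Bias of a straight-line fit when the true response has an omitted
# quadratic term beta2 * x^2; the designs compared are illustrative.
import numpy as np

def line_fit_bias(design, beta2=1.0):
    x = np.asarray(design, float)
    X = np.column_stack([np.ones_like(x), x])   # assumed model: a line
    H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
    true_mean = beta2 * x**2                    # omitted curvature
    return H @ true_mean - true_mean            # bias at the design points

print(line_fit_bias([-1, -0.5, 0.5, 1]))
print(line_fit_bias([-1, -1, 1, 1]))
```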
137.
Scale-equivariant estimators of the common variance σ² of correlated normal random variables have mean squared errors (MSEs) that depend on the unknown correlations. For this reason, a scale-equivariant estimator of σ² that uniformly minimizes the MSE does not exist. For the equi-correlated case, we develop three equivariant estimators of σ²: a Bayesian estimator under an invariant prior and two non-Bayesian estimators. We then generalize these three estimators to the case of several variables with multiple unknown correlations. In addition, we develop a system of confidence intervals that produces the desired coverage probability while being efficient in terms of expected length.
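A quick simulation of the opening claim, under assumed values of n, σ², and ρ: for equi-correlated normals the usual sample variance satisfies E[s²] = σ²(1 − ρ), so its bias (and hence its MSE) depends on the unknown correlation. The paper's three equivariant estimators are not reproduced here.

```python
# Check E[s^2] = sigma^2 * (1 - rho) for equi-correlated normals;
# n, sigma2, and the rho grid are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 10, 4.0
for rho in (0.0, 0.3, 0.6):
    cov = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
    x = rng.multivariate_normal(np.zeros(n), cov, size=20000)
    s2 = x.var(axis=1, ddof=1)          # usual sample variance
    print(rho, s2.mean())               # approximately sigma2 * (1 - rho)
```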
138.
The individuality of a fingerprint is based on the configuration of occurrences of the ten Galton characteristics (ridge endings, forks, etc.). A model (Osterburg, Parthasarathy, Raghavan and Sclove, 1977) for the occurrence of these characteristics, in terms of a grid of cells, is further developed. The occurrence of the characteristics is modelled as a two-dimensional multivariate Poisson process. This approach allows one to treat multiple occurrences in a more satisfying way than in Osterburg, Parthasarathy, Raghavan and Sclove (1977) or Sclove (1978).
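A sketch of the grid-cell idea under a simplifying assumption: if occurrences in the cells are treated as independent Poisson counts with a common assumed intensity, the log-probability of an observed configuration is a sum of Poisson log-pmfs. The full model is a two-dimensional multivariate Poisson process, which this sketch does not capture.

```python
# Simplified grid-cell sketch; the intensity value is an assumption,
# not a fitted rate from the paper.
import numpy as np
from scipy.stats import poisson

def config_log_prob(counts, intensity):
    # counts: observed number of characteristics in each grid cell;
    # intensity: assumed Poisson mean per cell (the full multivariate
    # model would use one rate per characteristic type).
    return poisson.logpmf(np.asarray(counts), intensity).sum()

# e.g. a 5x5 grid with one cell holding a multiple occurrence
grid = np.zeros((5, 5), int)
grid[2, 3] = 2
print(config_log_prob(grid, 0.1))
```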
139.
A new method for constructing interpretable principal components is proposed. The method first clusters the variables, and then interpretable (sparse) components are constructed from the correlation matrices of the clustered variables. For the first step, a new weighted-variances method for clustering variables is proposed. It reflects the nature of the problem: the interpretable components should maximize the explained variance and thus provide sparse dimension reduction. An important feature of the new clustering procedure is that the optimal number of clusters (and components) can be determined in a non-subjective manner. The new method is illustrated using well-known simulated and real data sets, and it clearly outperforms many existing methods for sparse principal component analysis in terms of both explained variance and sparseness.
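A sketch of the cluster-then-sparsify idea, with plain average-linkage clustering on a correlation distance and a fixed cluster count standing in for the paper's weighted-variances clustering and its non-subjective choice of the number of clusters.

```python
# Cluster variables by correlation, then take the leading principal
# component within each cluster, so every component loads only on its
# own cluster's variables. The clustering method and fixed n_clusters
# are stand-ins, not the paper's procedure.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def clustered_components(X, n_clusters):
    R = np.corrcoef(X, rowvar=False)
    dist = squareform(1.0 - np.abs(R), checks=False)
    labels = fcluster(linkage(dist, method="average"),
                      n_clusters, criterion="maxclust")
    p = X.shape[1]
    comps = np.zeros((p, n_clusters))
    for c in range(1, n_clusters + 1):
        idx = np.flatnonzero(labels == c)
        # leading eigenvector of the within-cluster correlation block
        w = np.linalg.eigh(R[np.ix_(idx, idx)])[1][:, -1]
        comps[idx, c - 1] = w            # zero loadings outside the cluster
    return comps
```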
140.
Poisson sampling is a method for unequal-probability sampling with random sample size. Several implementations of the Poisson sampling design with fixed sample size exist, almost all of which are rejective methods; that is, the sample is not always accepted. Thus, the existing methods can be time-consuming or even infeasible in some situations. In this paper, a fast and non-rejective method, which is efficient even for large populations, is proposed and studied. The method is a new design for selecting a sample of fixed size with unequal inclusion probabilities. For large populations, the proposed design is very close to strict πps sampling, which is similar to the conditional Poisson (CP) sampling design, but its implementation is much more efficient than CP sampling. Moreover, the inclusion probabilities can be calculated recursively.
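For contrast with the proposed non-rejective design, a sketch of the classic rejective implementation of fixed-size conditional Poisson (CP) sampling, which redraws a Poisson sample until it happens to have the target size; this is the kind of scheme the abstract notes can be slow or infeasible for large populations.

```python
# Rejective CP sampling: condition a Poisson sample on having size n
# by redrawing until that size occurs.
import numpy as np

def cp_sample_rejective(p, n, rng=None):
    # p: Poisson (working) inclusion probabilities; n: target sample size
    rng = rng or np.random.default_rng()
    while True:
        sample = np.flatnonzero(rng.random(len(p)) < p)
        if len(sample) == n:        # reject and redraw otherwise
            return sample
```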