Search results: 1,895 articles in total.
251.
Principal components are a well-established tool in dimension reduction. The extension to principal curves allows for general smooth curves which pass through the middle of a multidimensional data cloud. In this paper, local principal curves are introduced, which are based on the localization of principal component analysis. The proposed algorithm is able to identify closed curves as well as multiple curves which may or may not be connected. To evaluate the performance of principal curves as a tool for data reduction, a measure of coverage is suggested. Using simulated and real data sets, the approach is compared with various alternative concepts of principal curves.
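As a rough, hedged illustration of the localized-PCA idea, the Python sketch below traces a curve by repeatedly computing the first principal component of kernel-weighted data around the current point and stepping along it. The bandwidth `h`, step length `step`, starting point, and stopping rule are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def local_principal_curve(X, x0, h=0.5, step=0.2, n_steps=50):
    """Trace a crude local principal curve through data X (n x d), starting
    at x0, by moving along the first local principal component. Sketch only."""
    path = [np.asarray(x0, dtype=float)]
    direction = None
    for _ in range(n_steps):
        x = path[-1]
        # Gaussian kernel weights centred at the current point.
        w = np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2)
        if w.sum() < 1e-8:
            break  # wandered away from the data cloud
        mu = (w[:, None] * X).sum(axis=0) / w.sum()           # local mean
        C = (w[:, None] * (X - mu)).T @ (X - mu) / w.sum()    # local covariance
        v = np.linalg.eigh(C)[1][:, -1]                       # first local PC
        # Keep a consistent orientation so the curve does not double back.
        if direction is not None and v @ direction < 0:
            v = -v
        direction = v
        path.append(mu + step * v)
    return np.array(path)
```

Running the tracer forwards and backwards from a high-density starting point, handling closed or multiple curves, and the coverage measure itself are all left out of this sketch.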
252.
We consider the calculation of power functions in classical multivariate analysis. In this context, power can be expressed in terms of tail probabilities of certain noncentral distributions. The necessary noncentral distribution theory was developed between the 1940s and 1970s by a number of authors, but tractable methods for calculating the relevant probabilities have been lacking. In this paper, we present simple yet extremely accurate saddlepoint approximations to power functions associated with the following classical test statistics: the likelihood ratio statistic for testing the general linear hypothesis in MANOVA; the likelihood ratio statistic for testing block independence; and Bartlett's modified likelihood ratio statistic for testing equality of covariance matrices.
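For orientation, saddlepoint tail approximations of this kind typically take the generic Lugannani–Rice form shown below, where \(K\) is the relevant cumulant generating function and \(\hat s\) solves the saddlepoint equation \(K'(\hat s) = x\); the paper's specific approximations for the noncentral MANOVA-related distributions are not reproduced here.

```latex
\[
  \Pr(X \ge x) \;\approx\; 1 - \Phi(\hat w) + \phi(\hat w)\!\left(\frac{1}{\hat u} - \frac{1}{\hat w}\right),
  \qquad
  \hat w = \operatorname{sgn}(\hat s)\sqrt{2\{\hat s x - K(\hat s)\}},
  \quad
  \hat u = \hat s\sqrt{K''(\hat s)}.
\]
```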
253.
In this paper, we introduce non-centered and partially non-centered MCMC algorithms for stochastic epidemic models. Centered algorithms previously considered in the literature perform adequately well for small data sets. However, due to the high dependence inherent in the models between the missing data and the parameters, the performance of the centered algorithms deteriorates appreciably when larger data sets are considered. Therefore, non-centered and partially non-centered algorithms are introduced and shown to outperform the existing centered algorithms.
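The centered versus non-centered distinction can be seen in a toy Gaussian hierarchy; the Python sketch below is a generic reparameterization example under assumed toy values, not the epidemic-model algorithms of the paper. The non-centered version rewrites the latent quantity as a deterministic function of the parameter and white noise, so the sampler updates variables that are a priori independent of the parameter.

```python
import numpy as np

# Toy hierarchy: theta ~ N(0, 1), latent x | theta ~ N(theta, sigma^2), data y | x ~ N(x, tau^2).
sigma, tau, y = 0.1, 1.0, 0.7  # illustrative values

def logpost_centered(theta, x):
    # Centered parameterization: works with (theta, x), which are strongly
    # dependent a posteriori when sigma is small, hurting MCMC mixing.
    return -0.5 * (theta**2 + ((x - theta) / sigma)**2 + ((y - x) / tau)**2)

def logpost_noncentered(theta, z):
    # Non-centered parameterization: x = theta + sigma * z with z ~ N(0, 1),
    # so the sampler works with (theta, z), which are a priori independent.
    x = theta + sigma * z
    return -0.5 * (theta**2 + z**2 + ((y - x) / tau)**2)
```

A random-walk Metropolis sampler applied to the non-centered target typically mixes far better when sigma is small, mirroring the dependence problem described above.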
254.
The performance of computationally inexpensive model selection criteria in the context of tree-structured subgroup analysis is investigated. It is shown through simulation that no single model selection criterion exhibits uniformly superior performance over a wide range of scenarios. Therefore, a two-stage approach for model selection is proposed and shown to perform satisfactorily. An applied example of subgroup analysis is presented, problems associated with tree-structured subgroup analysis are discussed, and practical solutions are suggested.
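As a reminder of what computationally inexpensive criteria look like in practice, the hedged sketch below ranks hypothetical candidate models by AIC and BIC computed from their maximized log-likelihoods; the candidate subgroup models, their likelihood values, and the paper's two-stage procedure itself are illustrative or omitted.

```python
import math

def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * math.log(n_obs)

# Hypothetical candidates: (description, maximized log-likelihood, number of parameters).
candidates = [
    ("no split", -410.2, 2),
    ("split on covariate A", -402.8, 4),
    ("split on covariates A and B", -400.9, 6),
]
n_obs = 300
best_by_aic = min(candidates, key=lambda c: aic(c[1], c[2]))
best_by_bic = min(candidates, key=lambda c: bic(c[1], c[2], n_obs))
print(best_by_aic[0], best_by_bic[0])
```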
255.
Banks have been accumulating huge databases for many years and are increasingly turning to statistics to provide insight into customer behaviour, among other things. Credit risk is an important issue, and certain stochastic models have been developed in recent years to describe and predict loan default. Two of the major models currently used in the industry are considered here, and various ways of extending their application to the case where a loan is repaid in installments are explored. The aspect of interest is the probability distribution of the total loss due to repayment default at some time. Thus, the loss distribution is determined by the distribution of times to default, here regarded as a discrete-time survival distribution. In particular, the probabilities of large losses are to be assessed for insurance purposes.
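The following hedged Python sketch shows how a discrete-time distribution of default times induces a loss distribution for an installment loan; the hazards, balance schedule, and the simplification that the loss equals the outstanding balance at default are illustrative assumptions, not the industry models considered in the paper.

```python
import numpy as np

# Illustrative per-period hazards of default and outstanding balances at the
# start of each installment period (the assumed loss if default occurs then).
hazard = np.array([0.010, 0.015, 0.020, 0.020, 0.015])
balance = np.array([1000.0, 800.0, 600.0, 400.0, 200.0])

# Discrete-time survival: P(default at t) = h_t * prod_{s<t} (1 - h_s).
surv_before = np.concatenate(([1.0], np.cumprod(1.0 - hazard)[:-1]))
p_default_at = hazard * surv_before
p_no_default = np.prod(1.0 - hazard)

# Loss distribution: mass p_default_at[t] at loss balance[t], plus p_no_default at zero loss.
expected_loss = np.sum(p_default_at * balance)
prob_loss_at_least_600 = np.sum(p_default_at[balance >= 600.0])
print(expected_loss, prob_loss_at_least_600, p_no_default)
```

Tail quantities such as `prob_loss_at_least_600` are the kind of large-loss probabilities the abstract mentions assessing for insurance purposes.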
256.
In the presence of covariate information, the proportional hazards model is one of the most popular models. In this paper, working in a Bayesian nonparametric framework, we use a Markov (Lévy-driven) process to model the baseline hazard rate. Previous Bayesian nonparametric models have been based on neutral-to-the-right processes, which have a number of drawbacks, such as discreteness of the cumulative hazard function. We allow the covariates to be time-dependent functions and develop a full posterior analysis via substitution sampling. A detailed illustration is presented.
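For orientation, the proportional hazards form with time-dependent covariates referred to here is the standard one below, with the Bayesian nonparametric prior placed on the baseline rate \(\lambda_0\); the Lévy-driven construction and the substitution sampler themselves are not reproduced.

```latex
\[
  \lambda\bigl(t \mid z(\cdot)\bigr) \;=\; \lambda_0(t)\,\exp\bigl\{\beta^{\top} z(t)\bigr\}.
\]
```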
257.
An auxiliary variable method based on a slice sampler is shown to provide an attractive simulation-based model-fitting strategy for fitting Bayesian models under proper priors. Though broadly applicable, the approach is illustrated in the context of fitting spatial models for geo-referenced or point-source data. Spatial modeling within a Bayesian framework offers inferential advantages, and the slice sampler provides an algorithm that is essentially off the shelf. Further potential advantages over importance sampling and Metropolis approaches are noted, and illustrative examples are supplied.
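For reference, a generic univariate slice sampler with stepping-out and shrinkage is sketched below; it conveys the auxiliary variable idea of sampling uniformly under the density, but it is not the scheme developed in the paper for spatial models.

```python
import math, random

def slice_sample(logf, x0, w=1.0, n_samples=1000):
    """Generic univariate slice sampler: logf is the log of an (unnormalized)
    target density, x0 a starting value, w an initial bracket width."""
    samples, x = [], x0
    for _ in range(n_samples):
        # Auxiliary variable: a height drawn uniformly under the density at x.
        log_u = logf(x) + math.log(random.random())
        # Stepping out: grow a bracket around x until both ends leave the slice.
        left = x - w * random.random()
        right = left + w
        while logf(left) > log_u:
            left -= w
        while logf(right) > log_u:
            right += w
        # Shrinkage: propose uniformly in the bracket, shrinking it on rejection.
        while True:
            x_new = random.uniform(left, right)
            if logf(x_new) > log_u:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples

# Example: draws from a standard normal, whose log density is -x^2/2 up to a constant.
draws = slice_sample(lambda x: -0.5 * x * x, x0=0.0)
```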
258.
In biomedical studies, interest often focuses on the relationship between patients' characteristics or risk factors and both the quality of life and the survival time of the subjects under study. In this paper, we propose simultaneous modelling of quality of life and survival time using the observed covariates. Moreover, random effects are introduced into the simultaneous models to account for dependence between quality of life and survival time due to unobserved factors. EM algorithms are used to derive point estimates for the parameters in the proposed model, and the profile likelihood function is used to estimate their variances. The asymptotic properties of our proposed estimators are established. Finally, simulation studies are conducted to examine the finite-sample properties of the proposed estimators, and a liver transplantation data set is analyzed to illustrate our approach.
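One common way to write such a simultaneous model, given here purely as a hedged illustration (the paper's exact specification may differ), links a longitudinal quality-of-life outcome and the survival hazard through a shared random effect \(b_i\):

```latex
\[
  Y_{ij} = x_{ij}^{\top}\beta + b_i + \varepsilon_{ij},
  \qquad
  \lambda_i(t) = \lambda_0(t)\,\exp\bigl\{w_i^{\top}\gamma + \nu\, b_i\bigr\},
  \qquad
  b_i \sim N(0, \sigma_b^2).
\]
```

The shared \(b_i\) is what induces the dependence between quality of life and survival time attributed to unobserved factors.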
259.
In this paper, register-based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach of Shih and Louis (Biometrics, vol. 51, pp. 1384–1399, 1995b) and Glidden (Lifetime Data Analysis, vol. 6, pp. 141–156, 2000). Because register-based family studies often involve very large cohorts, a method for analysing a sampled cohort is also derived, together with the asymptotic properties of the estimators. The proposed methods are studied in simulations and the estimators are found to be highly efficient. Finally, the methods are applied to a study of mortality in twins.
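A stripped-down, uncensored version of the two-stage idea is sketched below: the margins are estimated first (here nonparametrically, via rescaled ranks), and a Clayton copula parameter is then maximized with the estimated margins plugged in. Censoring, family clustering, and cohort sampling, which are central to the paper, are deliberately ignored, and the choice of the Clayton family is an assumption of this illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_loglik(theta, u, v):
    """Log-likelihood of the Clayton copula density at pseudo-observations (u, v), theta > 0."""
    t = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log(1.0 + theta)
                  - (1.0 + theta) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(t))

def two_stage_clayton(x, y):
    # Stage 1: estimate the marginal distributions via rescaled ranks.
    n = len(x)
    u = rankdata(x) / (n + 1.0)
    v = rankdata(y) / (n + 1.0)
    # Stage 2: maximize the copula pseudo-likelihood in the dependence parameter.
    res = minimize_scalar(lambda th: -clayton_loglik(th, u, v),
                          bounds=(1e-3, 20.0), method="bounded")
    return res.x
```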
260.
Biplane projection imaging is one of the primary methods for imaging and visualizing the cardiovascular system in medicine. A key problem in this technique is to determine the imaging geometry (i.e., the relative rotation and translation) of the two projections so that the 3-D structure can be accurately reconstructed. Based on interesting observations and efficient geometric techniques, we present in this paper new algorithmic solutions for this problem. Compared with existing optimization-based approaches, our techniques yield better accuracy and have bounded execution time, and are thus more suitable for on-line applications. Our techniques can also easily detect outliers to further improve accuracy. This research was supported in part by NIH under USPHS grant number HL52567.
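A standard baseline for recovering the relative rotation and translation between two calibrated views is the eight-point estimation of the essential matrix followed by its SVD decomposition; the hedged sketch below shows that textbook approach under the assumption of intrinsics-free point correspondences, and is not the geometric algorithm proposed in the paper.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Estimate the essential matrix from (N, 2) arrays of normalized image points (N >= 8)."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

def decompose_essential(E):
    """Return the four (R, t) candidates; the physical one is chosen by cheirality in practice."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2, t = U @ W @ Vt, U @ W.T @ Vt, U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```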