991.
In this paper, we investigate empirical likelihood (EL) inference for density-weighted average derivatives in nonparametric multiple regression models. A simply adjusted empirical log-likelihood ratio for the vector of density-weighted average derivatives is defined, and its limiting distribution is shown to be a standard chi-square distribution. To increase the accuracy and coverage probability of confidence regions, an EL inference procedure for the rescaled parameter vector is proposed by using a linear instrumental variables regression. The new method shares the same properties as the regular EL method with i.i.d. samples; for example, estimation of limiting variances and covariances is not needed. A Monte Carlo simulation study is presented to compare the new method with the normal approximation method and an existing EL method.
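To make the chi-square calibration concrete, here is a minimal sketch of the standard empirical log-likelihood ratio for a vector mean, the i.i.d. building block the abstract adjusts. It is not the authors' density-weighted average-derivative procedure, and the simulated estimating functions are hypothetical.

```python
# Minimal sketch: -2 log EL ratio for H0: E[g_i] = 0, calibrated
# against its chi-square limit.  Not the authors' estimator.
import numpy as np
from scipy import stats

def el_log_ratio(g, iters=50):
    """g: (n, p) array of estimating functions evaluated under H0."""
    _, p = g.shape
    lam = np.zeros(p)
    for _ in range(iters):                       # Newton steps for the
        denom = 1.0 + g @ lam                    # dual Lagrange multiplier
        grad = (g / denom[:, None]).sum(axis=0)
        hess = -(g.T / denom**2) @ g
        lam = lam - np.linalg.solve(hess, grad)
    return 2.0 * np.log(1.0 + g @ lam).sum()

rng = np.random.default_rng(0)
g = rng.normal(size=(200, 2))                    # hypothetical data, H0 true
r = el_log_ratio(g)
print(r, r < stats.chi2.ppf(0.95, df=2))         # compare to chi^2 with 2 df
```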
992.
A quasi-Newton acceleration for high-dimensional optimization algorithms
In many statistical problems, maximum likelihood estimation by an EM or MM algorithm suffers from excruciatingly slow convergence. This tendency limits the application of these algorithms to modern high-dimensional problems in data mining, genomics, and imaging. Unfortunately, most existing acceleration techniques are ill-suited to complicated models involving large numbers of parameters. The squared iterative methods (SQUAREM) recently proposed by Varadhan and Roland constitute one notable exception. This paper presents a new quasi-Newton acceleration scheme that requires only modest increments in computation per iteration and overall storage and rivals or surpasses the performance of SQUAREM on several representative test problems.
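For readers unfamiliar with the benchmark, the sketch below applies the SQUAREM acceleration of Varadhan and Roland to a slow EM map; the authors' quasi-Newton scheme itself is not reproduced, and the test problem (the mixing weight of a two-component Gaussian mixture with known components) is hypothetical.

```python
# Minimal sketch of SQUAREM accelerating a fixed-point EM map.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(4, 1, 300)])
f1, f2 = norm.pdf(x, 0, 1), norm.pdf(x, 4, 1)

def em_map(p):                                   # one EM update of the weight
    w = p * f1 / (p * f1 + (1 - p) * f2)         # E-step: responsibilities
    return w.mean()                              # M-step

def squarem(F, p, tol=1e-10, max_cycles=200):
    for _ in range(max_cycles):
        p1 = F(p)
        p2 = F(p1)
        r, v = p1 - p, p2 - 2 * p1 + p           # residual and curvature
        if abs(v) < tol:
            return p2
        alpha = -abs(r) / abs(v)                 # S3 steplength scheme
        p_new = p - 2 * alpha * r + alpha**2 * v
        p_new = min(max(p_new, 1e-6), 1 - 1e-6)  # keep the weight in (0, 1)
        p = F(p_new)                             # stabilizing EM step
        if abs(p - p_new) < tol:
            return p
    return p

print(squarem(em_map, 0.5))                      # ~0.7, in few EM evaluations
```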
993.
994.
A Bayesian multi-category kernel classification method is proposed. The algorithm performs the classification of the projections of the data onto the principal axes of the feature space. The advantage of this approach is that the regression coefficients are identifiable and sparse, leading to large computational savings and improved classification performance. The degree of sparsity is regulated in a novel framework based on Bayesian decision theory. The Gibbs sampler is implemented to find the posterior distributions of the parameters, so that predictive probability distributions can be obtained for new data points, giving a more complete picture of the classification. The algorithm is aimed at high-dimensional data sets where the dimension of measurements exceeds the number of observations. The applications considered in this paper are microarray, image processing and near-infrared spectroscopy data.
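A minimal non-Bayesian sketch of the projection idea, assuming scikit-learn is available: classify on the leading principal axes of an RBF kernel feature space when the number of features far exceeds the number of samples. The paper's sparse Bayesian coefficients and Gibbs sampler are not reproduced, and the synthetic data are hypothetical.

```python
# Kernel PCA projection followed by a plain classifier, as a
# stand-in for the paper's Bayesian sparse model.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 80, 500                                   # more features than samples
y = rng.integers(0, 3, n)                        # three classes
X = rng.normal(size=(n, p)) + 2.0 * y[:, None] * (np.arange(p) < 5)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3).fit(Xtr)
clf = LogisticRegression(max_iter=1000).fit(kpca.transform(Xtr), ytr)
print(clf.score(kpca.transform(Xte), yte))       # held-out accuracy
```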
995.
There is a wide variety of stochastic ordering problems where K groups (typically ordered with respect to time) are observed along with a (continuous) response. The interest of the study may lie in finding the change-point group, i.e. the group where an inversion of the trend of the variable under study is observed. A change point is not merely a maximum (or a minimum) of the time-series function; a further requirement is that the trend of the time series is monotonically increasing before that point and monotonically decreasing afterwards. A suitable solution can be provided within a conditional approach, i.e. by considering a suitable nonparametric combination of dependent tests for simple stochastic ordering problems. The proposed procedure is very flexible and can be extended to trend and/or repeated-measures problems. Comparisons with the well-known Mack-Wolfe test for umbrella alternatives and with Page's test for trend problems with correlated data are investigated through simulations and examples.
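As a point of reference, the sketch below implements the peak-known Mack-Wolfe umbrella statistic with a permutation calibration; the paper's conditional nonparametric-combination procedure is more general and is not reproduced here. The group means are made up for illustration.

```python
# Mack-Wolfe umbrella statistic with a known peak, permutation p-value.
import numpy as np

def mack_wolfe(groups, peak):
    # A_p = sum_{i<j<=p} U_ij + sum_{p<=i<j} U_ji, where U_ab counts
    # pairs with an observation from group a below one from group b.
    stat = 0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            if j <= peak:
                a, b = groups[i], groups[j]      # rising side: expect i < j
            elif i >= peak:
                a, b = groups[j], groups[i]      # falling side: expect j < i
            else:
                continue                         # pair straddles the peak
            stat += (a[:, None] < b[None, :]).sum()
    return stat

rng = np.random.default_rng(3)
data = [rng.normal(m, 1, 15) for m in (0.0, 0.6, 1.2, 0.7, 0.1)]  # peak at 2
obs = mack_wolfe(data, peak=2)
pooled, sizes = np.concatenate(data), [len(g) for g in data]
perm = np.empty(2000)
for b in range(2000):
    rng.shuffle(pooled)                          # permute group labels
    perm[b] = mack_wolfe(np.split(pooled, np.cumsum(sizes)[:-1]), peak=2)
print((np.sum(perm >= obs) + 1) / (2000 + 1))    # one-sided permutation p-value
```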
996.
997.
Simple nonparametric estimates of the conditional distribution of a response variable given a covariate are often useful for data exploration purposes or to help with the specification or validation of a parametric or semi-parametric regression model. In this paper we propose such an estimator in the case where the response variable is interval-censored and the covariate is continuous. Our approach consists in adding weights that depend on the covariate value in the self-consistency equation proposed by Turnbull (J R Stat Soc Ser B 38:290–295, 1976), which results in an estimator that is no more difficult to implement than Turnbull's estimator itself. We show the convergence of our algorithm and that our estimator reduces to the generalized Kaplan–Meier estimator (Beran, Nonparametric regression with randomly censored survival data, 1981) when the data are either complete or right-censored. We demonstrate by simulation that the estimator, bootstrap variance estimation and bandwidth selection (by rule of thumb or cross-validation) all perform well in finite samples. We illustrate the method by applying it to a dataset from a study on the incidence of HIV in a group of female sex workers from Kinshasa.
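A minimal sketch of the unweighted Turnbull self-consistency (EM) iteration that the proposal builds on; the covariate-dependent weights the paper adds are omitted, and the support grid is simplified to the observed interval endpoints.

```python
# Turnbull's self-consistency iteration for interval-censored data.
import numpy as np

def turnbull(L, R, iters=1000, tol=1e-8):
    s = np.unique(np.concatenate([L, R]))        # candidate support points
    A = (L[:, None] <= s[None, :]) & (s[None, :] <= R[:, None])
    p = np.full(s.size, 1.0 / s.size)            # initial mass function
    for _ in range(iters):
        denom = A @ p                            # P(interval i) under p
        p_new = (A * p).T @ (1.0 / denom) / len(L)   # E-step, then M-step
        if np.abs(p_new - p).max() < tol:
            break
        p = p_new
    return s, p

rng = np.random.default_rng(4)
t = rng.exponential(1.0, 100)                    # latent event times
L = np.floor(2 * t) / 2                          # each time is only known to
R = L + 0.5                                      # lie in a half-unit bin
s, p = turnbull(L, R)
print(np.round(s[:4], 2), np.round(p[:4], 3))    # estimated mass function
```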
998.
This paper considers the problem of modeling migraine severity assessments and their dependence on weather and time characteristics. We take the viewpoint of a patient who is interested in an individual migraine management strategy. Since factors influencing migraine can differ between patients in number and magnitude, we show how a patient's headache calendar reporting the severity measurements on an ordinal scale can be used to determine the dominating factors for this particular patient. One also has to account for dependencies among the measurements. For this, the autoregressive ordinal probit (AOP) model of Müller and Czado (J Comput Graph Stat 14:320–338, 2005) is utilized and fitted to a single patient's migraine data by a grouped move multigrid Monte Carlo (GM-MGMC) Gibbs sampler. Initially, covariates are selected using proportional odds models. Model fit and model comparison are discussed. A comparison with proportional odds specifications shows that the AOP models are preferred.
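To fix ideas, the sketch below simulates from an AOP-type model: a latent Gaussian AR(1) process around a regression mean, cut into ordinal severity categories. The GM-MGMC Gibbs sampler used for fitting is beyond a short example; the covariates, coefficients and cutpoints here are hypothetical.

```python
# Simulating ordinal severity scores from a latent AR(1) probit model.
import numpy as np

rng = np.random.default_rng(5)
T = 200
x = rng.normal(size=(T, 2))                      # made-up weather covariates
beta = np.array([0.8, -0.5])
phi = 0.6                                        # latent autocorrelation
cuts = np.array([-np.inf, -0.5, 0.5, 1.5, np.inf])  # 4 severity levels

z = np.empty(T)                                  # latent continuous severity
z[0] = x[0] @ beta + rng.normal()
for t in range(1, T):
    # AR(1) on the latent residual around the regression mean
    z[t] = x[t] @ beta + phi * (z[t - 1] - x[t - 1] @ beta) + rng.normal()

y = np.searchsorted(cuts, z, side="right") - 1   # ordinal categories 0..3
print(np.bincount(y, minlength=4))               # category frequencies
```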
999.
Approximate Bayesian inference on the basis of summary statistics is well-suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, methods based on rejection suffer from the curse of dimensionality as the number of summary statistics increases. Here we propose a machine-learning approach to the estimation of the posterior density by introducing two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared to state-of-the-art approximate Bayesian methods, and achieves a considerable reduction of the computational burden in two examples of inference, in statistical genetics and in a queueing model.
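A minimal sketch of ABC with a regression adjustment on a toy normal-mean problem, using a linear local correction (in the style of Beaumont et al.) as a simpler stand-in for the paper's nonlinear heteroscedastic fit with importance sampling.

```python
# Rejection ABC plus a linear regression adjustment of accepted draws.
import numpy as np

rng = np.random.default_rng(6)
s_obs = 1.3                                      # "observed" summary statistic

theta = rng.uniform(-5, 5, 50_000)               # draws from the prior
s = rng.normal(theta, 1 / np.sqrt(20))           # summary: mean of 20 obs

d = np.abs(s - s_obs)                            # rejection step: keep the
keep = d <= np.quantile(d, 0.01)                 # 1% nearest simulations
th, ss = theta[keep], s[keep]

b = np.polyfit(ss, th, 1)[0]                     # local linear slope
th_adj = th - b * (ss - s_obs)                   # regression adjustment

print(th_adj.mean(), th_adj.std())               # ~ N(1.3, 1/20) posterior
```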
1000.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk-set-based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second method uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite-sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.
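The sketch below illustrates only the treatment-weighting step (IPTW) for comparing mean event counts, assuming scikit-learn for the propensity model; the censoring-time imputation and IPCW machinery for the terminal-event problem are not reproduced, and the data are synthetic.

```python
# IPTW comparison of treatment-specific mean recurrent-event counts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=(n, 2))                      # confounders
p_trt = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5]))))  # true propensity
a = rng.binomial(1, p_trt)                       # treatment assignment
lam = np.exp(0.2 + 0.5 * x[:, 0] - 0.3 * a)      # event rate (true effect)
y = rng.poisson(lam)                             # recurrent-event counts

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
w = np.where(a == 1, 1 / ps, 1 / (1 - ps))       # inverse probability weights

mu1 = np.average(y[a == 1], weights=w[a == 1])   # weighted treated mean
mu0 = np.average(y[a == 0], weights=w[a == 0])   # weighted control mean
print(mu1 - mu0, mu1 / mu0)                      # difference and ratio
```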