10,000 results in total; entries 991-1000 are shown below.
991.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), the recurrent event sequence is often subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk-set-based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method imputes unobserved censoring times, while the second uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted for through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (and then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, and finite-sample properties are assessed through simulation. The proposed methods are applied to kidney transplant data.
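A minimal sketch of the inverse-probability-of-treatment-weighting ingredient described in this abstract: subjects are reweighted so that the covariate distribution is balanced across arms, and weighted mean event counts are then compared. The toy version below assumes complete follow-up (no censoring or death adjustment), so it illustrates only the covariate-balancing step, not the paper's full estimator; all variable names and the simulated data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                      # baseline covariates
p_trt = 1 / (1 + np.exp(-(0.5 * X[:, 0])))       # treatment depends on X -> confounding
trt = rng.binomial(1, p_trt)
rate = np.exp(0.3 * X[:, 0] - 0.5 * trt)         # true recurrent-event rate
n_events = rng.poisson(rate)                     # observed event counts per subject

# Propensity score and stabilized inverse-probability-of-treatment weights
ps = LogisticRegression().fit(X, trt).predict_proba(X)[:, 1]
w = np.where(trt == 1, trt.mean() / ps, (1 - trt.mean()) / (1 - ps))

mu1 = np.average(n_events[trt == 1], weights=w[trt == 1])
mu0 = np.average(n_events[trt == 0], weights=w[trt == 0])
print(f"weighted mean events: treated={mu1:.2f}, control={mu0:.2f}, "
      f"difference={mu1 - mu0:.2f}, ratio={mu1 / mu0:.2f}")
```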
992.
One method of assessing the fit of an event history model is to plot the empirical standard deviation of standardised martingale residuals. We develop an alternative procedure that is also valid in the presence of measurement error and is applicable to both longitudinal and recurrent event data. Since the covariance between martingale residuals at times t0 and t > t0 does not depend on t, a plot of these covariances should, for fixed t0, show no time trend. A test statistic is developed from the increments in the estimated covariances, and we investigate its properties under various types of model misspecification. Applications of the approach are presented using two Brazilian studies measuring daily prevalence and incidence of infant diarrhoea and a longitudinal study of treatment for schizophrenia.
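A small numerical illustration of the covariance property this abstract relies on: for a martingale M(t), Cov(M(t0), M(t)) does not depend on t for t > t0, so estimated covariances plotted against t should show no trend when the model is correct. The "residuals" here come from a homogeneous Poisson process with known rate, M_i(t) = N_i(t) - lam*t; this is a toy stand-in, not the paper's test statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 5000, 1.0
grid = np.linspace(0.0, 5.0, 51)

# event times of n independent Poisson processes (30 events each is ample for t <= 5)
gaps = rng.exponential(1 / lam, size=(n, 30))
event_times = np.cumsum(gaps, axis=1)
N = (event_times[:, :, None] <= grid[None, None, :]).sum(axis=1)  # counting processes N_i(t)
M = N - lam * grid                                                # martingale residuals M_i(t)

t0_idx = 10                                                       # t0 = 1.0 on this grid
cov_with_t0 = [np.cov(M[:, t0_idx], M[:, j])[0, 1] for j in range(t0_idx, len(grid))]
print(np.round(cov_with_t0[:8], 2))   # roughly constant (about lam * t0 = 1): no time trend
```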
993.
Conformal predictors, introduced by Vovk et al. (Algorithmic Learning in a Random World, Springer, New York, 2005), serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. We propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have significant influence on the response variable even if the total number of covariates is very large. Our approach combines the principle of conformal prediction with the ℓ1-penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter ε>0 and has coverage probability at least 1−ε. The numerical experiments reported in the paper show that the length of the confidence set is small. Furthermore, as a by-product of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated and real data.
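A minimal sketch in the spirit of this abstract: combine a sparse LASSO point predictor with conformal calibration to obtain a prediction interval with coverage at least 1−ε. Split conformal prediction is used here purely for simplicity; the paper's construction may differ in detail, and the data, parameter values, and penalty-selection step (cross-validated LassoCV) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 400, 100                                   # sparse setting: many covariates, few signals
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)             # data-driven choice of the LASSO penalty

eps = 0.1
scores = np.abs(y_cal - model.predict(X_cal))     # conformity scores on the calibration split
k = int(np.ceil((1 - eps) * (len(scores) + 1)))   # finite-sample quantile index
q = np.sort(scores)[min(k, len(scores)) - 1]

x_new = rng.normal(size=(1, p))
pred = model.predict(x_new)[0]
print(f"prediction interval: [{pred - q:.2f}, {pred + q:.2f}] (target coverage {1 - eps:.0%})")
```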
994.
This paper considers settings where populations of units may experience recurrent events, termed failures for convenience, and where the units are subject to varying levels of usage. We provide joint models for the recurrent events and usage processes, which facilitate analysis of their relationship as well as prediction of failures. Data on usage are often incomplete and we show how to implement maximum likelihood estimation in such cases. Random effects models with linear usage processes and gamma usage processes are considered in some detail. Data on automobile warranty claims are used to illustrate the proposed models and estimation methodology.
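A toy version of the recurrent-events-with-usage idea described in this abstract: each unit accumulates usage linearly in time at its own (random) rate, and failures occur as a Poisson process in accumulated usage. With fully observed usage, the maximum likelihood estimate of the failure rate per unit usage is simply total failures divided by total usage; the paper's setting (incomplete usage data, random-effects and gamma usage models) is considerably richer. All quantities below are simulated and illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 1000
follow_up = rng.uniform(0.5, 3.0, n_units)              # years each unit is observed
usage_rate = rng.gamma(4.0, 5.0, size=n_units)          # heterogeneous usage per year
usage = usage_rate * follow_up                          # linear usage process, fully observed

lam_true = 0.02                                         # failures per unit of usage
claims = rng.poisson(lam_true * usage)                  # warranty-claim counts

lam_hat = claims.sum() / usage.sum()                    # Poisson MLE given observed usage
print(f"true rate {lam_true}, estimated rate {lam_hat:.4f}")
```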
995.
996.
Recently, the orthodox best linear unbiased predictor (BLUP) method was introduced for inference about random effects in Tweedie mixed models. With the use of h-likelihood, we illustrate that the standard likelihood procedures, developed for inference about fixed unknown parameters, can be used for inference about random effects. We show that the necessary standard error for the prediction interval of the random effect can be computed from the Hessian matrix of the h-likelihood. We also show numerically that the h-likelihood provides a prediction interval that maintains a more precise coverage probability than the BLUP method.
997.
This paper considers the task of determining expected values of sample moments when the sample members have been selected on the basis of noisy information, a recurring problem in the theory of evolution strategies. Exact expressions are derived for the expected values of sums of products of concomitants of selected order statistics. Using Edgeworth and Cornish-Fisher approximations, explicit results are then obtained that depend on coefficients which can be determined numerically. While the results are exact only for normal populations, experiments show that including skewness and kurtosis in the calculations can greatly improve the results for other distributions.
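A Monte Carlo sketch of the quantity discussed in this abstract: select the mu best of lambda candidates according to a noisy criterion and examine moments of their true values, i.e. the concomitants of the selected order statistics. This brute-force check is what the analytical Edgeworth/Cornish-Fisher approximations aim to predict; the normal population and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, mu, sigma_noise, reps = 20, 5, 0.5, 100_000

true_vals = rng.normal(size=(reps, lam))                    # true values of lambda candidates
noisy_vals = true_vals + sigma_noise * rng.normal(size=(reps, lam))

idx = np.argsort(noisy_vals, axis=1)[:, :mu]                # pick the mu smallest noisy values
selected_true = np.take_along_axis(true_vals, idx, axis=1)  # concomitants of the selected order statistics

print("E[mean of selected true values]   :", selected_true.mean().round(4))
print("E[mean of squared selected values]:", (selected_true**2).mean().round(4))
```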
998.
Originally, the exponentially weighted moving average (EWMA) control chart was developed for detecting changes in the process mean, and the average run length (ARL) became the most popular performance measure for schemes with this objective. When monitoring the mean of independent and normally distributed observations, the ARL can be determined with high precision. Nowadays, EWMA control charts are also used for monitoring the variance, and charts based on the sample variance S² are an appropriate choice. Applying the ARL evaluation techniques known from mean-monitoring charts, however, is difficult: the most accurate method, solving a Fredholm integral equation with the Nyström method, fails because the kernel is improper in the case of chi-squared distributions. Here, we exploit the collocation method and the product Nyström method and compare them to Markov chain based approaches. We find that collocation leads to higher accuracy than currently established methods.
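A brute-force Monte Carlo reference for the quantity this abstract is about: the in-control ARL of a one-sided EWMA chart on the sample variance S². The collocation and product Nyström methods in the paper compute this ARL far more accurately and efficiently; the simulation below only illustrates the chart recursion and the target quantity. The smoothing constant, subgroup size, and control limit h are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, n_sub, h, sigma2 = 0.1, 5, 1.5, 1.0          # EWMA weight, subgroup size, upper limit, in-control variance

def run_length(max_steps=100_000):
    z = sigma2                                    # start the EWMA at the in-control variance
    for t in range(1, max_steps + 1):
        s2 = rng.normal(0.0, np.sqrt(sigma2), n_sub).var(ddof=1)  # subgroup sample variance
        z = (1 - lam) * z + lam * s2              # EWMA recursion on S^2
        if z > h:                                 # signal when the EWMA exceeds the limit
            return t
    return max_steps

arl_hat = np.mean([run_length() for _ in range(1000)])
print(f"estimated in-control ARL ~ {arl_hat:.0f}")
```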
999.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model is used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters are shown to involve a latent variable. It is therefore impractical to estimate directly the unknown quantities in the moments of the estimators that are needed for bandwidth selection of a smoothing estimator and for the construction of confidence intervals, which are based respectively on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. The method is simple and fast to implement in practice, and is at least as accurate as other bootstrap methods. A Monte Carlo simulation demonstrates its usefulness, and the procedure is further illustrated with a recurrent-event sample of intravenous drug users receiving inpatient care over time.
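A minimal sketch of the random weighted bootstrap idea used in this abstract: rather than resampling subjects, each bootstrap replicate assigns every subject an i.i.d. positive random weight (standard exponentials here) and recomputes the estimator, and percentiles of the replicates give an approximate confidence interval. The toy estimator below is a crude event rate (events per unit of follow-up time), not the paper's baseline-rate or regression estimators; all simulated quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
follow_up = rng.uniform(0.5, 5.0, n)                  # years of observation per subject
events = rng.poisson(1.3 * follow_up)                 # recurrent-event counts per subject

def weighted_rate(w):
    # weighted events per unit of follow-up time
    return np.sum(w * events) / np.sum(w * follow_up)

B = 2000
reps = np.array([weighted_rate(rng.exponential(1.0, n)) for _ in range(B)])
lo_ci, hi_ci = np.percentile(reps, [2.5, 97.5])
print(f"rate estimate {weighted_rate(np.ones(n)):.3f}, "
      f"95% random-weighted bootstrap CI ({lo_ci:.3f}, {hi_ci:.3f})")
```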
1000.
Young adulthood is a period renowned for engagement in impulsive and risky behaviors, including gambling, and there are some indications that young adults exhibit higher gambling rates than older adults. Problem gambling has also been linked to ADHD. This longitudinal study examines the relationship between gambling and ADHD in an epidemiological sample of young adults (n = 235; 179 males, 56 females) aged 18-24. Results indicate that individuals who report childhood ADHD symptoms that persist into young adulthood experience greater gambling problem severity than participants with no ADHD or with non-persistent ADHD.