81.
Summary. When evaluating potential interventions for cancer prevention, it is necessary to compare benefits and harms. With new study designs, new statistical approaches may be needed to facilitate this comparison. A case in point arose in a proposed genetic substudy of a randomized trial of tamoxifen versus placebo in asymptomatic women who were at high risk for breast cancer. Although the randomized trial showed that tamoxifen substantially reduced the risk of breast cancer, the harms from tamoxifen were serious and some were life threatening. In hopes of finding a subset of women with inherited risk genes who derive greater benefits from tamoxifen, we proposed a nested case–control study to test some trial subjects for various genes, together with new statistical methods to extrapolate benefits and harms to the general population. An important design question is whether or not the study should target common low penetrance genes. Our calculations show that useful results are likely only with rare high penetrance genes.
82.
Summary. Factor analysis is a powerful tool for identifying the common characteristics among a set of variables measured on a continuous scale. In the context of factor analysis for non-continuous data, most applications are restricted to item response data. We extend the factor model to accommodate ranked data. The Monte Carlo expectation–maximization algorithm is used for parameter estimation, with the E-step implemented via the Gibbs sampler. Analyses based on both complete and incomplete ranked data (e.g. ranking the top q out of k items) are considered. Estimation of the factor scores is also discussed. The proposed method is applied to a set of incomplete ranked data obtained from a survey carried out in Guangzhou, a major city in mainland China, to investigate the factors affecting people's attitudes towards choosing jobs.
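A minimal sketch of the Monte Carlo E-step idea can be given for the underlying continuous one-factor model; the ranked-data extension the abstract describes adds a constrained Gibbs step in which only the ordering of the latent scores is observed. All names and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor model: y_i = lam * f_i + e_i, with f_i ~ N(0,1) and
# e_ij ~ N(0, psi_j). Illustrative true values, not from the paper.
n, k = 500, 4
lam_true = np.array([1.0, 0.8, -0.5, 0.3])
psi_true = np.full(k, 0.25)
f_true = rng.normal(size=n)
y = f_true[:, None] * lam_true + rng.normal(scale=np.sqrt(psi_true), size=(n, k))

lam = np.ones(k)   # crude starting values
psi = np.ones(k)
for _ in range(100):
    # Monte Carlo E-step: sample f_i | y_i from its normal posterior
    # (in the ranked-data model this draw would be a Gibbs step over
    # latent utilities constrained to respect each observed ranking).
    prec = 1.0 + np.sum(lam**2 / psi)
    mean = (y @ (lam / psi)) / prec
    draws = mean[:, None] + rng.normal(scale=1/np.sqrt(prec), size=(n, 200))
    Ef = draws.mean(axis=1)
    Ef2 = (draws**2).mean(axis=1)
    # M-step: regression-type updates for loadings and uniquenesses.
    lam = (y * Ef[:, None]).sum(axis=0) / Ef2.sum()
    psi = (y**2 - 2 * lam * y * Ef[:, None] + lam**2 * Ef2[:, None]).mean(axis=0)
```

After the loop, `lam` and `psi` approximate the maximum likelihood estimates, up to Monte Carlo noise from the simulated E-step.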
83.
Longitudinal data often contain missing observations, and it is in general difficult to justify a particular missing-data mechanism, whether random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and association parameters for longitudinal binary data with drop-outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
85.
To capture mean and variance asymmetries and time‐varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy‐tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time‐delay parameter. Self‐exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value‐at‐risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models.
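The kind of process the abstract describes can be sketched by simulating a two-regime, self-exciting threshold SV model with Student-t errors. The parameterization and values below are assumptions chosen to make the regime contrast visible, not the authors' specification; the paper additionally treats the threshold and delay as unknown and estimates everything by MCMC:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-regime threshold SV: the log-volatility AR(1) switches parameters
# according to whether the threshold variable z_{t-d} (here the lagged
# return, i.e. a self-exciting threshold) exceeds r.
T, r, d = 5000, 0.0, 1
mu = (-0.3, -1.2)    # regime log-volatility levels: higher vol after losses
phi = (0.5, 0.5)     # regime persistence (kept low so regimes separate clearly)
sig_eta = 0.2

h = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    regime = int(y[t - d] > r)   # 0 after a non-positive return, 1 otherwise
    h[t] = mu[regime] + phi[regime] * (h[t-1] - mu[regime]) + sig_eta * rng.normal()
    # heavy-tailed measurement error, e.g. Student-t with 5 df
    y[t] = np.exp(h[t] / 2) * rng.standard_t(5)
```

In this simulation, returns following a non-positive return are on average more volatile, which is the mean/variance asymmetry the threshold structure is designed to capture.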
86.
In non-experimental research, data on the same population process may be collected simultaneously by more than one instrument. For example, in the present application, two sample surveys and a population birth registration system all collect observations on first births by age and year, while the two surveys additionally collect information on women's education. To make maximum use of the three data sources, the survey data are pooled and the population data are introduced as constraints in a logistic regression equation. Introducing the population data as constraints reduces the standard errors of the age and birth-cohort parameters of the regression equation by about three-quarters. Pooling observations from the larger survey dataset with those from the smaller survey halves the standard errors of the education parameters. The percentage reduction in the standard errors through imposing population constraints is independent of the total survey sample size.
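The constraint idea can be sketched with simulated data: pooled survey observations contribute the likelihood, while register-based rates enter as equality constraints on the fitted model. The variable names, the three age groups, and the use of scipy's SLSQP solver are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical setup: outcome = first birth in the year; age group is known
# in the population register, education only in the surveys.
beta_true = np.array([-2.0, 0.6, 1.0, 0.5])   # intercept, age1, age2, education

# "Population" first-birth rates by age group, from a large simulated register.
N = 200_000
age_pop = rng.integers(0, 3, size=N)
edu_pop = rng.integers(0, 2, size=N)
X_pop = np.column_stack([np.ones(N), age_pop == 1, age_pop == 2, edu_pop]).astype(float)
y_pop = rng.binomial(1, sigmoid(X_pop @ beta_true))
pop_rate = np.array([y_pop[age_pop == g].mean() for g in range(3)])

# Pooled survey sample (much smaller than the register).
n = 2000
age = rng.integers(0, 3, size=n)
edu = rng.integers(0, 2, size=n)
X = np.column_stack([np.ones(n), age == 1, age == 2, edu]).astype(float)
y = rng.binomial(1, sigmoid(X @ beta_true))

def negll(b):
    z = X @ b
    return np.sum(np.logaddexp(0.0, z) - y * z)   # negative log-likelihood

# Equality constraints: model-implied mean rate in each age group must match
# the population rate from the register.
cons = [{"type": "eq",
         "fun": (lambda b, g=g: sigmoid(X[age == g] @ b).mean() - pop_rate[g])}
        for g in range(3)]

res = minimize(negll, np.zeros(4), method="SLSQP", constraints=cons)
beta_hat = res.x
```

The constrained fit pins the age-related parameters to the (effectively error-free) register rates, so survey sampling error is left concentrated in the education parameter, which is the mechanism behind the standard-error reductions reported above.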
87.
Summary. We estimate cause–effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.
88.
Using data from eight random-assignment studies and employing meta-analytic techniques, this article provides systematic evidence that welfare and work policies targeted at low-income parents have small adverse effects on some school outcomes among adolescents aged 12 to 18 years at follow-up. These adverse effects were observed mostly for school performance outcomes and occurred in programs that required mothers to work or participate in employment-related activities and those that encouraged mothers to work voluntarily. The most pronounced negative effects on school outcomes occurred for the group of adolescents who had a younger sibling, possibly because of the increased home and sibling care responsibilities they assumed as their mothers increased their employment.
89.
Boundary Spaces     
While shows like The X-Files and 24 have merged conspiracy theories with popular science (fictions), some video games have been pushing the narrative even further. Electronic Arts' Majestic game was released in July 2001 and quickly generated media buzz with its unusual multi-modal gameplay. Mixing phone calls, faxes, instant messaging, real and 'fake' websites, and email, the game provides a fascinating case of an attempt at new directions for gaming communities. Through story, mode of playing, and use of technology, Majestic highlights the uncertain status of knowledge, community and self in a digital age; at the same time, it allows examination of alternative ways of understanding games' role and purpose in the larger culture. Drawing on intricate storylines involving government conspiracies, techno-bio warfare, murder and global terror, players were asked to solve mysteries in the hope of preventing a devastating future of domination. Because the game drew in both actual and Majestic-owned and -designed websites, it constantly pushed players to the borders where simulation collides with 'factuality'. Given the wide variety of 'legitimate' conspiracy theory, alien encounter and alternative science web pages, users often could not distinguish when they were leaving the game's pages and venturing into 'real' World Wide Web sites. Its further use of AOL's instant messenger system, in which gamers spoke not only to bots but to other players, pushed users to evaluate constantly both the status of those they were talking to and the information being provided. Additionally, the game required players to occupy unfamiliar subject positions, ones in which agency was attenuated and which subsequently generated a multi-layered sense of unease among players.
This mix of authentic and staged information, in conjunction with technologically mediated roles, highlights what are often seen as phenomena endemic to the Internet itself: the destabilization of categories of knowing, relating, and being.
90.
This paper proposes a developmental framework for foster parents and outlines four distinct growth stages. Such a framework can be of value to program administrators who are required to assess foster parent development during the crucial matching process. To draw a distinction between each developmental stage, specific instrumental tasks and indicators are outlined.