71.
Summary.  When evaluating potential interventions for cancer prevention, it is necessary to compare benefits and harms. With new study designs, new statistical approaches may be needed to facilitate this comparison. A case in point arose in a proposed genetic substudy of a randomized trial of tamoxifen versus placebo in asymptomatic women who were at high risk for breast cancer. Although the randomized trial showed that tamoxifen substantially reduced the risk of breast cancer, the harms from tamoxifen were serious and some were life threatening. In hopes of finding a subset of women with inherited risk genes who derive greater benefits from tamoxifen, we proposed a nested case–control study to test some trial subjects for various genes and new statistical methods to extrapolate benefits and harms to the general population. An important design question is whether the study should target common low penetrance genes. Our calculations show that useful results are likely only with rare high penetrance genes.
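The design trade-off can be illustrated with a toy benefit–harm calculation. Every number below (baseline risk, relative risks, proportional benefit, harm rate) is a hypothetical assumption for illustration, not a figure from the trial or the paper:

```python
# Toy benefit-harm comparison for subgroup-targeted chemoprevention.
# All rates below are illustrative assumptions, not trial results.

def net_benefit_per_1000(relative_risk, base_risk=0.02,
                         risk_reduction=0.5, harm_rate=0.005):
    """Net serious events prevented per 1000 treated carriers.

    relative_risk  -- disease-risk multiplier for carriers (assumed)
    base_risk      -- baseline risk over the study period (assumed)
    risk_reduction -- proportional benefit of treatment (assumed)
    harm_rate      -- rate of serious treatment harms (assumed)
    """
    carrier_risk = base_risk * relative_risk
    prevented = carrier_risk * risk_reduction
    return 1000 * (prevented - harm_rate)

# Rare high-penetrance genotype: each carrier gains a lot from treatment.
rare_high = net_benefit_per_1000(relative_risk=10.0)
# Common low-penetrance genotype: per-carrier benefit barely beats the harms.
common_low = net_benefit_per_1000(relative_risk=1.3)
print(rare_high, common_low)
```

Under these assumed numbers the net benefit per treated carrier is an order of magnitude larger for the rare high-penetrance genotype, which is the qualitative pattern the abstract reports.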
72.
Longitudinal data often contain missing observations, and it is in general difficult to justify a particular missing-data mechanism, whether random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood‐based approach to estimating both the mean response and association parameters for longitudinal binary data with drop‐outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
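A rough sketch of likelihood-based estimation of marginal regression parameters from longitudinal binary data with drop-outs is below. It is deliberately simplified relative to the authors' method: working independence across visits, and drop-out unrelated to the outcome. The data and parameter values are simulated assumptions, not the Waterloo data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate longitudinal binary responses with monotone drop-out.
n, T = 500, 4
x = rng.normal(size=(n, 1)) * np.ones((n, T))   # subject-level covariate
beta_true = np.array([-0.5, 1.0])               # intercept, slope (assumed)
eta = beta_true[0] + beta_true[1] * x
y = (rng.random((n, T)) < 1 / (1 + np.exp(-eta))).astype(float)

# Once a subject drops out, all later visits are missing.
dropout = rng.random((n, T)) < 0.15
observed = np.cumprod(~dropout, axis=1).astype(bool)

def fit_logistic(x, y, mask, iters=50):
    """Maximize the marginal logistic log-likelihood over observed visits
    (a working-independence simplification of the authors' approach)."""
    X = np.column_stack([np.ones(mask.sum()), x[mask]])
    yy = y[mask]
    b = np.zeros(2)
    for _ in range(iters):                      # Newton-Raphson updates
        p = 1 / (1 + np.exp(-X @ b))
        grad = X.T @ (yy - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        b = b + np.linalg.solve(hess, grad)
    return b

beta_hat = fit_logistic(x, y, observed)
print(beta_hat)
```

With this simulated sample size the estimates land close to the assumed true values; the paper's contribution is the harder case where drop-out depends on the response, which this sketch does not attempt.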
73.
74.
Summary.  We estimate cause–effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.
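In its simplest special case — a constant additive effect and a linear outcome — a structural mean analysis reduces to using randomization as an instrument for received exposure. The sketch below shows only that Wald-type special case on simulated data with assumed parameter values; it is not the paper's double-logistic model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a randomized trial with self-selected non-compliance.
n = 20000
z = rng.integers(0, 2, n)              # randomized arm (instrument)
u = rng.normal(size=n)                 # unmeasured confounder
x = ((z + u) > 0.5).astype(float)      # received exposure, driven by u and z
effect_true = 2.0                      # assumed additive causal effect
y = 1.0 + effect_true * x + u + rng.normal(size=n)

# Naive exposed-vs-unexposed contrast is confounded by u...
naive = y[x == 1].mean() - y[x == 0].mean()

# ...while the additive structural-mean (Wald-type) estimator contrasts
# arms, exploiting that randomization is independent of u.
smm = ((y[z == 1].mean() - y[z == 0].mean())
       / (x[z == 1].mean() - x[z == 0].mean()))

print(naive, smm)
```

The naive contrast is badly biased upward here, while the instrumented contrast recovers the assumed effect; the generalized model in the paper extends this logic to binary outcomes and non-linear effects.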
75.
Boundary Spaces     
While shows like The X-Files and 24 have merged conspiracy theories with popular science (fictions), some video games have been pushing the narrative even further. Electronic Arts' Majestic game was released in July 2001 and quickly generated media buzz with its unusual multi-modal gameplay. Mixing phone calls, faxes, instant messaging, real and "fake" websites, and email, the game provides a fascinating case of an attempt at new directions for gaming communities. Through story, mode of playing, and use of technology, Majestic highlights the uncertain status of knowledge, community and self in a digital age; at the same time, it allows examination of alternative ways of understanding games' role and purpose in the larger culture. The game drew on intricate storylines involving government conspiracies, techno-bio warfare, murder and global terror, asking players to solve mysteries in the hopes of preventing a devastating future of domination. Because the game drew in both actual and Majestic-owned and -designed websites, it constantly pushed players right to the borders where simulation collides with "factuality". Given the wide variety of "legitimate" conspiracy theory, alien encounter and alternative science web pages, users often could not distinguish when they were leaving the game's pages and venturing into "real" World Wide Web sites. Its further use of AOL's instant messenger system, in which gamers spoke not only to bots but to other players, pushed users to evaluate constantly both the status of those they were talking to and the information being provided. Additionally, the game required players to occupy unfamiliar subject positions, ones where agency was attenuated, which subsequently generated a multi-layered sense of unease among players.
This mix of authentic and staged information, in conjunction with technologically mediated roles, highlights what are often seen as phenomena endemic to the Internet itself; that is, the destabilization of categories of knowing, relating, and being.
76.
77.
Determining the size and demographic characteristics of substance abuse populations is extremely important for implementing public policies aimed at the control of substance abuse. Such information not only assists in the allocation of limited treatment resources by the state, but also in the monitoring of substance abuse trends over time and in the evaluation of innovative policy initiatives. In this study, we develop three composite measures of treatment need. We then use these measures to estimate treatment need for alcohol abuse and for controlled substance abuse within each of Florida's 67 counties. This study provides an important empirical component of community planning, quantifying and, to a limited degree, specifying the level of need for the substance abuse treatment of community residents. An additional benefit is the development of a cost-effective and unobtrusive methodology for determining empirically when levels of need are changing so that treatment levels can be adjusted accordingly. With proper use, policymakers can readily employ the methodology developed in this study in Florida and elsewhere to make better-informed decisions in the allocation of finite substance abuse treatment resources.
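One simple way to form a composite need measure — standardize each county-level indicator and average the z-scores — can be sketched as follows. The indicator names and values are hypothetical placeholders, not the Florida data or the paper's actual three measures:

```python
import numpy as np

# Hypothetical county-level indicators of substance-abuse treatment need.
indicators = {
    "arrests_per_1000":    np.array([4.1, 9.8, 2.3, 6.7]),
    "overdoses_per_1000":  np.array([0.3, 1.1, 0.2, 0.6]),
    "admissions_per_1000": np.array([2.0, 5.5, 1.1, 3.9]),
}

def composite_need(indicators):
    """Equal-weight average of z-scores across indicators."""
    scores = []
    for values in indicators.values():
        z = (values - values.mean()) / values.std()  # standardize
        scores.append(z)
    return np.mean(scores, axis=0)

need = composite_need(indicators)
ranking = np.argsort(-need)   # counties ordered from highest to lowest need
print(need, ranking)
```

Because each indicator is centred before averaging, the composite has mean zero across counties, so it ranks relative need rather than measuring an absolute level — a design choice that suits allocation decisions like those described above.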
78.
79.
80.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
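The core idea — a growth law lets observed tumour size reveal the unobserved onset time, so the incidence rate can be estimated by maximum likelihood — can be sketched in a toy version with exponential onset, deterministic linear growth, and a single common sacrifice time. All rates and the growth law are illustrative assumptions, far simpler than the paper's stochastic growth model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy occult-tumour experiment (all parameters assumed for illustration).
n = 2000
rate_true = 0.5                        # tumour incidence rate (assumed)
onset = rng.exponential(1 / rate_true, n)
sacrifice = np.full(n, 4.0)            # common sacrifice time
growth = 1.0                           # deterministic growth rate (assumed)

has_tumour = onset < sacrifice
size = np.where(has_tumour, growth * (sacrifice - onset), 0.0)

# Invert the growth law to recover onset times for tumour-bearing animals;
# animals without tumours are right-censored at sacrifice.  The exponential
# MLE is then (# tumours) / (total onset-time exposure).
onset_hat = sacrifice - size / growth
exposure = np.where(has_tumour, onset_hat, sacrifice)
rate_hat = has_tumour.sum() / exposure.sum()

print(rate_hat)
```

Without the size information, only the binary tumour-presence indicator at sacrifice would be available, and estimating the incidence rate would require the extra lethality or sacrifice-scheme assumptions the abstract mentions; here size makes the onset times directly recoverable.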
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号