51.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. Pardo and Zografos (2000) addressed this problem using a double sampling scheme and φ-divergence test statistics. A new difficulty arises when the null hypothesis is not simple, because estimators must then be supplied for the unknown parameters. In this paper minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by computing φ-divergences between probability density functions and replacing the parameters in the resulting expressions by their minimum φ-divergence estimators. Asymptotic distributions of the new test statistics are also derived, and the testing procedure is illustrated with an example.
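The abstract gives no formulas; as a minimal sketch (function names and data are hypothetical, not from the paper), the φ-divergence family of test statistics contains Pearson's classical chi-squared as the special case φ(x) = (x − 1)²/2 when the null is simple and there is no misclassification:

```python
def phi_divergence_stat(observed, p0, phi, phi_second_at_1):
    """T = (2n / phi''(1)) * sum_i p0_i * phi(phat_i / p0_i)."""
    n = sum(observed)
    phat = [o / n for o in observed]  # empirical cell proportions
    return (2 * n / phi_second_at_1) * sum(
        p * phi(q / p) for p, q in zip(p0, phat)
    )

# Pearson's statistic corresponds to phi(x) = (x - 1)^2 / 2, with phi''(1) = 1
pearson_phi = lambda x: (x - 1) ** 2 / 2

obs = [30, 50, 20]                # hypothetical multinomial counts
p0 = [0.25, 0.50, 0.25]           # simple null hypothesis
T = phi_divergence_stat(obs, p0, pearson_phi, 1.0)

# Agrees with the classical form sum_i (O_i - E_i)^2 / E_i
E = [p * sum(obs) for p in p0]
classical = sum((o - e) ** 2 / e for o, e in zip(obs, E))
```

Under the null, T is compared with a chi-squared distribution whose degrees of freedom depend on the number of cells and estimated parameters.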
52.
Longitudinal data often contain missing observations, and it is generally difficult to justify a particular missing-data mechanism, random or otherwise, since competing mechanisms can be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and the association parameters for longitudinal binary data with drop-outs. They specify the marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project, and report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
53.
54.
Summary.  We estimate cause–effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.
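The paper develops the generalized (double-logistic) model; as a much simpler hypothetical illustration of the same idea, in the additive structural mean model the causal parameter in a randomized trial with self-selected exposure reduces to a Wald-type ratio of arm contrasts, with randomization acting as the instrument (all names and data below are mine, not the authors'):

```python
def additive_smm(z, x, y):
    """Additive structural mean model estimator in a randomized trial:
    psi = (E[Y|Z=1] - E[Y|Z=0]) / (E[X|Z=1] - E[X|Z=0]),
    where Z is randomization, X received exposure, Y the outcome."""
    def arm_mean(values, arm):
        vals = [v for zi, v in zip(z, values) if zi == arm]
        return sum(vals) / len(vals)
    return (arm_mean(y, 1) - arm_mean(y, 0)) / (arm_mean(x, 1) - arm_mean(x, 0))

# Toy data with a true effect of 2 per unit of received exposure
z = [0, 0, 0, 1, 1, 1]                 # randomized arm
x = [0.0, 0.5, 0.0, 1.0, 1.5, 2.0]     # self-selected exposure
y = [2 * xi + 1 for xi in x]           # outcome (no noise, for illustration)
psi = additive_smm(z, x, y)
```

The generalized model of the paper replaces this linear contrast with non-linear link functions, which is what makes binary outcomes tractable.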
55.
Boundary Spaces     
While shows like The X-Files and 24 have merged conspiracy theories with popular science (fictions), some video games have pushed the narrative even further. Electronic Arts' Majestic game was released in July 2001 and quickly generated media buzz with its unusual multi-modal gameplay. Mixing phone calls, faxes, instant messaging, real and "fake" websites, and email, the game provides a fascinating case of an attempt at new directions for gaming communities. Through story, mode of playing, and use of technology, Majestic highlights the uncertain status of knowledge, community and self in a digital age; at the same time, it allows examination of alternative ways of understanding games' role and purpose in the larger culture. Drawing on intricate storylines involving government conspiracies, techno-bio warfare, murder and global terror, players were asked to solve mysteries in the hope of preventing a devastating future of domination. Because the game drew in both actual and Majestic-owned and -designed websites, it constantly pushed players to the border where simulation collides with "factuality". Given the wide variety of "legitimate" conspiracy-theory, alien-encounter and alternative-science web pages, users often could not tell when they were leaving the game's pages and venturing into "real" World Wide Web sites. Its further use of AOL's instant messenger system, in which gamers spoke not only to bots but to other players, pushed users to evaluate constantly both the status of those they were talking to and the information being provided. Additionally, the game required players to occupy unfamiliar subject positions in which agency was attenuated, which subsequently generated a multi-layered sense of unease among players. This mix of authentic and staged information, in conjunction with technologically mediated roles, highlights what are often seen as phenomena endemic to the Internet itself: the destabilization of categories of knowing, relating, and being.
56.
57.
Determining the size and demographic characteristics of substance abuse populations is extremely important for implementing public policies aimed at the control of substance abuse. Such information not only assists in the allocation of limited treatment resources by the state, but also in the monitoring of substance abuse trends over time and in the evaluation of innovative policy initiatives. In this study, we develop three composite measures of treatment need. We then use these measures to estimate treatment need for alcohol abuse and for controlled substance abuse within each of Florida's 67 counties. This study provides an important empirical component of community planning, quantifying and, to a limited degree, specifying the level of need for the substance abuse treatment of community residents. An additional benefit is the development of a cost-effective and unobtrusive methodology for determining empirically when levels of need are changing so that treatment levels can be adjusted accordingly. With proper use, policymakers can readily employ the methodology developed in this study in Florida and elsewhere to make better-informed decisions in the allocation of finite substance abuse treatment resources.
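The abstract does not specify how the three composite measures are constructed; one common construction for such indices (purely illustrative, not the authors' formula) averages standardized social-indicator values within each county:

```python
def z_scores(values):
    """Standardize a list of county-level indicator values."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / sd for v in values]

def composite_need(indicators):
    """indicators: dict mapping indicator name -> per-county values.
    Returns the mean z-score per county as a composite need index."""
    cols = [z_scores(vals) for vals in indicators.values()]
    return [sum(county) / len(cols) for county in zip(*cols)]

# Hypothetical three-county example with two proxy indicators
indicators = {
    "alcohol_related_arrests": [120, 300, 80],
    "treatment_admissions":    [40, 90, 30],
}
index = composite_need(indicators)   # county 2 scores highest on both proxies
```

Because each component is a z-score, the index centres on zero and counties can be ranked or tracked over time without any survey fieldwork, which matches the abstract's emphasis on an unobtrusive methodology.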
58.
59.
60.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
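The paper's model is richer than anything shown here; as a deliberately crude hypothetical sketch of the abstract's key point, that observed tumour size can reveal the unobserved onset time, assume exponential onset and deterministic linear growth, so size at a fixed sacrifice time back-dates onset exactly and the standard censored-data maximum-likelihood estimator of the incidence rate applies (all parameter values are mine):

```python
import random

random.seed(0)
lam_true, growth, t_sac, n = 0.5, 1.0, 10.0, 2000

onset_times = []
for _ in range(n):
    t_on = random.expovariate(lam_true)            # latent tumour onset time
    if t_on < t_sac:                               # tumour present at sacrifice
        size = growth * (t_sac - t_on)             # observed size at sacrifice
        onset_times.append(t_sac - size / growth)  # size back-dates the onset

d = len(onset_times)                               # tumour-bearing animals
# Exponential MLE with right-censoring at t_sac for tumour-free animals:
lam_hat = d / (sum(onset_times) + (n - d) * t_sac)
```

With stochastic rather than deterministic growth, onset is no longer recovered exactly and the full likelihood of the combined growth-and-competing-risks model is needed, which is what the paper develops.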

Copyright©北京勤云科技发展有限公司  京ICP备09084417号