21.
Against the background of the resource-sharing era, cross-regional medical treatment can effectively ease the conflict between patients' growing demand for care and the shortage of medical resources. Taking a medical alliance as the research object, this paper minimizes diagnosis delay by allowing patients to seek care across regions, under the premise that key medical resources are shared, so as to meet patients' demand for care. The study jointly considers patients' cross-regional travel time and the equipment changeover time that depends on each patient's diagnosis type; with the objective of minimizing total treatment delay, it assigns patients to hospitals and optimizes the order of their visits and examinations. For this problem, the paper is the first to propose the patient-reassignment-driven heuristics EDD-ReAss1 and EDD-ReAss2, built on the earliest due date (EDD) rule and combined with local search to further improve the quality of the schedule and shorten patients' waiting time for diagnosis and examination. Experimental results show that the new heuristics EDD-ReAss1 and EDD-ReAss2 significantly outperform the EDD, SPT and LPT dispatching rules, and that the Swap local search performs best within a short computation time.
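To make the EDD-based scheduling idea concrete, here is a minimal, hypothetical Python sketch: patients are sequenced on a single shared diagnostic facility by earliest due date, adding each patient's cross-regional travel time and a diagnosis-type-dependent changeover time. The data, the single-facility setting, the unit examination time and the delay definition are illustrative assumptions; the paper's EDD-ReAss1/EDD-ReAss2 heuristics additionally reassign patients across hospitals and apply local search.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    due: float     # latest acceptable diagnosis time (due date)
    travel: float  # cross-regional travel time to the shared facility
    dtype: str     # diagnosis type, drives the equipment changeover time

# Hypothetical changeover times between consecutive diagnosis types
CHANGEOVER = {("CT", "MRI"): 2.0, ("MRI", "CT"): 2.0}

def edd_schedule(patients):
    """Sequence patients by the EDD rule and accumulate total tardiness."""
    order = sorted(patients, key=lambda p: p.due)
    clock, prev_type, total_tardiness = 0.0, None, 0.0
    for p in order:
        start = max(clock, p.travel)                        # cannot start before arrival
        start += CHANGEOVER.get((prev_type, p.dtype), 0.0)  # type-dependent setup
        finish = start + 1.0                                 # unit examination time (assumed)
        total_tardiness += max(0.0, finish - p.due)
        clock, prev_type = finish, p.dtype
    return order, total_tardiness

patients = [Patient("A", due=3.0, travel=1.0, dtype="CT"),
            Patient("B", due=2.0, travel=0.5, dtype="MRI"),
            Patient("C", due=5.0, travel=2.0, dtype="CT")]
order, delay = edd_schedule(patients)
print([p.name for p in order], delay)
```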
22.
We develop a new approach to assessing the value of home production time based on willingness to spend time and money to obtain environmental improvements. When people's choice is constrained by time as well as money, measures of willingness to pay can be defined with respect to either numeraire. In a model that explicitly allows for multiple shadow values of time, we show that the willingness-to-pay-time and willingness-to-pay-money measures are linked through the value of saving time. With survey information on people's willingness to spend additional time on housework activities, as well as pay money, to obtain environmental quality improvements, joint estimation within a utility-consistent structure produces estimates of both willingness to pay and the value of saving housework time. From the value of saving housework time, the marginal value of housework time can be readily identified. When applied to Korean households’ valuation of water quality improvements in the Man Kyoung River, we find that the value of housework time is 70–80% of the market wage.
Douglas M. Larson
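As a stylized illustration of the link between the two numeraires (this is shorthand introduced here, not the paper's own derivation, and all symbols below are assumptions): let $w^m$ be willingness to pay money and $w^t$ willingness to spend time for a small quality improvement $\Delta q$, let $\lambda$ and $\mu$ be the multipliers on the money and time constraints, $U_q$ the marginal utility of quality, and $v=\mu/\lambda$ the marginal value of saving time. To first order,

$$ w^m \;\approx\; \frac{U_q}{\lambda}\,\Delta q, \qquad w^t \;\approx\; \frac{U_q}{\mu}\,\Delta q, \qquad\Rightarrow\qquad w^m \;\approx\; v\, w^t. $$

Joint estimation of $w^m$ and $w^t$ therefore identifies $v$, the value of saving housework time, from which the marginal value of housework time follows, as in the 70–80%-of-wage figure reported above.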
23.
In this paper we consider semiparametric inference methods for the time scale parameters in general time scale models (Oakes, 1995; Duchesne and Lawless, 2000). We use the results of Robins and Tsiatis (1992) and Lin and Ying (1995) to derive a rank-based estimator that is more efficient and robust than the traditional minimum coefficient of variation (min CV) estimator of Kordonsky and Gerstbakh (1993) for many underlying models. Moreover, our estimator can readily handle censored samples, which is not the case with the min CV method.
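For orientation, here is a minimal sketch of the min CV idea for a linear (collapsible) time scale, with invented data: two usage measures recorded at failure are combined as t = a*age + (1 - a)*mileage, and the weight a is chosen to minimize the coefficient of variation of the combined scale across failures. This is only the baseline approach the paper improves on; it does not handle censoring, which the rank-based estimator does.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical failure data: two usage scales recorded at failure for each unit,
# e.g. age (years) and mileage (10,000 km). The numbers are made up.
age = np.array([2.1, 3.5, 1.8, 4.2, 2.9, 3.1])
mileage = np.array([3.0, 2.2, 3.8, 1.5, 2.6, 2.0])

def cv(a):
    """Coefficient of variation of the combined scale t = a*age + (1-a)*mileage."""
    t = a * age + (1 - a) * mileage
    return t.std(ddof=1) / t.mean()

res = minimize_scalar(cv, bounds=(0.0, 1.0), method="bounded")
print("min CV weight on age:", res.x, " CV:", res.fun)
```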
24.
In this paper, we consider the problem of enumerating all maximal motifs in an input string for the class of repeated motifs with wild cards. A maximal motif is a representative motif that is not properly contained in any larger motif with the same location list. Although the enumeration problem for maximal motifs with wild cards has been studied in Parida et al. (2001), Pisanti et al. (2003) and Pelfrêne et al. (2003), its output-polynomial-time computability has remained open. The main result of this paper is a polynomial-space, polynomial-delay algorithm for enumerating the maximal repeated motifs with wild cards. The algorithm enumerates all maximal motifs in an input string of length n in O(n^3) time per motif with O(n) space, in particular O(n^3) delay. The key to the algorithm is a depth-first search over a tree-shaped search route spanning all maximal motifs, based on a technique called prefix-preserving closure extension. We also show an exponential lower bound and a succinctness result on the number of maximal motifs, which indicate the limits of a straightforward approach. Computational experiments show that the algorithm is applicable in practice to huge string data such as genome data, and incurs little additional computational cost compared to usual frequent motif mining algorithms. This work was done during Hiroki Arimura's visit to LIRIS, University Claude-Bernard Lyon 1, France.
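The notions of a location list and of maximality can be illustrated with a naive Python sketch (this is not the paper's polynomial-delay algorithm; the string, alphabet and function names are invented for illustration):

```python
def location_list(motif, s):
    """Positions where `motif` (letters plus '.' wild cards) occurs in `s`."""
    m = len(motif)
    return [i for i in range(len(s) - m + 1)
            if all(c == '.' or c == s[i + j] for j, c in enumerate(motif))]

def has_same_support_right_extension(motif, s, alphabet):
    """If every occurrence of `motif` is followed by one fixed letter, the
    extended motif has the identical location list, so `motif` is not maximal."""
    locs = location_list(motif, s)
    return bool(locs) and any(location_list(motif + c, s) == locs for c in alphabet)

s = "abcabcabd"
print(location_list("ab.", s))                                  # [0, 3, 6]
print(has_same_support_right_extension("ab", s, set("abcd")))   # False: 'ab' is followed by both 'c' and 'd'
```

A full maximality check would also consider one-character extensions to the left and specializations of individual wild cards; the paper's contribution is enumerating all maximal motifs with polynomial delay rather than by such brute force.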
25.
A two-step estimation approach is proposed for the fixed-effect parameters, random effects and their variance σ² of a Poisson mixed model. In the first step, it is proposed to construct a small-σ²-based approximate likelihood function of the data and utilize this function to estimate the fixed-effect parameters and σ². In the second step, the random effects are estimated by minimizing their posterior mean squared error. Methods of Waclawiw and Liang (1993) based on so-called Stein-type estimating functions and of Breslow and Clayton (1993) based on penalized quasilikelihood are compared with the proposed likelihood method. The results of a simulation study on the performance of all three approaches are reported.
26.
Failure Inference From a Marker Process Based on a Bivariate Wiener Model
Many models have been proposed that relate failure times and stochastic time-varying covariates. In some of these models, failure occurs when a particular observable marker crosses a threshold level. We are interested in the more difficult, and often more realistic, situation where failure is not related deterministically to an observable marker. In this case, joint models for marker evolution and failure tend to lead to complicated calculations for characteristics such as the marginal distribution of failure time or the joint distribution of failure time and marker value at failure. This paper presents a model based on a bivariate Wiener process in which one component represents the marker and the second, which is latent (unobservable), determines the failure time. In particular, failure occurs when the latent component crosses a threshold level. The model yields reasonably simple expressions for the characteristics mentioned above and is easy to fit to commonly occurring data that involve the marker value at the censoring time for surviving cases and the marker value and failure time for failing cases. Parametric and predictive inference are discussed, as well as model checking. An extension of the model permits the construction of a composite marker from several candidate markers that may be available. The methodology is demonstrated by a simulated example and a case application.
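A small simulation conveys the structure of such a model. The drift vector, increment covariance, threshold, censoring time and time step below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(mu, Sigma, threshold, t_censor, dt=0.01):
    """One subject under a bivariate Wiener model: component 0 is the observable
    marker, component 1 the latent process; failure occurs when the latent
    component first crosses `threshold`.
    Returns (failed, time, marker value at failure or censoring)."""
    L = np.linalg.cholesky(Sigma)
    x, t = np.zeros(2), 0.0
    while t < t_censor:
        x += mu * dt + np.sqrt(dt) * (L @ rng.standard_normal(2))  # Euler step
        t += dt
        if x[1] >= threshold:
            return True, t, x[0]     # failure time and marker value at failure
    return False, t_censor, x[0]     # censored: marker value at censoring time

mu = np.array([0.5, 1.0])                    # drifts of marker and latent process
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])   # covariance of the increments
print([simulate_subject(mu, Sigma, threshold=3.0, t_censor=10.0) for _ in range(3)])
```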
27.
This paper uses matched employee–employer LIAB data to provide panel estimates of the structure of labor demand in western Germany, 1993–2002, distinguishing between highly skilled, skilled, and unskilled labor and between the manufacturing and service sectors. Reflecting current preoccupations, our demand analysis seeks also to accommodate the impact of technology and trade in addition to wages. The bottom-line interests are to provide elasticities of the demand for unskilled (and other) labor that should assist in short-run policy design and to identify the extent of skill biases or otherwise in trade and technology.
John T. Addison
28.
Stratified randomization based on the baseline value of the primary analysis variable is common in clinical trial design. We illustrate from a theoretical viewpoint the advantage of such a stratified randomization to achieve balance of the baseline covariate. We also conclude that the estimator for the treatment effect is consistent when including both the continuous baseline covariate and the stratification factor derived from the baseline covariate. In addition, the analysis of covariance model including both the continuous covariate and the stratification factor is asymptotically no less efficient than including either only the continuous baseline value or only the stratification factor. We recommend that the continuous baseline covariate should generally be included in the analysis model. The corresponding stratification factor may also be included in the analysis model if one is not confident that the relationship between the baseline covariate and the response variable is linear. In spite of the above recommendation, one should always carefully examine relevant historical data to pre-specify the most appropriate analysis model for a prospective study.
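A toy simulation along these lines (the data-generating model and all numbers are assumptions made here for illustration) shows the recommended analysis: randomization is stratified on a factor derived from the baseline value, and the analysis model contains the treatment indicator, the continuous baseline covariate and the stratification factor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
baseline = rng.normal(size=n)
stratum = (baseline > 0).astype(int)        # stratification factor derived from baseline

# Stratified randomization: balanced treatment assignment within each stratum
treat = np.empty(n, dtype=int)
for s in (0, 1):
    idx = np.flatnonzero(stratum == s)
    block = np.zeros(len(idx), dtype=int)
    block[: len(idx) // 2] = 1
    treat[idx] = rng.permutation(block)

true_effect = 0.5
y = true_effect * treat + 0.8 * baseline + rng.normal(size=n)

# ANCOVA with both the continuous baseline covariate and the stratification factor
X = np.column_stack([np.ones(n), treat, baseline, stratum])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated treatment effect:", beta[1])   # close to 0.5
```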
29.
Michael Young and Gerard Lemos’ (1997) text The communities we have lost and can regain has had a substantial influence on New Labour's communitarian thinking. This paper critically examines a specific aspect of New Labour's communitarian agenda, namely, its use of public housing policy to rebuild communities in order to combat social exclusion on so-called ‘sink estates’. The paper is presented in four main parts. The first part of the paper discusses how, why and to what extent ‘community’ has been lost, with particular reference to public housing estates. The second part examines why community rebuilding is now seen as the solution to the problems caused by the loss of community on public housing estates and, to this end, pays particular attention to the communitarian values that underpin New Labour's third way. The third part of the paper examines some empirical studies of community in order to highlight the key characteristics of ‘community’ and thereby develop a critical understanding of what New Labour are currently seeking to achieve. The fourth part of the paper juxtaposes this discussion of ‘community’ with a discussion of emerging socio-economic trends that have been identified in the literature on late modernity and globalization. By bringing emerging socio-economic trends such as residential mobility into the community debate, the paper concludes by criticizing the policy of community building as ‘good for you’. Our key point is that community building restricts the residential mobility of poorer households and exacerbates (rather than combats) their social exclusion because a key indicator of social inclusion is their ability to take advantage of the social, cultural and economic opportunities that so often exist ‘elsewhere’.
30.
Typical welfare and inequality measures are required to be Lorenz consistent, which guarantees that inequality decreases and welfare increases as a result of a progressive transfer. We explore the implications for welfare and inequality measurement of substituting the weaker absolute differentials and deprivation quasi-orderings for the Lorenz quasi-ordering. Restricting attention to distributions of equal means, we show that the utilitarian model - the so-called expected utility model in the theory of risk - does not permit one to make a distinction between the views embedded in the differentials, deprivation and Lorenz quasi-orderings. In contrast, it is possible within the dual model of M. Yaari (Econometrica 55 (1987), 99–115) to derive the restrictions to be placed on the weighting function which guarantee that the corresponding welfare orderings are consistent with the differentials and deprivation quasi-orderings respectively. Finally we drop the equal mean condition and indicate the implications of our approach for the absolute ethical inequality indices.
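To fix ideas about Lorenz consistency, here is a small numerical check (the incomes are invented): a mean-preserving progressive transfer from the richest to the poorest person lowers the Gini coefficient, so any Lorenz-consistent inequality index must register a fall as well.

```python
import numpy as np

def gini(x):
    """Gini coefficient from the Lorenz-curve formulation (x sorted ascending)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

income = np.array([10., 20., 30., 40., 100.])
transferred = income.copy()
transferred[-1] -= 10; transferred[0] += 10     # progressive (rich-to-poor) transfer
print(gini(income), gini(transferred))           # 0.40 -> 0.32: inequality falls, mean unchanged
```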