Search results: 2,066 items found (search time: 647 ms). Items 831–840 are shown below.
831.
Conventional wisdom holds that high trade union bargaining strength and a system of coordinated wage bargaining reduce the attractiveness of an economy as a location for foreign direct investment, although there is limited evidence for this. The paper uses panel data for 19 OECD economies to examine the relationship between trade union bargaining strength, bargaining coordination, and a range of incentives for inward foreign direct investment. It finds a strong negative effect of trade union density on inward foreign direct investment, which depends on the degree of wage bargaining coordination. A high degree of coordination weakens the deterrent effect of high union density, which is consistent with the notion that under certain circumstances a coordinated increase in wages can raise the profits of multinationals by hurting domestic firms.
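A minimal sketch of the kind of panel specification this describes, in generic notation (the variable definitions, controls, and estimator are the paper's and are not reproduced here):

\[
FDI_{it} = \beta_1\,\mathit{Density}_{it} + \beta_2\,\mathit{Coord}_{it} + \beta_3\,(\mathit{Density}_{it}\times \mathit{Coord}_{it}) + \gamma^{\top} X_{it} + \alpha_i + \lambda_t + \varepsilon_{it},
\]

where the reported pattern corresponds to \(\beta_1 < 0\) (union density deters inward FDI) together with \(\beta_3 > 0\) (coordination weakens the deterrent).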
832.
This study examines the relation between leadership and team cohesiveness in different societal cultures. We expect direct effects of societal culture on leadership and team cohesiveness, as well as a moderating effect of culture on the relationship between leadership and cohesiveness. Data were collected from 29,868 managers and 138,270 corresponding team members in 80 countries. Multilevel analysis was used to test the hypotheses, relating societal individualism–collectivism (IC) to directive and supportive leadership and to team cohesiveness. In individualistic societies, managers use less directive and less supportive behavior than in collectivistic societies. Team cohesiveness is not directly related to IC. Directive and supportive leadership are negatively and positively related to team cohesiveness, respectively, and these relations are stronger in individualistic societies. Implications for managerial education and practice are discussed.
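One way to write the multilevel model this implies, with team-level slopes varying across countries (notation and random-effects structure assumed here, not taken from the study):

\[
\mathit{Cohesion}_{ij} = \beta_{0j} + \beta_{1j}\,\mathit{Directive}_{ij} + \beta_{2j}\,\mathit{Supportive}_{ij} + \varepsilon_{ij},
\qquad
\beta_{kj} = \gamma_{k0} + \gamma_{k1}\,\mathit{IC}_{j} + u_{kj},
\]

where \(i\) indexes teams and \(j\) countries; the cross-level coefficients \(\gamma_{11}\) and \(\gamma_{21}\) carry the moderation of the leadership–cohesiveness relations by societal IC.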
833.
Since the seminal work of Ford and Fulkerson in the 1950s, network flow theory has been one of the most important and most active areas of research in combinatorial optimization. Coming from the classical maximum flow problem, we introduce and study an apparently basic but new flow problem that features a couple of interesting peculiarities. We derive several results on the complexity and approximability of the new problem. Along the way we also discover two closely related basic covering and packing problems that are of independent interest. Starting from an LP formulation of the maximum s-t-flow problem in path variables, we introduce unit upper bounds on the amount of flow sent along each path. The resulting (fractional) flow problem is NP-hard; its integral version is strongly NP-hard already on very simple classes of graphs. For the fractional problem we present an FPTAS that is based on solving the k shortest paths problem iteratively. We show that the integral problem is hard to approximate and give an interesting O(log m)-approximation algorithm, where m is the number of arcs in the considered graph. For the multicommodity version of the problem there is an O(√m)-approximation algorithm. We argue that this performance guarantee is best possible unless P = NP.
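A sketch of the path-variable LP being described, in standard notation (arc capacities \(u_a\) and the set \(\mathcal{P}\) of s-t-paths are assumed); the unit bounds on the path variables are the new feature:

\[
\max \sum_{P\in\mathcal{P}} x_P
\quad\text{s.t.}\quad
\sum_{P\in\mathcal{P}:\, a\in P} x_P \le u_a \ \ \forall a\in A,
\qquad
0 \le x_P \le 1 \ \ \forall P\in\mathcal{P}.
\]

Dropping the upper bounds \(x_P \le 1\) recovers the classical path formulation of the maximum s-t-flow problem.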
834.
This paper considers the concepts of leverage and influence in the linear regression model with correlated errors when the error covariance structure is completely specified. Generalizations of the usual measures are given, and extensions of residuals also arise naturally. The theory is illustrated using two examples.
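A natural way such generalizations can be written when the error covariance is fully specified (a sketch; the paper's exact measures may differ): with \(y = X\beta + \varepsilon\), \(\operatorname{Var}(\varepsilon) = \sigma^2\Omega\) and \(\Omega\) known, the generalized least squares fit gives

\[
H = X\left(X^{\top}\Omega^{-1}X\right)^{-1}X^{\top}\Omega^{-1},
\qquad
\hat{y} = Hy,
\qquad
e = (I - H)\,y,
\]

so the diagonal entries \(h_{ii}\) play the role of leverages and \(e\) extends the ordinary residual vector.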
835.
836.
We propose a difference-in-differences approach for disentangling a total treatment effect within specific subpopulations into a direct effect and an indirect effect operating through a binary mediating variable. Random treatment assignment, along with specific common trend and effect homogeneity assumptions, identifies the direct effects on the always takers and never takers, whose mediator is not affected by the treatment, as well as the direct and indirect effects on the compliers, whose mediator reacts to the treatment. In our empirical application, we analyze the impact of the Vietnam draft lottery on political preferences. The results suggest that a high draft risk due to the draft lottery outcome leads to an increase in mild preferences for the Republican Party, but has no effect on strong preferences for either party or on specific political attitudes. The increase in Republican support is mostly driven by the direct effect, which does not operate through the mediator, military service.
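A schematic potential-outcomes decomposition of the kind described, with treatment \(D\in\{0,1\}\), binary mediator \(M(d)\), and outcome \(Y(d,m)\) (notation assumed here for illustration):

\[
\underbrace{E\!\left[Y(1, M(1)) - Y(0, M(0))\right]}_{\text{total effect}}
=
\underbrace{E\!\left[Y(1, M(1)) - Y(1, M(0))\right]}_{\text{indirect effect}}
+
\underbrace{E\!\left[Y(1, M(0)) - Y(0, M(0))\right]}_{\text{direct effect}}.
\]

For always takers and never takers \(M(1) = M(0)\), so their total effect is purely direct; only for the compliers can the indirect component be nonzero.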
837.
The riskiness of two investments can be compared by looking at the ratio of the respective Values-at-Risk (VaRs) or the ratio of volatilities. The exact distribution of the ratio of two volatilities calculated from normal observations and an asymptotic confidence interval for the ratio of two VaRs are derived. A simulation study shows good coverage rates for ratios of VaRs calculated from observations from distributions commonly used to model logarithmic returns.
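An illustrative calculation under the normal model (sign convention and notation assumed; the paper's exact definitions and the asymptotic interval are not reproduced here): for logarithmic returns \(R_k \sim N(\mu_k, \sigma_k^2)\), the level-\(\alpha\) Value-at-Risk is \(\mathrm{VaR}_\alpha^{(k)} = -(\mu_k + \sigma_k z_\alpha)\), with \(z_\alpha\) the \(\alpha\)-quantile of the standard normal, so

\[
\frac{\mathrm{VaR}_\alpha^{(1)}}{\mathrm{VaR}_\alpha^{(2)}}
= \frac{\mu_1 + \sigma_1 z_\alpha}{\mu_2 + \sigma_2 z_\alpha},
\]

which reduces to the volatility ratio \(\sigma_1/\sigma_2\) when \(\mu_1 = \mu_2 = 0\).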
838.
In a recent article, Cardoso de Oliveira and Ferreira proposed a multivariate extension of the univariate chi-squared normality test, using a known result for the distribution of quadratic forms in normal variables. In this article, we propose a family of power divergence type test statistics for testing the hypothesis of multinormality. The proposed family includes as a particular case the test proposed by Cardoso de Oliveira and Ferreira. We assess the performance of the new family of test statistics by Monte Carlo simulation; in this context, the type I error rates and the power of the tests are studied for important family members. Moreover, the performance of significant members of the proposed family is compared with that of a multivariate normality test proposed recently by Batsidis and Zografos. Finally, two well-known data sets are used to illustrate the method developed in this article as well as the specialized test of multivariate normality proposed by Batsidis and Zografos.
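For orientation, the standard power divergence (Cressie–Read) form on which such a family is typically built, written here for \(k\) cells with observed counts \(O_i\) and expected counts \(E_i\) (the cell construction in the multivariate setting follows the quadratic-form result and is not shown):

\[
T_\lambda = \frac{2}{\lambda(\lambda+1)} \sum_{i=1}^{k} O_i\!\left[\left(\frac{O_i}{E_i}\right)^{\lambda} - 1\right],
\]

with \(\lambda = 1\) giving Pearson's chi-squared statistic, the case corresponding to the chi-squared-type test being extended.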
839.
Age-conditional probabilities of developing a first cancer represent the transition from being cancer-free to developing a first cancer. Natural inputs into their calculation are rates of first cancer per person-years alive and cancer-free. However, these rates are not readily available, because they require information on the cancer-free population. Instead, rates of first cancer per person-years alive, calculated with the mid-year populations available from census data as denominator, can easily be obtained from cancer registry data. Methods have been developed to estimate age-conditional probabilities of developing cancer based on these easily available rates per person-years alive, which do not directly account for the cancer-free population (DevCan: Probability of Developing or Dying of Cancer Software, Version 6.0, 2005). In the last few years, models (Merrill et al., Int J Epidemiol 29(2):197–207, 2000; Mariotto et al., SEER Cancer Statistics Review, 2002; Clegg et al., Biometrics 58(3):684–688, 2002; Gigli et al., Stat Methods Med Res 15(3):235–253, 2006) and software (ComPrev: Complete Prevalence Software, Version 1.0, 2005) have been developed that allow estimation of cancer prevalence. Estimates of population-based cancer prevalence allow for the estimation of the cancer-free population and, consequently, of rates per person-years alive and cancer-free. In this paper we present a method that directly estimates the age-conditional probabilities of developing a first cancer using rates per person-years alive and cancer-free obtained from prevalence estimates. We explore conditions under which the previous and the new estimators give similar or different values, using real data from the Surveillance, Epidemiology and End Results (SEER) program.
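A schematic version of the conversion implied here (notation assumed; the paper's estimator handles age intervals and other details more carefully): if \(r(a)\) is the rate of first cancer per person-year alive at age \(a\) and \(\pi(a)\) is the estimated prevalence proportion at age \(a\), then the rate per person-year alive and cancer-free is approximately

\[
r^{*}(a) = \frac{r(a)}{1 - \pi(a)},
\]

since removing prevalent cases from the mid-year population shrinks the denominator by the factor \(1 - \pi(a)\).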
840.
Missing variances in summary-level data can be a problem when an inverse-variance weighted meta-analysis is undertaken. A wide range of approaches for dealing with this issue exist, such as excluding data without a variance measure, using a function of sample size as a weight, and imputing the missing standard errors/deviations. A non-linear mixed effects modelling approach was taken to describe the time-course of standard deviations across 14 studies. The model was then used to predict the missing standard deviations, thus enabling a precision-weighted, model-based meta-analysis of a mean pain endpoint over time. Maximum likelihood and Bayesian approaches were implemented, with example code to illustrate how this imputation can be carried out and to compare the output from each method. The resulting imputations were nearly identical for the two approaches. This modelling approach acknowledges the fact that standard deviations are not necessarily constant over time and can differ between treatments and across studies in a predictable way.
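One plausible form for such a model and the weights it yields (a sketch; the actual time-course function, covariates, and random-effects structure in the paper may differ): with \(\mathrm{SD}_{ij}\) the reported standard deviation at time \(t_{ij}\) in study \(i\),

\[
\log \mathrm{SD}_{ij} = f(t_{ij};\theta) + \eta_i + \varepsilon_{ij},
\qquad \eta_i \sim N(0,\omega^2),\ \ \varepsilon_{ij}\sim N(0,\tau^2),
\]

and a missing standard deviation is imputed as \(\widehat{\mathrm{SD}}_{ij} = \exp\{f(t_{ij};\hat\theta) + \hat\eta_i\}\), giving inverse-variance weights \(w_{ij} = n_{ij}/\widehat{\mathrm{SD}}_{ij}^{2}\) for the meta-analysis of the mean endpoint.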