991.
Floods are a natural hazard evolving in space and time according to meteorological and river basin dynamics, so that a single flood event can affect different regions over the event duration. This physical mechanism introduces spatio-temporal relationships between flood records and losses at different locations over a given time window that should be taken into account for an effective assessment of the collective flood risk. However, since extreme floods are rare events, the limited number of historical records usually prevents a reliable frequency analysis. To overcome this limitation, we move from the analysis of extreme events to the modeling of continuous streamflow records, preserving the spatio-temporal correlation structure of the entire process and making more efficient use of the information provided by continuous flow records. The approach is based on the dynamic copula framework, which splits the modeling of spatio-temporal properties by coupling suitable time series models, accounting for temporal dynamics, with multivariate distributions describing spatial dependence. The model is applied to 490 streamflow sequences recorded across 10 of the largest river basins in central and western Europe (Danube, Rhine, Elbe, Oder, Weser, Meuse, Rhone, Seine, Loire, and Garonne). Using available proxy data to quantify local flood exposure and vulnerability, we show that the temporal dependence plays a key role in reproducing interannual persistence, and thus the magnitude and frequency of annual proxy flood losses aggregated at a basin-wide scale, while copulas preserve the spatial dependence of losses at weekly and annual time scales.
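The dynamic copula construction described above separates temporal dynamics from spatial dependence. As a minimal, hypothetical sketch of only the spatial half of that idea (not the paper's model), the Python snippet below draws weekly flows at two sites from a Gaussian copula with gamma margins; the correlation value, the gamma parameters, and the 52-week horizon are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed spatial dependence between two gauging sites (illustrative value).
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals define the Gaussian copula.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=52)

# Step 2: map to uniforms via the normal CDF; the rank dependence is preserved.
u = stats.norm.cdf(z)

# Step 3: apply site-specific marginal distributions (assumed gamma margins, m^3/s).
flow_site_a = stats.gamma.ppf(u[:, 0], a=2.0, scale=50.0)
flow_site_b = stats.gamma.ppf(u[:, 1], a=3.0, scale=30.0)

# The simulated flows inherit the copula's rank correlation.
print(stats.spearmanr(flow_site_a, flow_site_b)[0])
```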
992.
993.
The ongoing digital transformation of industry has so far mostly been studied from the perspective of cyber-physical systems solutions as drivers of change. In this paper, we turn the focus to the changes in data management resulting from the introduction of new digital technologies in industry. So far, data processing activities in operations management have usually been organised according to the existing business structures inside and between companies. With the increasing importance of Big Data in the context of the digital transformation, the opposite will be the case: business structures will evolve based on the potential to develop value streams offered by new data processing solutions. Based on a review of the extant literature, we identify the general fields of action for operations management related to data processing. In particular, we explore the impact of Big Data on industrial operations and its organisational implications.
994.
Decades of questionnaire and interview studies have revealed various leadership behaviors observed in successful leaders. However, little is known about the actual behaviors that cause those observations. Given that lay observers are prone to cognitive biases, such as the halo effect, the validity of theories that are exclusively based on observed behaviors is questionable. We thus follow the call of leading scientists in the field and derive a parsimonious model of leadership behavior that is informed by established psychological theories. Building on the taxonomy of Yukl (2012), we propose three task-oriented behavior categories (enhancing understanding, strengthening motivation, and facilitating implementation) and three relation-oriented behavior categories (fostering coordination, promoting cooperation, and activating resources), each of which is further specified by a number of distinct behaviors. While the task-oriented behaviors are directed towards the accomplishment of shared objectives, the relation-oriented behaviors support this process by increasing the coordinated engagement of the team members. Our model contributes to the advancement of leadership behavior theory by (1) consolidating current taxonomies, (2) sharpening behavioral concepts, (3) specifying precise relationships between those categories, and (4) spurring new hypotheses that can be derived from existing findings in the field of psychology. To test our model as well as the hypotheses derived from it, we advocate the development of new measurements that overcome the limitations associated with questionnaire and interview studies.
995.
The Kidney Exchange Problem (KEP) is a combinatorial optimisation problem that has attracted attention from the integer programming/combinatorial optimisation community in the past few years. Defined on a directed graph, the KEP has two variations: one concerns cycles only, and the other, cycles as well as chains on the same graph. We call the former a Cardinality Constrained Multi-cycle Problem (CCMcP) and the latter a Cardinality Constrained Cycles and Chains Problem (CCCCP). The cardinality of cycles is restricted in both the CCMcP and the CCCCP. As for chains, some studies in the literature considered cardinality restrictions, whereas others did not. The CCMcP can be viewed as an Asymmetric Travelling Salesman Problem that does allow subtours; however, these subtours are cardinality-constrained, and it is not necessary to visit all vertices. In the existing KEP literature, the cardinality constraint for cycles is usually considered to be small (to the best of our knowledge, no more than six). In a CCCCP, each vertex of the directed graph can be included in at most one cycle or chain, but not both. The CCMcP and the CCCCP are interesting and challenging combinatorial optimisation problems in their own right, particularly due to their similarities to problems in the travelling salesman and vehicle routing families. In this paper, our main focus is to review the existing mathematical programming models and solution methods in the literature, analyse the performance of these models, and identify future research directions. Further, we propose a polynomial-sized and an exponential-sized mixed-integer linear programming model, discuss a number of stronger constraints for cardinality-infeasible-cycle elimination for the latter, and present some preliminary numerical results.
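To make the cycle formulation concrete, the hedged sketch below (a brute-force toy on a hypothetical compatibility graph, not one of the MILP models reviewed in the paper) enumerates directed cycles of bounded cardinality and picks a vertex-disjoint set of them covering the most vertices.

```python
from itertools import combinations, permutations

def cycles_up_to(arcs, max_len):
    """Enumerate simple directed cycles with at most max_len vertices."""
    nodes = {u for u, v in arcs} | {v for u, v in arcs}
    arc_set = set(arcs)
    cycles = set()
    for k in range(2, max_len + 1):
        for perm in permutations(sorted(nodes), k):
            closed = list(zip(perm, perm[1:] + (perm[0],)))
            if all(a in arc_set for a in closed):
                # Canonical rotation so each cycle is counted once.
                i = perm.index(min(perm))
                cycles.add(perm[i:] + perm[:i])
    return [set(c) for c in cycles]

def best_disjoint_packing(cycles):
    """Brute-force choice of vertex-disjoint cycles covering the most vertices."""
    best, best_cover = (), 0
    for r in range(1, len(cycles) + 1):
        for combo in combinations(cycles, r):
            union = set().union(*combo)
            disjoint = sum(len(c) for c in combo) == len(union)
            if disjoint and len(union) > best_cover:
                best, best_cover = combo, len(union)
    return best, best_cover

# Hypothetical compatibility arcs between donor-patient pairs 0..4.
arcs = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)]
cycles = cycles_up_to(arcs, max_len=3)
packing, covered = best_disjoint_packing(cycles)
print(covered, packing)
```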
996.
In several areas, such as global optimization using branch-and-bound methods for mixture design, the unit n-simplex is refined by longest edge bisection (LEB). This process provides a binary search tree. For \(n>2\), simplices appearing during the refinement process can have more than one longest edge (LE). The size of the resulting binary tree depends on the specific sequence of bisected longest edges. The questions are how to calculate the size of one of the smallest binary trees generated by LEB and how to find the corresponding sequence of LEs to bisect, which can be represented by a set of LE indices. Algorithms answering these questions are presented here. We focus on sets of LE indices that are repeated at a level of the binary tree. Such a set of LEs was presented in Aparicio et al. (Informatica 26(1):17–32, 2015) for \(n=3\). An additional question is whether this set is the best one under the so-called \(m_k\)-valid condition.
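As a minimal illustration of the LEB refinement step itself (not of the tree-size algorithms proposed in the paper), the sketch below locates a longest edge of a simplex given by its vertex coordinates and splits the simplex at that edge's midpoint; the unit 3-simplex used as input is an assumed example.

```python
import numpy as np
from itertools import combinations

def longest_edge_bisection(simplex):
    """Split a simplex (rows = vertices) at the midpoint of one longest edge.

    Returns the two sub-simplices produced by LEB. If several edges share
    the maximum length, the first one found is bisected.
    """
    simplex = np.asarray(simplex, dtype=float)
    # Find a longest edge (i, j).
    i, j = max(combinations(range(len(simplex)), 2),
               key=lambda e: np.linalg.norm(simplex[e[0]] - simplex[e[1]]))
    midpoint = (simplex[i] + simplex[j]) / 2.0
    # Each child keeps all vertices except one endpoint, which is replaced by the midpoint.
    child_a, child_b = simplex.copy(), simplex.copy()
    child_a[j] = midpoint
    child_b[i] = midpoint
    return child_a, child_b

# Unit 3-simplex: vertices are the standard basis vectors of R^4.
unit_simplex = np.eye(4)
a, b = longest_edge_bisection(unit_simplex)
print(a)
print(b)
```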
997.
A neighbourly set of a graph is a subset of edges in which every two edges either share an endpoint or are joined by an edge of the graph. The maximum cardinality neighbourly set problem is known to be NP-complete for general graphs. Mahdian (Discret Appl Math 118:239–248, 2002) proved that it is solvable in polynomial time for quadrilateral-free graphs and proposed an \(O(n^{11})\) algorithm, where n is the number of vertices in the graph (along with a note that, by a straightforward but lengthy argument, the running time can be shown to be \(O(n^5)\)). In this paper we propose an \(O(n^2)\) time algorithm for finding a maximum cardinality neighbourly set in a quadrilateral-free graph.
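For readers unfamiliar with the definition, the hedged sketch below checks whether a set of edges is neighbourly and finds a maximum-cardinality one by exhaustive search on a small assumed graph; it is a brute-force illustration only, not the \(O(n^2)\) algorithm of the paper.

```python
from itertools import combinations

def is_neighbourly(edge_subset, edges):
    """Every pair in edge_subset must share an endpoint or be joined by an edge."""
    edge_set = {frozenset(e) for e in edges}
    for (a, b), (c, d) in combinations(edge_subset, 2):
        share = {a, b} & {c, d}
        joined = any(frozenset((x, y)) in edge_set
                     for x in (a, b) for y in (c, d) if x != y)
        if not share and not joined:
            return False
    return True

def max_neighbourly_set(edges):
    """Exhaustive search; exponential, for illustration on tiny graphs only."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            if is_neighbourly(subset, edges):
                return list(subset)
    return []

# Hypothetical quadrilateral-free graph: a triangle with a pendant edge.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(max_neighbourly_set(edges))
```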
998.
The linear sum assignment problem is a fundamental combinatorial optimisation problem and can be broadly defined as follows: given an \(n \times m\) (\(m \ge n\)) benefit matrix \(B = (b_{ij})\), match each row to a different column so that the sum of entries at the row-column intersections is maximised. This paper describes the application of a new fast heuristic algorithm, Asymmetric Greedy Search, to the asymmetric version (\(n \ne m\)) of the linear sum assignment problem. Extensive computational experiments, using a range of model graphs, demonstrate the effectiveness of the algorithm. The heuristic was also incorporated within an algorithm for the non-sequential protein structure matching problem, where the non-sequential alignment between two proteins, normally with different numbers of amino acids, needs to be maximised.
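As an illustration of a greedy treatment of the rectangular assignment problem (a plain greedy baseline on assumed data, not the Asymmetric Greedy Search heuristic described in the paper), the sketch below repeatedly picks the largest remaining benefit whose row and column are both still free.

```python
import numpy as np

def greedy_assignment(benefit):
    """Greedy baseline for max-sum assignment on an n x m benefit matrix, m >= n.

    Repeatedly takes the largest entry whose row and column are both unused.
    Returns (assignment dict row -> column, total benefit).
    """
    benefit = np.asarray(benefit, dtype=float)
    n, m = benefit.shape
    order = np.argsort(benefit, axis=None)[::-1]        # flat indices, largest entry first
    used_rows, used_cols, assignment, total = set(), set(), {}, 0.0
    for flat in order:
        r, c = divmod(int(flat), m)
        if r not in used_rows and c not in used_cols:
            assignment[r] = c
            total += benefit[r, c]
            used_rows.add(r)
            used_cols.add(c)
            if len(assignment) == n:
                break
    return assignment, total

# Hypothetical 3 x 4 benefit matrix.
B = [[7, 5, 9, 1],
     [3, 8, 2, 6],
     [4, 4, 5, 7]]
print(greedy_assignment(B))
```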
999.
The MapReduce system is a popular big data processing framework, and its performance is closely tied to the efficiency of the centralized scheduler. In practice, the centralized scheduler often has little advance information, meaning each job may become known only after it is released. Hence, in this paper we consider the online MapReduce scheduling problem of minimizing the makespan, where jobs are released over time. Both the preemptive and the non-preemptive version of the problem are considered. In addition, we assume that reduce tasks cannot be parallelized because they are often complex and hard to decompose. For the non-preemptive version, we prove a lower bound of \(\frac{m+m(\Psi(m)-\Psi(k))}{k+m(\Psi(m)-\Psi(k))}\), which is higher than that of the basic online machine scheduling problem, where k is the root of the equation \(k=\big\lfloor \frac{m-k}{1+\Psi(m)-\Psi(k)}+1 \big\rfloor\) and m is the number of machines. We then devise a \((2-\frac{1}{m})\)-competitive online algorithm called MF-LPT (Map First-Longest Processing Time) based on the LPT rule. For the preemptive version, we present a 1-competitive algorithm for two machines.
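To make the LPT ingredient concrete, the following sketch applies the classical longest-processing-time rule: tasks are sorted by length and each is assigned to the currently least-loaded machine. It illustrates the LPT rule that MF-LPT builds on rather than MF-LPT itself, and the task lengths and machine count are assumptions.

```python
import heapq

def lpt_schedule(processing_times, m):
    """Classical LPT: sort tasks by length (descending) and give each to the least-loaded machine.

    Returns the makespan and the per-machine task lists.
    """
    # Min-heap of (current load, machine index).
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    schedule = [[] for _ in range(m)]
    for p in sorted(processing_times, reverse=True):
        load, i = heapq.heappop(loads)
        schedule[i].append(p)
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return makespan, schedule

# Hypothetical (non-parallelizable) reduce-task lengths and machine count.
tasks = [7, 5, 4, 4, 3, 2, 2]
print(lpt_schedule(tasks, m=3))
```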
1000.
In this paper we define the exact k-coverage problem and study it for the special cases of intervals and circular arcs. Given a set system consisting of a ground set of n points with integer demands \(\{d_0,\dots ,d_{n-1}\}\) and integer rewards, subsets of points, and an integer k, select up to k subsets such that the sum of rewards of the covered points is maximized, where point i is covered if exactly \(d_i\) subsets containing it are selected. Here we study this problem and some related optimization problems. We prove that the exact k-coverage problem with unbounded demands is NP-hard even for intervals on the real line and unit rewards. Our NP-hardness proof uses instances where some of the natural parameters of the problem are unbounded (each of these parameters is linear in the number of points). We show that this property is essential: if we restrict (at least) one of these parameters to be a constant, then the problem is solvable in polynomial time. Our polynomial time algorithms are given for various generalizations of the problem (in the setting where one of the parameters is a constant).
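As a brute-force companion to the definition above (an illustrative toy on an assumed instance, not one of the polynomial-time algorithms of the paper), the sketch below evaluates every choice of up to k intervals and sums the rewards of points covered exactly \(d_i\) times.

```python
from itertools import combinations

def exact_k_coverage(points, demands, rewards, intervals, k):
    """Brute force for the exact k-coverage problem on intervals.

    points   : point coordinates
    demands  : demands d_i; point i counts only if covered exactly d_i times
    rewards  : reward of each point
    intervals: list of (lo, hi) closed intervals (the selectable subsets)
    k        : select at most k intervals
    """
    best = 0
    for r in range(k + 1):
        for chosen in combinations(intervals, r):
            value = 0
            for p, d, w in zip(points, demands, rewards):
                cover = sum(lo <= p <= hi for lo, hi in chosen)
                if cover == d:
                    value += w
            best = max(best, value)
    return best

# Hypothetical instance: 4 points with demands and rewards, 3 candidate intervals.
points  = [1, 2, 3, 4]
demands = [1, 2, 1, 1]
rewards = [5, 4, 3, 2]
intervals = [(0, 2), (1, 3), (3, 5)]
print(exact_k_coverage(points, demands, rewards, intervals, k=2))
```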