Article Search
Full text (subscription): 5,025 articles | Free: 25 articles

By discipline: Management: 876 | Ethnology: 32 | Talent studies: 1 | Demography: 379 | Collected works: 33 | Theory and methodology: 512 | General: 50 | Sociology: 2,428 | Statistics: 739
By year: 2024 (29), 2023 (43), 2022 (23), 2021 (43), 2020 (118), 2019 (152), 2018 (134), 2017 (188), 2016 (167), 2015 (137), 2014 (163), 2013 (735), 2012 (197), 2011 (180), 2010 (139), 2009 (141), 2008 (150), 2007 (129), 2006 (135), 2005 (152), 2004 (146), 2003 (130), 2002 (133), 2001 (132), 2000 (105), 1999 (93), 1998 (73), 1997 (68), 1996 (63), 1995 (60), 1994 (57), 1993 (66), 1992 (50), 1991 (54), 1990 (42), 1989 (28), 1988 (49), 1987 (42), 1986 (40), 1985 (38), 1984 (52), 1983 (40), 1982 (40), 1981 (43), 1980 (36), 1979 (34), 1978 (24), 1977 (27), 1975 (29), 1974 (26)
Sorted by relevance: 5,050 results found (search time: 0 ms)
81.
Organizations that seek the advantages of 24-hour operations frequently experience personnel problems related to the demands of shiftwork. Common difficulties include excessive turnover, poor productivity, and an increased incidence of industrial accidents. This article describes the experience of a glass company facing high turnover stemming from employee dissatisfaction with shiftwork in one of its continuous-operation factories. Designed as a high-involvement organization, the factory formed an employee task force to analyze the turnover problem and develop recommendations. Once the shift system was identified as a major factor contributing to employee turnover, a team of employees and managers was formed to design a new one. Following the adoption of the new shift system, turnover was reduced significantly. Based on this organization's experience, a general strategy for shift system design is proposed.
82.
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies; account for the within-study correlation between the tests' true-positive rates (TPRs) and between their false-positive rates (FPRs), induced because the tests are applied to the same participants; and allow for between-study correlations between TPRs and FPRs, such as those induced by threshold effects. We estimate the models in the Bayesian setting. We demonstrate using a meta-analysis of screening for Down syndrome with two tests: shortened humerus (arm bone) and shortened femur (thigh bone). Separate and joint meta-analyses yielded similar TPR and FPR estimates. For example, the summary TPR for a shortened humerus was 35.3% (95% credible interval (CrI): 26.9, 41.8%) with joint versus 37.9% (27.7, 50.3%) with separate meta-analysis. Joint meta-analysis is more efficient when calculating comparative accuracy: the difference in the summary TPRs was 0.0% (-8.9, 9.5%; TPR higher for shortened humerus) with joint versus 2.6% (-14.7, 19.8%) with separate meta-analyses. Simulation and empirical analyses are needed to refine the role of the proposed methodology.
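To see why the joint analysis is more efficient for comparative accuracy, note that in paired designs the two tests' summary estimates are positively correlated, which shrinks the variance of their difference. A minimal numerical sketch of this point (the standard errors and correlation below are invented for illustration and are not taken from the paper):

```python
# Why pairing helps: the variance of a difference of two correlated
# estimates is Var(t1) + Var(t2) - 2*Cov(t1, t2). Positive within-study
# correlation (tests applied to the same participants) shrinks it.
import numpy as np

se_humerus = 0.04  # hypothetical standard error of summary TPR, test 1
se_femur = 0.05    # hypothetical standard error of summary TPR, test 2
rho = 0.6          # hypothetical correlation between the two estimates

var_separate = se_humerus**2 + se_femur**2                  # pairing ignored
var_joint = var_separate - 2 * rho * se_humerus * se_femur  # pairing used

print(f"SE of TPR difference, separate analyses: {np.sqrt(var_separate):.4f}")
print(f"SE of TPR difference, joint analysis:    {np.sqrt(var_joint):.4f}")
```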
83.
Kernel smoothing and filtering techniques are undemanding in their data-generation assumptions but have limitations where special interest attaches to the more recent observations. A methodology is developed that addresses contingencies such as end correction and the kernel term structure within the same technology, namely scale-invariant kernel compression. The framework is built around an entropic transformation of the standard uniform moving average, augmented with kernel compressions utilizing entropic weight redistribution. The techniques are illustrated with climate-change data.
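For context, the end-correction problem is that a symmetric kernel cannot be centred on the newest observations. The sketch below shows a generic one-sided smoother as a stand-in for that step; it is not the paper's scale-invariant entropic compression, and the bandwidth and toy series are illustrative:

```python
# Generic causal smoother: each point is a weighted mean of past values
# with exponentially decaying weights, so the newest observations can
# still be smoothed (a simple stand-in for an end correction).
import numpy as np

def one_sided_kernel_smooth(y, bandwidth=5.0):
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for t in range(len(y)):
        lags = np.arange(t + 1)          # 0 = current obs, t = oldest obs
        w = np.exp(-lags / bandwidth)    # heavier weight on recent data
        out[t] = np.dot(w, y[t::-1]) / w.sum()
    return out

rng = np.random.default_rng(0)
trend = np.linspace(0.0, 1.0, 120)       # toy "climate-like" upward trend
series = trend + rng.normal(scale=0.3, size=120)
print(one_sided_kernel_smooth(series)[-5:])  # smoothed most recent values
```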
84.
The history of Bayesian statistics is traced, from a personal perspective, through various strands and via its re-genesis during the 1960s to the current day. Emphasis is placed on broad-sense Bayesian methodology that can be used to meaningfully analyze observed datasets. Over 750 people in science, medicine, and socioeconomics who have influenced the evolution of the Bayesian approach into the powerful paradigm that it is today are highlighted. The frequentist/Bayesian controversy is addressed, together with the ways in which many Bayesians combine the two ideologies as a Bayes/non-Bayes compromise, e.g., when drawing inferences about unknown parameters or when investigating the choice of sampling model in relation to its real-life background. A number of fundamental issues are discussed and critically examined, and some elementary explanations for nontechnical readers and some personal reminiscences are included. Some of the Bayesian contributions of the 21st century are subjected to more detailed critique, so that readers may learn more about the quality and relevance of the ongoing research. A recent resolution of Lindley's paradox by Baskurt and Evans is reported. The axioms of subjective probability are reassessed, some state-of-the-art alternatives to Leonard Savage's axioms of utility are discussed, and Deborah Mayo and Michael Evans's refutation of Allan Birnbaum's 1962 justification of the likelihood principle in terms of the sufficiency and conditionality principles is addressed. WIREs Comput Stat 2014, 6:80–115. doi: 10.1002/wics.1293
This article is categorized under: Statistical and Graphical Methods of Data Analysis > Bayesian Methods and Theory
85.
86.
A pre-pack is a collection of items used in retail distribution. By grouping multiple units of one or more stock-keeping units (SKUs), distribution and handling costs can be reduced; however, ordering flexibility at the retail outlet is limited. This paper studies an inventory system at the retail level where both pre-packs and individual items (at additional handling cost) can be ordered. For a single-SKU, single-period problem, we show that the optimal policy is to order into a "band" with as few individual units as possible. For the multi-period problem with modular demand, the band policy is still optimal, and the steady-state distribution of the target inventory position possesses a semi-uniform structure, which greatly facilitates the computation of optimal policies and approximations under general demand. For the multi-SKU case, the optimal policy has a generalized band structure. Our numerical results show that pre-pack use is beneficial when demands are stable and complementary and when handling savings at the distribution center are substantial. The cost premium of using simple policies, such as strict base-stock and batch-ordering (pre-packs only), can be substantial for medium parameter ranges.
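As a concrete (toy) reading of the band idea, the sketch below raises inventory into a target band using whole pre-packs first and costly individual units only to top up. The band limits, pack size, and tie-breaking rule are invented for illustration and are not the paper's derived policy parameters:

```python
# Toy band-ordering rule: move inventory x into [lo, hi] using pre-packs
# of size q, with as few individual units as possible.
def band_order(x, lo, hi, q):
    """Return (n_prepacks, n_individual_units)."""
    if x >= lo:
        return 0, 0                      # already at or above the band
    shortfall = lo - x
    n = shortfall // q                   # as many whole pre-packs as fit
    if x + n * q < lo:
        if x + (n + 1) * q <= hi:        # one more pack still lands in band
            return n + 1, 0
        return n, lo - (x + n * q)       # otherwise top up with units
    return n, 0

print(band_order(x=3, lo=20, hi=26, q=6))  # (3, 0): 3 + 18 = 21 is in band
print(band_order(x=3, lo=20, hi=20, q=6))  # (2, 5): a 3rd pack overshoots
```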
87.
Recursive partitioning algorithms separate a feature space into a set of disjoint rectangles, and usually a constant is then fitted in every partition. While this is a simple and intuitive approach, it may lack interpretability as to how a specific relationship between dependent and independent variables looks. Alternatively, a certain model may be assumed or of interest, along with a number of candidate variables that may non-linearly give rise to different model parameter values. We present an approach that combines generalized linear models (GLMs) with recursive partitioning, offering enhanced interpretability relative to classical trees as well as an exploratory way to assess a candidate variable's influence on a parametric model. The method conducts recursive partitioning of a GLM by (1) fitting the model to the data set, (2) testing for parameter instability over a set of partitioning variables, and (3) splitting the data set with respect to the variable associated with the highest instability. The outcome is a tree where each terminal node is associated with a GLM. We show the method's versatility and its suitability for gaining additional insight into the relationship of dependent and independent variables with two examples, modelling voting behaviour and a failure model for debt amortization, and compare it to alternative approaches.
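A condensed sketch of one step of that loop follows, with a deliberate simplification: candidate splits are scored by the log-likelihood gain from refitting the GLM in each half rather than by a formal parameter-instability test, and all names and data are illustrative:

```python
# One recursive-partitioning step for a Gaussian GLM (simplified sketch).
import numpy as np
import statsmodels.api as sm

def glm_tree_split(y, X, Z):
    """y: response, X: GLM design matrix, Z: candidate partitioning vars.
    Returns (column of Z, threshold, total llf) for the best split found."""
    base = sm.GLM(y, X).fit()
    best = (None, None, base.llf)
    for j in range(Z.shape[1]):
        for cut in np.quantile(Z[:, j], [0.25, 0.5, 0.75]):
            left = Z[:, j] <= cut
            if left.sum() < 10 or (~left).sum() < 10:
                continue                 # enforce a minimum node size
            llf = (sm.GLM(y[left], X[left]).fit().llf
                   + sm.GLM(y[~left], X[~left]).fit().llf)
            if llf > best[2]:
                best = (j, cut, llf)
    return best                          # recurse on both halves if found

rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 2))            # partitioning variables
X = sm.add_constant(rng.normal(size=200))
slope = np.where(Z[:, 0] <= 0, 1.0, 3.0)  # slope differs across partitions
y = X[:, 1] * slope + rng.normal(size=200)
print(glm_tree_split(y, X, Z))           # should pick column 0, cut near 0
```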
88.
Distance equalizers are introduced as empirical measures of central tendency that make the distances to univariate data as similar as possible. These measures are made precise by means of various so-called fluctuation functions, which account for the distances in different ways. Distance equalizers differ from both the mean and the median, and they relate to dispersion measures. Algorithms and closed-form solutions for special cases are given. Some computations require multiextremal function minimization. Distance equalization extends to higher-dimensional data and to function quantization in signal processing.
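To make the idea concrete, one plausible fluctuation function is the variance of the distances |x_i - c|; a sketch under that assumption follows (it is not necessarily the authors' criterion, and since the abstract notes the minimization can be multiextremal, the bounded local search here is only illustrative):

```python
# Distance equalizer under one choice of fluctuation function: pick c to
# minimize the variance of |x_i - c|, i.e. equalize distances to the data.
import numpy as np
from scipy.optimize import minimize_scalar

def distance_equalizer(x):
    x = np.asarray(x, dtype=float)
    fluctuation = lambda c: np.var(np.abs(x - c))  # assumed criterion
    res = minimize_scalar(fluctuation, bounds=(x.min(), x.max()),
                          method="bounded")
    return res.x

data = np.array([1.0, 2.0, 3.0, 10.0])
print("mean      :", data.mean())
print("median    :", np.median(data))
print("equalizer :", distance_equalizer(data))  # differs from both
```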
89.
We investigate pricing incentives for competing retailers who distribute two variants of a manufacturer's product in a decentralized supply chain. Under a two-dimensional Hotelling model, we derive the decentralized retailers' prices for the products and the distortions in pricing compared with the centrally optimal prices. We show that the price distortions decrease as consumers' travel cost between retailers increases, owing to less intense competition. However, the price distortions do not change monotonically in consumers' switching cost between products within stores. To correct the decentralized retailers' price distortions, we construct a two-part pricing contract that coordinates the supply chain. We show that the coordinating contract is Pareto-improving and analyze the increase in supply chain profit under coordination.
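The coordination logic generalizes a textbook result: a two-part tariff with the wholesale price at marginal cost removes double marginalization, and the fixed fee splits the recovered profit so both parties can gain. Below is that standard one-retailer, linear-demand illustration, not the paper's two-dimensional Hotelling model; all numbers are invented:

```python
# Double marginalization vs. two-part tariff, textbook linear-demand case.
# Demand q = a - b*p, production cost c per unit.
a, b, c = 100.0, 1.0, 20.0

p_central = (a / b + c) / 2                     # integrated chain's price
profit_central = (p_central - c) * (a - b * p_central)

w = c + 30.0                                    # plain wholesale markup
p_decentral = (a / b + w) / 2                   # retailer's best response
chain_profit = (p_decentral - c) * (a - b * p_decentral)

print(f"integrated: price {p_central:.1f}, chain profit {profit_central:.1f}")
print(f"markup w:   price {p_decentral:.1f}, chain profit {chain_profit:.1f}")
# Two-part tariff: setting w = c restores p = p_central; a fixed fee F in
# (0, profit_central) then splits the surplus so the contract is Pareto-
# improving relative to the wholesale-markup outcome.
```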
90.
We propose the use of signal detection theory (SDT) to evaluate the performance of both probabilistic forecasting systems and individual forecasters. The main advantage of SDT is that it provides a principled way to distinguish response bias from system diagnosticity, which is defined as the ability to distinguish events that occur from those that do not. There are two challenges in applying SDT to probabilistic forecasts. First, the SDT model must handle judged probabilities rather than the conventional binary decisions. Second, the model must be able to operate in the presence of sparse data generated within the context of human forecasting systems. Our approach is to specify a model of how individual forecasts are generated from underlying representations and to use Bayesian inference to estimate the underlying latent parameters. Given our estimate of the underlying representations, features of the classic SDT model, such as the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), follow immediately. We show how our approach allows ROC curves and AUCs to be applied to individuals within a group of forecasters, estimated as a function of time, and extended to measure differences in forecastability across different domains. Among the advantages of this method is that it depends only on the ordinal properties of the probabilistic forecasts. We conclude with a brief discussion of how this approach might facilitate decision making.
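The ordinal property the authors highlight is what makes rank-based ROC analysis possible: the AUC equals the probability that a randomly chosen occurring event received a higher forecast than a randomly chosen non-occurring one. A minimal sketch of that standard computation (not the paper's Bayesian latent-variable model; the forecasts and outcomes are invented):

```python
# Mann-Whitney estimate of AUC from judged probabilities and 0/1 outcomes.
import numpy as np

def auc(forecasts, outcomes):
    f = np.asarray(forecasts, dtype=float)
    y = np.asarray(outcomes, dtype=int)
    pos, neg = f[y == 1], f[y == 0]      # occurring vs. non-occurring events
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

probs = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]   # one forecaster's judged probabilities
hits = [1, 1, 0, 1, 0, 0]                # which events actually occurred
print(f"AUC = {auc(probs, hits):.3f}")   # 1.0 would be perfect discrimination
```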