Subscription full text: 388 articles; free: 16 articles.

By subject: Management 121; Ethnology 1; Demography 21; Collected works 1; Theory and methodology 58; General 7; Sociology 153; Statistics 42.

By year: 2023: 3; 2021: 5; 2020: 9; 2019: 7; 2018: 5; 2017: 12; 2016: 13; 2015: 12; 2014: 17; 2013: 47; 2012: 18; 2011: 15; 2010: 13; 2009: 12; 2008: 11; 2007: 13; 2006: 12; 2005: 7; 2004: 10; 2003: 8; 2002: 8; 2001: 8; 2000: 7; 1999: 6; 1998: 6; 1997: 4; 1996: 3; 1995: 12; 1994: 4; 1993: 5; 1992: 4; 1991: 10; 1989: 6; 1987: 3; 1986: 2; 1985: 5; 1984: 5; 1983: 7; 1982: 8; 1981: 7; 1980: 2; 1979: 6; 1978: 4; 1977: 2; 1976: 3; 1975: 2; 1973: 2; 1970: 2; 1968: 2; 1966: 2.
A total of 404 results were found; items 311–320 are shown below.
311.
This paper presents perspectives on quantitative techniques in academia and in practice. Based on the findings of an empirical study, academics and practitioners emphasize different techniques and prefer different journals for keeping abreast of the field. These differences reveal areas for curriculum improvement that would orient programs toward practitioners' needs.
312.
This commentary evaluates the usefulness of the Freed and Glover [6] linear programming approach to the discriminant problem and relates it to other parametric and nonparametric approaches.
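The Freed and Glover formulation itself is not reproduced in this abstract, so the sketch below illustrates one common minimum-sum-of-deviations (MSD) linear programming discriminant model of the same family. The two-group data, the separation gap, and the variable layout are illustrative assumptions, not the commentary's own analysis.

```python
# Minimum-sum-of-deviations LP discriminant sketch (illustrative, not the exact
# Freed-Glover model discussed in the commentary).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X1 = rng.normal(loc=0.0, scale=1.0, size=(30, 2))   # group 1 observations
X2 = rng.normal(loc=2.0, scale=1.0, size=(30, 2))   # group 2 observations
n1, n2, p = len(X1), len(X2), X1.shape[1]
n = n1 + n2
eps = 1.0          # fixed separation gap to rule out the trivial zero solution

# Decision variables: [w (p weights), c (cutoff), d (n deviations)]
obj = np.concatenate([np.zeros(p + 1), np.ones(n)])  # minimize total deviation

A_ub = np.zeros((n, p + 1 + n))
b_ub = np.full(n, -eps)
# Group 1 should score below the cutoff:  w'x_i - c - d_i <= -eps
A_ub[:n1, :p] = X1
A_ub[:n1, p] = -1.0
# Group 2 should score above the cutoff: -w'x_i + c - d_i <= -eps
A_ub[n1:, :p] = -X2
A_ub[n1:, p] = 1.0
A_ub[np.arange(n), p + 1 + np.arange(n)] = -1.0      # -d_i on every row

bounds = [(None, None)] * (p + 1) + [(0, None)] * n  # d_i >= 0, w and c free
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w, cutoff = res.x[:p], res.x[p]
print("weights:", w, "cutoff:", cutoff, "total deviation:", res.fun)
```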
313.
This paper presents a new linear-model methodology for clustering judges with homogeneous decision policies and for identifying the dimensions that distinguish judgment policies. The new linear policy-capturing model, based on canonical correlation analysis, is compared with the standard model based on regression analysis and hierarchical agglomerative clustering. Potential advantages of the new methodology include simultaneous rather than sequential consideration of the information in the dependent and independent variable sets, reduced interpretational difficulty in the presence of multicollinearity and/or suppressor or moderator variables, and a more clearly defined solution structure that allows assessment of each judge's relationship to all of the derived ideal policy types. An application to capturing the policies of information systems recruiters responsible for hiring entry-level personnel is used to compare and contrast the two techniques.
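For reference, a minimal sketch of the standard policy-capturing baseline mentioned above (per-judge regression followed by hierarchical agglomerative clustering) might look as follows. The judges, cues, and cluster structure are simulated, and the paper's canonical-correlation variant is not reproduced here.

```python
# Standard policy-capturing baseline: regress each judge's ratings on the cues,
# then cluster the captured weight vectors (illustrative simulation).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_judges, n_cases, n_cues = 12, 40, 4
cues = rng.normal(size=(n_cases, n_cues))            # profiles rated by every judge

# Simulate two latent policy types (different cue weights) plus rating noise.
true_w = np.where(np.arange(n_judges) < 6, 1.0, 0.0)[:, None] * np.array([3, 1, 0, 0]) \
       + np.where(np.arange(n_judges) >= 6, 1.0, 0.0)[:, None] * np.array([0, 0, 3, 1])
ratings = cues @ true_w.T + rng.normal(scale=0.5, size=(n_cases, n_judges))

# Step 1: capture each judge's policy with an ordinary least-squares regression.
X = np.column_stack([np.ones(n_cases), cues])
betas = np.linalg.lstsq(X, ratings, rcond=None)[0][1:].T   # judge-by-cue weights

# Step 2: cluster judges whose captured weight vectors are similar.
Z = linkage(betas, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignment per judge:", labels)
```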
314.
Exploring organisations is the prerequisite for any intentional attempt at strategic change. Yet what is it that we observe when we observe organisations? The argument takes a narrative approach to exploring organisations. Following Niklas Luhmann, we look at the operations of organising that make the organisation an organisation. The paper suggests the organisational collage (I.) of stories as a starting point of the exploration, with a specific focus on meaning-creation and sense-making as the genuine act of organisational self-observation. The disciplinary matrix (II.) reflects on how stories and narratives crystallise and rule the organisation in a paradigmatic way. Following Thomas Kuhn's understanding of paradigms (III.), management is treated as an activity of a community of practice based on a disciplinary matrix of models, methods and instruments. Giorgio Agamben's conceptualisation of paradigms as reference-giving examples opens up the implicit side of organisational culture. Memetics (IV.), which treats reference-giving examples as memes and culture as a meme-complex, enables the observation of dynamics and cultural evolution over time. In conclusion, we come to understand the organisational implications (V.) of the conservative nature of organisational development and the systemic sensitivity that allows for management, learning and change. As always, advances in research come at the price of new questions.
315.
Reducing data latency and guaranteeing quality of service are essential in modern computer networks. The emerging Multipath Transmission Control Protocol (MPTCP) can reduce data latency by transmitting data through multiple minimal paths (MPs) and ensures data integrity through its packet-retransmission mechanism. The bandwidth of each edge can be considered multi-state, because different conditions such as failures, partial failures and maintenance occur in computer networks. We evaluate the network reliability of a multi-state retransmission flow network, that is, the probability that the data can be successfully transmitted by means of multiple MPs under a time constraint. By generating all minimal bandwidth patterns, the proposed algorithm calculates this network reliability. An example and a practical case based on the Pan-European Research and Education Network demonstrate the proposed algorithm.
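A brute-force illustration of the reliability quantity being evaluated (not the paper's minimal-bandwidth-pattern algorithm) could enumerate edge states directly. The topology, state probabilities, data size, and time limit below are invented, and the two minimal paths are assumed edge-disjoint.

```python
# Enumerate multi-state edge combinations and sum the probability of those in
# which the data can be sent over the minimal paths within the time limit.
import itertools

# Each edge: list of (bandwidth, probability) states.
edges = {
    "e1": [(0, 0.05), (2, 0.25), (4, 0.70)],
    "e2": [(0, 0.10), (3, 0.90)],
    "e3": [(0, 0.05), (2, 0.35), (5, 0.60)],
}
minimal_paths = [["e1", "e2"], ["e3"]]   # MPs expressed as edge lists
D, T = 12.0, 3.0                          # data units and time limit

reliability = 0.0
for combo in itertools.product(*edges.values()):
    state = dict(zip(edges, (bw for bw, _ in combo)))
    prob = 1.0
    for _, p in combo:
        prob *= p
    # A path's usable bandwidth is limited by its slowest edge; the paths are
    # edge-disjoint here, so their bandwidths add up.
    total_bw = sum(min(state[e] for e in mp) for mp in minimal_paths)
    if total_bw > 0 and D / total_bw <= T:
        reliability += prob
print("estimated reliability:", round(reliability, 4))
```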
316.
317.
This article considers a circular regression model for clustered data in which both the cluster effects and the regression errors have von Mises distributions. It involves β, a vector of parameters for the fixed effects, and two concentration parameters for the error distribution. A measure of intra-cluster circular correlation and a predictor for an unobserved cluster random effect are studied. Preliminary estimators for the vector β and the two concentration parameters are proposed, and their performance is compared with that of the maximum likelihood estimators in a simulation study. A numerical example investigates the factors affecting the orientation taken by a sand hopper upon release. The Canadian Journal of Statistics 47: 712–728; 2019 © 2019 Statistical Society of Canada
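A minimal data-generation sketch of the model structure described here, assuming a 2·arctan link for the fixed effects and illustrative concentration values, might look as follows; it shows the clustered von Mises structure only and does not implement the paper's estimators.

```python
# Simulate clustered circular responses with von Mises cluster effects and
# errors (link, coefficients, and concentrations are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)
n_clusters, per_cluster = 20, 10
beta = np.array([0.8, -0.5])             # fixed-effect coefficients (assumed)
kappa_cluster, kappa_error = 8.0, 15.0   # the two concentration parameters

x = rng.normal(size=(n_clusters, per_cluster, beta.size))
b = rng.vonmises(mu=0.0, kappa=kappa_cluster, size=(n_clusters, 1))   # cluster effects
e = rng.vonmises(mu=0.0, kappa=kappa_error, size=(n_clusters, per_cluster))

# Angular response: linear predictor mapped to the circle via 2*arctan, then
# shifted by the cluster effect and the circular error, wrapped to (-pi, pi].
mu0 = 0.3
theta = mu0 + 2.0 * np.arctan(x @ beta) + b + e
theta = np.angle(np.exp(1j * theta))     # wrap angles back onto the circle

print(theta.shape, theta.min(), theta.max())
```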
318.
Model averaging for dichotomous dose–response estimation is preferred over estimating the benchmark dose (BMD) from a single model, but challenges remain in implementing these methods for general analyses before model averaging becomes feasible in many risk-assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori (MAP) estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in applications such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose–response data and measuring the coverage of confidence limits for the BMD, and we compare its coverage with that of other approaches using the same set of models. The simulation study shows the method to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data, and comparable or superior to the other approaches.
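A simplified sketch of the general idea, MAP fitting of a few dichotomous dose–response models followed by approximate posterior-weight averaging of the BMD, is given below. The two models, the weak normal priors, the BIC-style weights, and the data are illustrative assumptions and do not reproduce the authors' implementation or the U.S. EPA BMD software.

```python
# MAP-based dose-response fitting and crude posterior-weight model averaging
# of the benchmark dose (illustrative sketch, invented data).
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import norm

dose   = np.array([0.0, 10.0, 50.0, 150.0, 400.0])   # made-up study design
n_sub  = np.array([50, 50, 50, 50, 50])
n_resp = np.array([2, 4, 11, 22, 40])
bmr = 0.10                                            # benchmark response (extra risk)

def p_logistic(d, th):           # P(d) = 1 / (1 + exp(-(a + b d)))
    a, b = th
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def p_qlinear(d, th):            # P(d) = g + (1 - g)(1 - exp(-b d))
    g, b = 1.0 / (1.0 + np.exp(-th[0])), np.exp(th[1])
    return g + (1.0 - g) * (1.0 - np.exp(-b * d))

def neg_log_post(th, pfun):
    p = np.clip(pfun(dose, th), 1e-10, 1 - 1e-10)
    loglik = np.sum(n_resp * np.log(p) + (n_sub - n_resp) * np.log(1 - p))
    logprior = norm.logpdf(th, loc=0.0, scale=10.0).sum()   # weak normal priors
    return -(loglik + logprior)

models = {"logistic": (p_logistic, np.array([-3.0, 0.01])),
          "quantal-linear": (p_qlinear, np.array([-3.0, -5.0]))}

weights, bmds = {}, {}
for name, (pfun, th0) in models.items():
    fit = minimize(neg_log_post, th0, args=(pfun,), method="Nelder-Mead")
    # Crude Laplace/BIC-style evidence approximation for the posterior weight.
    weights[name] = -fit.fun - 0.5 * len(th0) * np.log(n_sub.sum())
    # BMD: dose where extra risk (P(d) - P(0)) / (1 - P(0)) reaches the BMR.
    p0 = pfun(0.0, fit.x)
    er = lambda d, pf=pfun, th=fit.x, p0=p0: (pf(d, th) - p0) / (1.0 - p0) - bmr
    bmds[name] = brentq(er, 1e-6, dose.max())

w = np.exp(np.array(list(weights.values())) - max(weights.values()))
w /= w.sum()
bmd_ma = float(np.dot(w, list(bmds.values())))
print(dict(zip(weights, np.round(w, 3))), "model-averaged BMD:", round(bmd_ma, 2))
```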
319.
320.
Adaptive Spatial Sampling of Contaminated Soil (total citations: 1; self-citations: 0; citations by others: 1)
Cox, Louis Anthony. Risk Analysis, 1999, 19(6): 1059–1069.

Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous-waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach that uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule; the simulation uses the data collected so far to impute multiple probable values of the unknown soil concentration distributions during each run. The optimized adaptive spatial sampling technique has been applied to real data, using the probabilities of wrongly cleaning or wrongly failing to clean each location (compared with the action that would be taken under perfect information) as evaluation criteria. It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost, and it identifies strategies that are undominated with respect to all of these criteria.
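A toy sketch of the adaptive sampling loop described above, with an invented concentration field and arbitrary values for the decision parameters (initial sample size, number of hot spots searched around, samples per location, stopping rule, and action threshold), is shown below; it is not the optimized decision rule from the paper.

```python
# Adaptive spatial sampling loop: random initialization, then repeated sampling
# around the highest observed concentrations until a stopping rule triggers.
import numpy as np

rng = np.random.default_rng(3)
GRID = 20                                   # properties on a GRID x GRID lattice
true_conc = rng.lognormal(mean=0.0, sigma=1.0, size=(GRID, GRID))
true_conc[3:6, 12:16] *= 8.0                # a contaminated hot spot

# Decision parameters (the quantities the paper optimizes by simulation).
n_init, n_top, n_samples = 15, 3, 2
action_threshold = 5.0                      # remediate if estimated conc. exceeds this
max_rounds = 10                             # simple stopping rule: fixed sampling budget

def sample(loc):
    """Average of repeated noisy measurements at one property."""
    i, j = loc
    return np.mean(true_conc[i, j] * rng.lognormal(0.0, 0.2, size=n_samples))

estimates = {}
# 1. Initialize with randomly selected properties.
for idx in rng.choice(GRID * GRID, size=n_init, replace=False):
    loc = (idx // GRID, idx % GRID)
    estimates[loc] = sample(loc)

# 2. Adaptive phase: search the neighbours of the highest observed concentrations.
for _ in range(max_rounds):
    hottest = sorted(estimates, key=estimates.get, reverse=True)[:n_top]
    new_locs = {(i + di, j + dj)
                for (i, j) in hottest
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if 0 <= i + di < GRID and 0 <= j + dj < GRID
                and (i + di, j + dj) not in estimates}
    if not new_locs:                        # nothing left to explore near the hot spots
        break
    for loc in new_locs:
        estimates[loc] = sample(loc)

cleanup = [loc for loc, c in estimates.items() if c > action_threshold]
print(f"sampled {len(estimates)} properties, flagged {len(cleanup)} for cleanup")
```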