520 search results (query time: 109 ms); results 81–90 shown.
81.
Significant advances in information technology have brought about increased demand for bandwidth. Buyers of bandwidth often encounter prices that decrease over time; moreover, at any point in time, prices decrease with the total bandwidth purchased and with contract length. Buyers therefore face complex decisions about how many contracts to buy, their bandwidth, and their lengths. In this article, we formulate bandwidth-acquisition models from the buyer's perspective. We begin with a model that allows contracts of varying durations under deterministic demand, with no shortages or overlapping contracts permitted. We then formulate a simpler model that restricts all contracts over the planning horizon to equal lengths. We also solve the problem under probabilistic demand, allowing shortages that are satisfied by buying additional bandwidth at a premium. Numerical sensitivity analyses comparing the models show that relatively simple equal-length contracts produce approximately the same results as the more complicated unequal-length contract strategy.
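A minimal sketch of the equal-length contract idea: the paper's actual price and demand specifications are not given in the abstract, so the horizon `T`, the demand level, and the `unit_price` function below are illustrative assumptions.

```python
# Hedged sketch: equal-length contract acquisition under assumed forms.
import math

T = 36.0            # planning horizon in months (assumption)
demand = 100.0      # constant bandwidth requirement, Mbps (assumption)
p0, decay, disc = 2.0, 0.02, 0.01   # hypothetical price parameters

def unit_price(t, length):
    """Assumed price per Mbps per month: declines over time and with contract length."""
    return p0 * math.exp(-decay * t) * length ** (-disc)

def total_cost(n):
    """Cost of covering [0, T] with n back-to-back contracts of equal length T/n."""
    L = T / n
    return sum(unit_price(i * L, L) * demand * L for i in range(n))

best_n = min(range(1, 37), key=total_cost)
print(best_n, round(total_cost(best_n), 2))
```

Under these assumptions the decision reduces to enumerating the number of equal back-to-back contracts; the unequal-length and probabilistic variants in the paper would replace this enumeration with a richer optimization.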
82.
A wide range of uncertainties is inevitably introduced during a safety assessment of engineering systems, and their impact must be addressed if the analysis is to serve as a tool in decision making. Uncertainties in the model's components (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature: the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how they characterize uncertainty at the component level and in how they propagate it to the system level, and each has desirable and undesirable features that make it more or less useful in different situations. In the most widely used probabilistic framework, a probability distribution characterizes the uncertainty. However, when one cannot specify (1) parameter values for the input distributions, (2) precise probability distributions (shapes), or (3) dependencies between input parameters, these methods are limited and ineffective. To address some of these limitations, this article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow comprehensive and rigorous propagation through calculations. A practical case study is carried out with code developed for the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
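A hedged sketch of the two-phase Monte Carlo baseline the paper compares against: the outer loop samples an epistemically uncertain parameter from an interval, the inner loop samples aleatory variability, and the envelope of the resulting CDFs plays the role of an empirical p-box. All numbers are hypothetical, not from the paper's case study.

```python
# Two-phase (nested) Monte Carlo bracketing an output CDF when an input
# distribution's parameter is only known as an interval (cf. the p-box idea).
import numpy as np

rng = np.random.default_rng(0)
lam_interval = (1e-4, 5e-4)      # epistemic interval for a failure rate (assumption)
mission_time = 1000.0

cdfs = []
for _ in range(50):                               # outer (epistemic) loop
    lam = rng.uniform(*lam_interval)
    failures = rng.exponential(1.0 / lam, 2000)   # inner (aleatory) loop
    grid = np.linspace(0, mission_time, 200)
    cdfs.append((failures[:, None] <= grid).mean(axis=0))

cdfs = np.array(cdfs)
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)  # empirical probability box
print(lower[-1], upper[-1])  # bounds on P(failure time <= mission_time)
```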
83.
A framework is developed outlining how production knowledge and capabilities influence firm boundaries by affecting the transaction costs of markets and hierarchies. A central implication of the framework is that at lower levels of these capabilities the transaction costs of markets decline faster than the costs of hierarchy, while at higher levels the transaction costs of hierarchy decline faster than the costs of markets. Production capabilities play this discriminating role because markets and hierarchies use different types of control (prices and output control versus authority and behavior control) and hence require different levels of knowledge to be efficient. The analysis suggests that firms often maintain some production knowledge when contracting for inputs, not only because it reduces transactional hazards in markets, but also because, in comparative institutional terms, initial gains in knowledge make markets more efficient than internal organization. The analysis further suggests a U-shaped relationship between the propensity to integrate vertically and the extent of production capabilities, rather than a monotonically increasing one. I find support for the U-shaped relationship in a cross-sectional sample of 1553 manufacturing firms.
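A sketch of how such a U-shaped prediction could be tested, assuming a quadratic specification in a logit model; the data, variable names, and coefficients below are synthetic stand-ins, not the paper's sample.

```python
# Hedged sketch: testing for a U-shaped (rather than monotone) relationship between
# vertical integration and production capabilities via a quadratic term in a logit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
capability = rng.uniform(0, 1, 1553)              # sample size echoes the paper's N
logit = 2.0 - 8.0 * capability + 8.0 * capability**2   # true U shape (assumption)
integrated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([capability, capability**2]))
fit = sm.Logit(integrated, X).fit(disp=0)
print(fit.params)   # a negative linear and positive quadratic term indicate a U shape
```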
84.
A class-based storage policy distributes products among a number of classes and reserves a region within the storage area for each class. The procedures reported in the literature for forming storage classes consider primarily order-picking cost while ignoring storage-space cost. Moreover, these procedures rank items by their cube-per-order index (COI) and then partition them into classes that preserve this ordering, which excludes many possible product combinations and may yield inferior solutions. In this paper, a simulated annealing algorithm (SAA) is developed to solve an integer programming model for class formation and storage assignment that considers all possible product combinations, storage-space cost, and order-picking cost. Computational experience on randomly generated data sets and an industrial case shows that the SAA gives better results than the benchmark dynamic programming algorithm for class formation under the COI-ordering restriction.
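A compact sketch of the simulated annealing idea for class formation, under assumed per-class travel and rent costs; the paper's actual integer-programming objective is not reproduced here.

```python
# Hedged sketch: annealing over item-to-class assignments with a toy cost that
# trades off picking cost (cheaper near the depot) against space cost (rent is
# higher in the near region).
import math, random

random.seed(0)
n_items, n_classes = 30, 3
pick_freq = [random.uniform(1, 10) for _ in range(n_items)]   # order frequency
space = [random.uniform(1, 5) for _ in range(n_items)]        # storage volume
travel = [10.0, 20.0, 30.0]   # assumed travel cost per pick, by class (near -> far)
rent = [3.0, 2.0, 1.0]        # assumed space cost per volume unit, by class

def cost(assign):
    picking = sum(pick_freq[i] * travel[assign[i]] for i in range(n_items))
    storage = sum(space[i] * rent[assign[i]] for i in range(n_items))
    return picking + storage

assign = [random.randrange(n_classes) for _ in range(n_items)]
current, temp = cost(assign), 100.0
while temp > 0.1:
    i, c = random.randrange(n_items), random.randrange(n_classes)
    old = assign[i]; assign[i] = c                # propose moving one item
    proposed = cost(assign)
    if proposed <= current or random.random() < math.exp((current - proposed) / temp):
        current = proposed                        # accept the move
    else:
        assign[i] = old                           # reject and undo
    temp *= 0.995                                 # geometric cooling schedule
print(round(current, 1))
```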
85.
The vast majority of research on self-monitoring in the workplace focuses on the benefits that accrue to chameleon-like high self-monitors (relative to true-to-themselves low self-monitors). In this study, we depart from the mainstream by focusing on a potential liability of being a high self-monitor: high levels of experienced role conflict. We hypothesize that high self-monitors tend to choose work situations that, although consistent with the expression of their characteristic personality, inherently involve greater role conflict (i.e. competing role expectations from different role senders). Data collected from a 116-member high-tech firm supported this mediation hypothesis: relative to low self-monitors, high self-monitors tended to experience greater role conflict in work organizations because they were more likely to occupy boundary-spanning positions. To help draw a more realistic and balanced portrait of self-monitoring in the workplace, we call for more theoretically grounded research on the price chameleons pay.
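A sketch of how such a mediation hypothesis can be tested by bootstrapping the indirect effect; the data and effect sizes below are synthetic, with variable names chosen only to mirror the study's constructs.

```python
# Hedged sketch: bootstrapping the indirect (mediated) effect of self-monitoring
# on role conflict via boundary spanning.
import numpy as np

rng = np.random.default_rng(2)
n = 116                                      # echoes the paper's sample size
self_mon = rng.normal(size=n)
boundary = 0.5 * self_mon + rng.normal(size=n)                   # a path (assumed)
conflict = 0.6 * boundary + 0.1 * self_mon + rng.normal(size=n)  # b and c' paths

def indirect(idx):
    x, m, y = self_mon[idx], boundary[idx], conflict[idx]
    a = np.polyfit(x, m, 1)[0]                           # slope of m on x
    Xb = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][0]         # slope of y on m, given x
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
print(np.percentile(boot, [2.5, 97.5]))   # a CI excluding 0 supports mediation
```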
86.
87.
We consider the problem of clustering gamma-ray bursts (from the BATSE catalogue) through kernel principal component analysis. Our proposed kernel outperforms other competing kernels in clustering accuracy, and we obtain three physically interpretable groups of gamma-ray bursts. The effectiveness of the suggested kernel, combined with kernel principal component analysis, in revealing natural clusters in noisy and nonlinear data while reducing its dimension is also explored in two simulated data sets.
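A minimal sketch of the kernel-PCA-then-cluster pipeline; the authors' proposed kernel is not given in the abstract, so a standard RBF kernel and synthetic nonlinear data stand in.

```python
# Hedged sketch: reduce dimension with kernel PCA, then cluster in the reduced space.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)  # nonlinear toy data
Z = KernelPCA(n_components=2, kernel="rbf", gamma=15.0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))   # cluster sizes found in the reduced space
```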
88.
In this paper, a general class of nonparametric tests is proposed for the two-sample scale problem. Testing the scale parameter is useful in real-life situations commonly faced in engineering, trade, agriculture, industry, medicine, and elsewhere; in all these fields, the method that gives more consistent results is preferred, so testing the equality of scale parameters is worthwhile. The distribution of the proposed test statistic is established. To assess the performance of the proposed test, its asymptotic efficacies are studied for several underlying distributions and the results are interpreted. An illustrative example with a real-life data set shows the test at work, and a simulation study is carried out to find its asymptotic power. An extension of the general class of tests to the multiple-sample problem is also discussed.
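The proposed class of tests is not specified in the abstract; as a hedged illustration of the two-sample scale problem, the sketch below applies two classical nonparametric scale tests from SciPy.

```python
# Two-sample scale problem: do x and y differ in spread? Both tests assume equal
# location, which holds for the synthetic data below.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1.0, 60)        # scale 1
y = rng.normal(0, 2.0, 60)        # scale 2: the null of equal scales is false
print(stats.ansari(x, y))         # Ansari-Bradley test
print(stats.mood(x, y))           # Mood test
```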
89.
Particle filters are a powerful and flexible tool for performing inference on state-space models. They involve a collection of samples evolving over time through a combination of sampling and resampling steps; the resampling step is necessary to avoid weight degeneracy. In several situations of statistical interest, it is important to compare the estimates produced by two different particle filters, so being able to couple two particle filter trajectories efficiently is often of paramount importance. In this paper, we propose several ways to do so. In particular, we leverage ideas from the optimal transportation literature. Since computing the optimal transport map is in general extremely expensive, we introduce computationally tractable approximations to optimal transport couplings. We demonstrate that the resulting algorithms for coupling two particle filter trajectories often perform orders of magnitude more efficiently than standard approaches.
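A hedged sketch of one simple coupling strategy in this spirit: two filters share propagation noise and resample with common uniforms after sorting, which in one dimension realizes the monotone (optimal transport) coupling. The model and parameters are illustrative, not the paper's.

```python
# Coupling two bootstrap particle filters run at nearby parameter values.
import numpy as np

rng = np.random.default_rng(4)
T, N = 50, 200
obs = np.cumsum(rng.normal(size=T)) + rng.normal(0, 0.5, T)  # synthetic data

def coupled_step(x1, x2, y, sig1, sig2):
    noise = rng.normal(size=N)                   # common propagation noise
    x1, x2 = x1 + sig1 * noise, x2 + sig2 * noise
    u = (rng.uniform() + np.arange(N)) / N       # common systematic-resampling uniforms
    out = []
    for x in (x1, x2):
        w = np.exp(-0.5 * (y - x) ** 2 / 0.25)   # Gaussian observation density
        w /= w.sum()
        idx = np.searchsorted(np.cumsum(w[np.argsort(x)]), u)
        out.append(np.sort(x)[np.minimum(idx, N - 1)])  # monotone coupling in 1D
    return out

x1 = x2 = np.zeros(N)
for y in obs:
    x1, x2 = coupled_step(x1, x2, y, 1.0, 1.1)   # same filter, two parameter values
print(np.mean(np.abs(x1 - x2)))                  # coupled trajectories stay close
```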
90.
Approximate Bayesian computation (ABC) is a popular approach to inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 96:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J. Roy. Soc. Interface 6:187–202, 2009) and require a careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed that has computational complexity linear in the number of samples and determines the simulation parameters adaptively. We demonstrate the algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
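For orientation, here is a hedged sketch of a standard ABC-SMC loop with an adaptively chosen tolerance (each epsilon is a quantile of the previous population's distances). The weight computation below follows the quadratic-complexity scheme the article improves upon, and the target is a toy Gaussian-mean problem, not the birth-death-mutation model.

```python
# Hedged sketch: ABC-SMC with adaptive tolerance on a toy Gaussian-mean target.
import numpy as np

rng = np.random.default_rng(5)
y_obs = 2.0                        # observed summary statistic (assumption)
N = 500

def simulate(theta):
    return rng.normal(theta, 1.0)  # stand-in for an intractable-likelihood model

# initial population from the prior U(-10, 10)
theta = rng.uniform(-10, 10, N)
dist = np.abs(np.array([simulate(t) for t in theta]) - y_obs)
w = np.ones(N) / N

for _ in range(5):
    eps = np.quantile(dist, 0.5)                    # adaptive tolerance
    tau = 2.0 * np.sqrt(np.cov(theta, aweights=w))  # perturbation scale
    new_theta, new_dist = np.empty(N), np.empty(N)
    for i in range(N):
        while True:
            t = rng.choice(theta, p=w) + rng.normal(0, tau)
            if not -10 <= t <= 10:
                continue                            # zero prior density: reject
            d = np.abs(simulate(t) - y_obs)
            if d <= eps:
                break
        new_theta[i], new_dist[i] = t, d
    # importance weights: flat prior, so weight is 1 / sum_j w_j K(t | theta_j)
    kern = np.exp(-0.5 * ((new_theta[:, None] - theta[None, :]) / tau) ** 2)
    w = 1.0 / (kern * w[None, :]).sum(axis=1)
    w /= w.sum()
    theta, dist = new_theta, new_dist

print(np.sum(w * theta))   # weighted posterior-mean estimate, near y_obs
```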