11.
For a service-oriented manufacturing enterprise consisting of one manufacturing plant and multiple regional service centers, this paper studies a product-service system (PSS) order scheduling problem in which both production times and service times are stochastic and due dates are assignable. An optimization model is first built with the objective of minimizing the expected total cost of order earliness, tardiness, and due-date assignment. Optimality conditions for an approximation of the objective function are then analyzed, yielding a weighted shortest mean production time sequencing rule; this rule is combined with an insertion-neighborhood local search to design a heuristic algorithm for the problem. Finally, numerical simulations verify the feasibility and effectiveness of the algorithm. The results show that deviations in the earliness cost have little effect on PSS order scheduling and due-date assignment decisions, so managers can devise reasonably effective PSS order scheduling policies without accurately estimating inventory costs, whereas deviations in the due-date assignment cost strongly affect the decisions, so managers must estimate that cost carefully.
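A minimal Python sketch of the sequencing idea described above. The order attributes, parameter values, and the cost function are illustrative assumptions rather than the paper's model: each order is reduced to a mean production time and a weight, and the expected earliness/tardiness/due-date cost is replaced by a simple completion-time-based surrogate.

import random

def wsmpt_order(orders):
    """Sort orders by weighted shortest mean production time (mean_time / weight)."""
    return sorted(orders, key=lambda o: o["mean_time"] / o["weight"])

def total_cost(seq, tardiness_rate=1.0, due_date_rate=0.5):
    """Illustrative surrogate cost: weighted, completion-time-based penalty."""
    t, cost = 0.0, 0.0
    for o in seq:
        t += o["mean_time"]
        cost += o["weight"] * tardiness_rate * t + due_date_rate * t
    return cost

def insertion_local_search(seq):
    """Repeatedly try moving one order to another position; keep strict improvements."""
    best, best_cost = list(seq), total_cost(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(len(best)):
                if i == j:
                    continue
                cand = list(best)
                cand.insert(j, cand.pop(i))
                c = total_cost(cand)
                if c < best_cost:
                    best, best_cost, improved = cand, c, True
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    orders = [{"id": k, "mean_time": random.uniform(1, 10),
               "weight": random.uniform(0.5, 2.0)} for k in range(8)]
    start = wsmpt_order(orders)           # sequencing rule as the starting solution
    seq, cost = insertion_local_search(start)
    print([o["id"] for o in seq], round(cost, 2))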
12.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike existing methods that often require the number of clusters to be specified in advance, the method self-organizes data groups based on the structure of the original data. The proposed clustering method is also robust to noise. Three synthetic datasets and the real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
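The paper's self-organizing procedure is not reproduced here. As a point of contrast, the following Python sketch shows a conventional baseline that must be told the number of clusters in advance: each sample is summarized by a kernel density estimate on a common grid, densities are compared with an L1 distance, and average-linkage hierarchical clustering is cut at a pre-specified k. All data and settings are made up for illustration.

import numpy as np
from scipy.stats import gaussian_kde
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
grid = np.linspace(-6, 6, 200)
dx = grid[1] - grid[0]

# Ten samples drawn from two different distributions (two "true" groups).
samples = [rng.normal(-2, 1, 300) for _ in range(5)] + \
          [rng.normal(+2, 1, 300) for _ in range(5)]
densities = np.array([gaussian_kde(s)(grid) for s in samples])

# Pairwise L1 distances between the estimated densities (condensed form).
n = len(densities)
dist = [np.abs(densities[i] - densities[j]).sum() * dx
        for i in range(n) for j in range(i + 1, n)]

# Average-linkage clustering with a pre-specified number of clusters (k = 2),
# i.e. exactly the input the proposed self-organizing method avoids requiring.
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(labels)  # expected: two groups matching the two generating means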
13.
A number of efficient computer codes are available for the simple linear L1 regression problem. Several of these codes can be made more efficient by exploiting the least squares solution; in fact, a couple of available programs already do so.

We report the results of a computational study comparing several openly available computer programs for solving the simple linear L1 regression problem with and without computing and utilizing a least squares solution.
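The surveyed programs are not reproduced here. The Python sketch below only illustrates the general idea of warm-starting an L1 fit with the least-squares solution, using iteratively reweighted least squares as a stand-in solver; the function name and all settings are assumptions for illustration, not any of the compared codes.

import numpy as np

def l1_line_irls(x, y, iters=50, eps=1e-8):
    """Approximate simple linear L1 regression (minimize sum |y - a - b*x|)
    by iteratively reweighted least squares, warm-started at the OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    # Least-squares solution used as the starting point.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # weights ~ 1 / |residual|
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                    # (intercept, slope)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = 1.0 + 2.0 * x + rng.laplace(0, 1, 200)     # heavy-tailed noise
    print(l1_line_irls(x, y))                      # roughly (1, 2)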
14.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.

We analyze the maximum stable throughput and mean packet delay of the secondary users by developing a tree-structured quasi-birth-death Markov chain, under the assumption that the primary user activity can be modeled by a finite-state Markov chain and that packet lengths follow a discrete phase-type distribution.

Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm makes good use of the bandwidth left by the primary users.
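As background for the mechanism, here is a Python sketch of the classic binary splitting (tree) collision-resolution algorithm on which such schemes are based. It ignores primary-user activity, phase-type packet lengths, and everything else specific to the article's cognitive-radio variant.

import random

def binary_tree_contention(num_stations, p=0.5, seed=0):
    """Simulate one collision-resolution interval of the classic binary
    tree (splitting) algorithm: every involved station transmits, and on a
    collision the group splits by independent coin flips; the '0' subgroup
    is resolved first (stack discipline). Returns the number of slots used."""
    random.seed(seed)
    stack = [list(range(num_stations))]   # groups still waiting to transmit
    slots = 0
    while stack:
        group = stack.pop()
        slots += 1
        if len(group) <= 1:
            continue                      # idle or successful slot
        # Collision: split the group with independent fair coin flips.
        zeros = [s for s in group if random.random() < p]
        ones = [s for s in group if s not in zeros]
        stack.append(ones)                # resolved after the 'zeros' subtree
        stack.append(zeros)
    return slots

if __name__ == "__main__":
    for n in (2, 5, 10):
        print(n, "stations resolved in", binary_tree_contention(n, seed=42), "slots")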

15.
Use of auxiliary variables for generating proposal variables within a Metropolis–Hastings setting has been suggested in many different contexts, in particular for simulation from complex distributions such as multimodal distributions and in transdimensional approaches. For many of these approaches, the acceptance probabilities used can appear somewhat magical, and separate proofs of their validity have been given in each case. In this article, we present a general framework for constructing acceptance probabilities in auxiliary variable proposal generation. Besides showing the similarities between many of the algorithms proposed in the literature, the framework also demonstrates that there is great flexibility in how acceptance probabilities can be constructed. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.
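For concreteness, one standard construction (an assumption here; the article's framework is stated more generally) treats the auxiliary variable as part of an extended target and applies the usual Metropolis–Hastings ratio on the extended space:

\[
\bar\pi(x,u) = \pi(x)\, q_u(u \mid x), \qquad
\alpha\big((x,u) \to (y,\tilde u)\big)
 = \min\!\left\{1,\;
   \frac{\pi(y)\, q_u(\tilde u \mid y)\, Q\big((y,\tilde u) \to (x,u)\big)}
        {\pi(x)\, q_u(u \mid x)\, Q\big((x,u) \to (y,\tilde u)\big)}\right\},
\]

where \(Q\) is the proposal kernel on the extended space; accepting with this probability leaves \(\bar\pi\) invariant, and marginalizing over \(u\) leaves the target \(\pi\) invariant.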
16.
The objective of this article was to propose an exposure assessment model describing the relationship between fish consumption and body methyl mercury (MeHg) levels in the Japanese population. Individual MeHg intake was estimated as the sum of species-specific fish consumption multiplied by species-specific fish MeHg levels. The distribution of fish consumed by individuals and the MeHg level in each fish species were assigned based on published data from Japanese government institutions. The distribution of MeHg intake for the population was obtained through a Monte Carlo simulation with random sampling of fish consumption and species-specific MeHg levels. Internal body MeHg levels in blood and hair were estimated using a one-compartment model. Overall, the mean MeHg intake for the Japanese population was estimated to be 6.76 μg/day, or 0.14 μg/kg body weight per day (bw/day), while the mean hair mercury level was 2.02 μg/g. Compared with survey data tabulating hair mercury levels in a cross-section of the Japanese population, the simulation matched the hair mercury survey data very well for women but somewhat underestimated levels for men and for the population as a whole. This exposure assessment model is a useful step toward further risk assessment, in particular risk-benefit analysis.
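A minimal Python sketch of the simulation structure described above: species-specific consumption times concentration is summed to a daily intake, which is converted to a hair level with a one-compartment steady-state factor. Every distribution, species entry, and the conversion factor below is a placeholder assumption, not the study's data or parameters.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                  # simulated individuals

# Placeholder inputs -- NOT the study's distributions or parameters.
species = {
    # name: (mean daily consumption g/day, mean MeHg level ug/g wet weight)
    "tuna":    (10.0, 0.5),
    "salmon":  (15.0, 0.05),
    "sardine": (5.0,  0.02),
}
body_weight = rng.normal(58.0, 8.0, N).clip(35, 110)  # kg, illustrative
HAIR_PER_INTAKE = 10.0  # ug/g hair per ug/kg-bw/day; placeholder one-compartment factor

# Daily MeHg intake: sum over species of consumption * concentration,
# with lognormal variability around the placeholder means.
intake_ug = np.zeros(N)
for cons_mean, conc_mean in species.values():
    cons = rng.lognormal(np.log(cons_mean), 0.8, N)    # g/day
    conc = rng.lognormal(np.log(conc_mean), 0.5, N)    # ug/g
    intake_ug += cons * conc

intake_per_bw = intake_ug / body_weight                # ug/kg bw/day
hair = HAIR_PER_INTAKE * intake_per_bw                 # ug/g hair (steady state)

print("mean intake (ug/day):", round(intake_ug.mean(), 2))
print("mean hair   (ug/g):  ", round(hair.mean(), 2))
print("P95 hair    (ug/g):  ", round(np.percentile(hair, 95), 2))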
17.
A stock price prediction model combining rough sets and neural networks
Combining rough sets with neural networks reflects the human thinking mechanism in which qualitative and quantitative, explicit and implicit, and serial and parallel processing are intertwined. This paper builds such a hybrid model for predicting stock price trends. A two-dimensional rough-set reduction of the data removes noise and redundancy from the samples, improving the prediction accuracy of the neural network while reducing its training burden. To obtain optimal prediction accuracy, a genetic algorithm is used for attribute discretization and network training. An empirical study on the Shanghai Composite Index shows that the hybrid model clearly outperforms BP and GA neural network models.
18.
We investigate the computational complexity of two special cases of the Steiner tree problem in which the distance matrix is a Kalmanson matrix or a circulant matrix, respectively. For Kalmanson matrices we develop an efficient polynomial-time algorithm based on dynamic programming. For circulant matrices we give an NP-hardness proof and thus establish computational intractability.
19.
Twenty-four-hour recall data from the Continuing Survey of Food Intake by Individuals (CSFII) are frequently used to estimate dietary exposure for risk assessment. Food frequency questionnaires are traditional instruments of epidemiological research; however, their application in dietary exposure and risk assessment has been limited. This article presents a probabilistic method of bridging the National Health and Nutrition Examination Survey (NHANES) food frequency data and the CSFII data to estimate longitudinal (usual) intake, using a case study of seafood mercury exposures for two population subgroups (females 16 to 49 years and children 1 to 5 years). Two hundred forty-nine CSFII food codes were mapped into 28 NHANES fish/shellfish categories. FDA and state/local seafood mercury data were used, and a uniform distribution of blood-diet ratios between 0.66 and 1.07 was assumed. A probabilistic assessment was conducted to estimate distributions of individual 30-day average daily fish/shellfish intake, methyl mercury exposure, and blood levels. The upper percentile estimates of fish and shellfish intake based on 30-day daily averages were lower than those based on two- and three-day daily averages. These results support previous findings that distributions of "usual" intake based on a small number of consumption days overestimate the upper percentiles. About 10% of the females (16 to 49 years) and children (1 to 5 years) may be exposed to mercury levels above the EPA's RfD. The predicted 75th and 90th percentile blood mercury levels for females in the 16-to-49-year group were similar to those reported by NHANES. The predicted 90th percentile blood mercury level for children in the 1-to-5-year subgroup was similar to NHANES, and the 75th percentile estimate was slightly above it.
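The finding that short-window averages overstate upper percentiles can be illustrated with a tiny synthetic simulation in Python; the intake distribution below is made up and is not the CSFII or NHANES data.

import numpy as np

rng = np.random.default_rng(0)
people, per_day_median = 50_000, 20.0        # g/day, illustrative only

def upper_pct_of_averages(days, q=95):
    """95th percentile of per-person average daily intake over `days` days."""
    daily = rng.lognormal(np.log(per_day_median), 1.0, size=(people, days))
    return np.percentile(daily.mean(axis=1), q)

for d in (2, 3, 30):
    print(f"{d:>2}-day average, 95th percentile: {upper_pct_of_averages(d):.1f} g/day")

# The 2- and 3-day averages show a noticeably heavier upper tail than the
# 30-day averages, because averaging over more days shrinks within-person variance.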
20.
We study a variant of classical scheduling called scheduling with "end of sequence" information: it is known in advance that the last job has the longest processing time, and the last job is marked, so for every arriving job it is known whether it is the final job of the sequence. We explore this model on two uniformly related machines, that is, two machines with possibly different speeds. Two objectives are considered: maximizing the minimum completion time and minimizing the maximum completion time (makespan). Letting s be the speed ratio between the two machines, we consider the competitive ratios achievable for the two problems as functions of s. We present algorithms for different values of s and lower bounds on the competitive ratio. The proposed algorithms are best possible for a wide range of values of s. For the overall competitive ratio, we show a tight bound of φ + 1 ≈ 2.618 for the first problem, and upper and lower bounds of 1.5 and 1.46557 for the second problem. The authors would like to dedicate this paper to the memory of our colleague and friend Yong He, who passed away in August 2005 after struggling with illness. D. Ye: Research was supported in part by NSFC (10601048).