21.
Effective production scheduling requires consideration of the dynamics and unpredictability of the manufacturing environment. An automated learning scheme based on genetic search is proposed for adaptive control in typical decentralized factory-floor decision making. A high-level knowledge representation for modeling production environments is developed, with facilities for genetic learning within this scheme. A multiagent framework is used, with individual agents responsible for dispatch decision making at different workstations. Learning is carried out with respect to stated objectives, and given the diversity of scheduling goals, the efficacy of the learning scheme is judged by its response under different objectives. The behavior of the genetic learning scheme is analyzed, and simulation studies compare how learning under different objectives affects aggregate measures of system performance.
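A minimal sketch of genetic search over dispatching-rule assignments, one rule per workstation agent. The rule set, the toy fitness surrogate, and the GA settings are all illustrative assumptions, not the paper's knowledge representation.

```python
# Sketch: genetic search over dispatching-rule assignments, one rule per
# workstation.  The rule set and the random "expected tardiness" surrogate
# are illustrative assumptions, not the paper's representation.
import random

RULES = ["SPT", "EDD", "FIFO", "LPT"]    # candidate dispatch rules
N_STATIONS = 6                           # one agent per workstation

random.seed(0)
# Toy surrogate: a fixed random cost per (station, rule) combination.
COST = {(s, r): random.uniform(0, 10) for s in range(N_STATIONS) for r in RULES}

def fitness(chrom):
    # Lower total expected tardiness -> higher fitness.
    return -sum(COST[(s, r)] for s, r in enumerate(chrom))

def mutate(chrom, p=0.1):
    return [random.choice(RULES) if random.random() < p else g for g in chrom]

def crossover(a, b):
    cut = random.randrange(1, N_STATIONS)
    return a[:cut] + b[cut:]

pop = [[random.choice(RULES) for _ in range(N_STATIONS)] for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                  # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
print("best rule assignment:", max(pop, key=fitness))
```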
22.
We present a novel methodology for comprehensive statistical analysis of approximately periodic biosignal data. There are two main challenges in such analysis: (1) the automatic extraction (segmentation) of cycles from long, cyclostationary biosignals and (2) the subsequent statistical analysis, which in many cases involves separating temporal and amplitude variabilities. The proposed framework provides a principled approach to the statistical analysis of such signals, which in turn allows for an efficient cycle segmentation algorithm. This is achieved using a convenient representation of functions called the square-root velocity function (SRVF). The segmented cycles, represented by SRVFs, are temporally aligned using the notion of the Karcher mean, yielding more efficient statistical summaries of the signals. We show the strengths of this method through various disease classification experiments. In the case of myocardial infarction detection and localization, we show that our method compares favorably to methods described in the current literature.
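The SRVF itself is simple to compute: q(t) = f'(t)/sqrt(|f'(t)|). A minimal sketch on a sampled cycle, where the finite-difference derivative and the epsilon guard are our implementation choices, not the paper's.

```python
# Sketch: the square-root velocity function of a sampled signal,
# q(t) = f'(t) / sqrt(|f'(t)|).  The gradient discretization and the
# epsilon guard are implementation assumptions.
import numpy as np

def srvf(f, t):
    df = np.gradient(f, t)                     # finite-difference derivative
    return df / np.sqrt(np.abs(df) + 1e-12)    # guard against df == 0

t = np.linspace(0, 1, 200)
cycle = np.sin(2 * np.pi * t)                  # stand-in for one biosignal cycle
q = srvf(cycle, t)
# Under this representation, the L2 distance between SRVFs behaves well
# under time warping, which is what makes Karcher-mean alignment feasible.
print(q[:5])
```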
23.
This article gives a method for obtaining accurate (5 decimal places) estimates of nine common cumulative distributions. Starting with a positive series expansion, we use the ratio of each term to the preceding term and proceed as with a geometric series (the ratio may involve the term number). This avoids calculating terms whose numerator or denominator can be large enough to overflow, or small enough to underflow, the machine. The method is fast because it eliminates the need to calculate each term of the series in its entirety.
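The term-ratio idea is concrete enough to illustrate with one of the nine distributions, the Poisson CDF: successive series terms satisfy t_{k+1} = t_k · λ/(k+1), so no factorial or large power is ever formed explicitly. The fixed stopping point used here is our simplification of the article's convergence handling.

```python
# Sketch of the term-ratio idea for the Poisson CDF P(X <= n): each term
# exp(-lam) * lam^k / k! is obtained from the previous one via the ratio
# lam / (k + 1), avoiding factorials and large powers entirely.
import math

def poisson_cdf(n, lam):
    term = math.exp(-lam)       # k = 0 term
    total = term
    for k in range(n):
        term *= lam / (k + 1)   # ratio of term k+1 to term k
        total += term
    return total

print(poisson_cdf(10, 4.0))    # approx. 0.99716
```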
24.
When incomplete repeated failure times are collected from a large number of independent individuals, interest focuses primarily on consistent and efficient estimation of the effects of the associated covariates on the failure times. Since repeated failure times are likely to be correlated, it is important to exploit the correlation structure of the failure data in order to obtain such estimates. However, it may be difficult to specify an appropriate correlation structure for a real-life data set. We propose a robust correlation structure that can be used irrespective of the true correlation structure. This structure is used in constructing an estimating equation for the hazard ratio parameter, under the assumption that the number of repeated failure times for an individual is random. The consistency and efficiency of the estimates are examined through a simulation study, in which failure times marginally follow an exponential distribution and a Poisson distribution is assumed for the random number of repeated failure times. We conclude by using the proposed method to analyze a bladder cancer dataset.
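A simulation sketch of the stated setup: a Poisson number of repeated failure times per subject, marginally exponential and correlated within subject. Inducing the correlation through a Gaussian copula with exchangeable structure is purely our device for the sketch; the paper's point is that its robust structure works without knowing the true dependence.

```python
# Sketch of the abstract's simulation setup.  The Gaussian copula used to
# induce within-subject correlation is OUR assumption; it preserves the
# exponential marginals the abstract specifies.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_subject(rate=1.0, rho=0.5, mean_repeats=3.0):
    m = max(1, rng.poisson(mean_repeats))               # random cluster size
    cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)  # exchangeable corr.
    z = rng.multivariate_normal(np.zeros(m), cov)
    u = norm.cdf(z)                                     # uniform marginals
    return -np.log1p(-u) / rate                         # exponential marginals

data = [simulate_subject() for _ in range(500)]
print("marginal mean, should be near 1/rate:", np.concatenate(data).mean())
```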
25.
The interpretation of Cpk, a common measure of process capability, and of confidence limits for it, is based on the assumption that the process is normally distributed. The nonparametric but computer-intensive method called the bootstrap is introduced, and three bootstrap confidence interval estimates for Cpk are defined. An initial simulation of two processes (one normal and the other highly skewed) is presented and discussed.
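A minimal sketch of one bootstrap interval for Cpk, the percentile interval, assuming made-up spec limits and a skewed process like the one in the study; the article defines three bootstrap intervals, and this shows only one common variant.

```python
# Sketch: percentile-bootstrap confidence interval for Cpk.  The spec
# limits LSL/USL and the gamma "process" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def cpk(x, lsl, usl):
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

x = rng.gamma(2.0, 1.0, size=100)          # a highly skewed process
LSL, USL = 0.0, 8.0
boot = np.array([cpk(rng.choice(x, size=x.size, replace=True), LSL, USL)
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Cpk = {cpk(x, LSL, USL):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```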
26.
Recently, several new applications of control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed the use of control chart statistics obtained by scaling the quality characteristic by target values or by process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive "false alarm rate" or insensitivity to the detection of moderate shifts in the process. To correct for these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend an alternative control chart procedure based upon a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short run control chart procedure based upon the second-order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing great flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic that can be used to provide running estimates of process capability. The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second-order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
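The monitoring idea can be sketched with a local linear trend (second-order) Kalman filter whose standardized one-step-ahead forecast errors serve as the control statistic. All variances, priors, and the 3-sigma signal rule below are our assumptions; the paper's sequential Bayesian updating of model and chart parameters is not reproduced here.

```python
# Sketch: local linear trend Kalman filter monitored through standardized
# one-step-ahead forecast errors.  Noise variances, priors, and the
# 3-sigma rule are illustrative assumptions.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # level + slope transition
H = np.array([[1.0, 0.0]])               # we observe the level
Q = np.diag([0.01, 0.001])               # state noise (assumed)
R = 1.0                                  # observation noise (assumed)

m = np.zeros(2)                          # prior state mean
P = np.eye(2) * 10.0                     # diffuse prior covariance

rng = np.random.default_rng(3)
ys = np.concatenate([rng.normal(0, 1, 15),                       # in control
                     5 + 0.3 * np.arange(15) + rng.normal(0, 1, 15)])  # drift

for t, y in enumerate(ys):
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    f = (H @ m_pred).item()              # one-step-ahead forecast
    s2 = (H @ P_pred @ H.T).item() + R   # forecast variance
    e = (y - f) / np.sqrt(s2)            # standardized forecast error
    if abs(e) > 3:
        print(f"t={t}: signal, standardized error {e:+.2f}")
    K = P_pred @ H.T / s2                # Kalman gain
    m = m_pred + (K * (y - f)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred
```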
27.
A complete two-way cross-classification design is not practical in many settings. For example, in a toxicological study where 30 male rats are mated with 30 female rats and each mating outcome (successful or unsuccessful) is observed, time and resource considerations can make the complete design prohibitively costly. Partially structured variations of this design are therefore of interest (e.g., the balanced disjoint rectangle design, the fully diagonal design, and the "S"-design). Methodology for analyzing binary data from such incomplete designs is illustrated with an example. This methodology, which is based on infinite population sampling arguments, allows estimation of the mean response, the among-row correlation coefficient, the among-column correlation coefficient, and the within-cell correlation coefficient, as well as their standard errors.
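The partial layouts are easiest to see as 0/1 incidence matrices over the 30 x 30 (male, female) grid. A minimal sketch, assuming a block size of 5 for the rectangle design; the exact layout of the "S"-design varies, so it is left out here.

```python
# Sketch: observation patterns for two of the partial designs named above,
# as 0/1 incidence matrices over the (male, female) grid.  Block size b=5
# is an arbitrary choice; the "S"-design layout is omitted.
import numpy as np

n, b = 30, 5                              # 30 animals, blocks of 5

# Balanced disjoint rectangle design: disjoint b x b blocks on the
# diagonal; every mating inside a block is observed.
rect = np.zeros((n, n), dtype=int)
for k in range(0, n, b):
    rect[k:k + b, k:k + b] = 1

# Fully diagonal design: each male is mated to exactly one female.
diag = np.eye(n, dtype=int)

print("matings observed, rectangle design:", rect.sum())   # 150 of 900
print("matings observed, diagonal design :", diag.sum())   # 30 of 900
```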
28.
29.
This article introduces a new model of trend inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. In an empirical exercise with CPI inflation, we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model. This article has supplementary materials online.
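A minimal simulation sketch of the model's two ingredients, assuming illustrative parameter values and fixed bounds: the trend follows a random walk with truncated-normal increments that keep it inside [0, 5], and the transitory component is an AR(1) whose persistence itself drifts. This simulates the model; it does not reproduce the paper's estimation.

```python
# Sketch: bounded trend (truncated-normal random walk) plus a transitory
# AR(1) with time-varying persistence.  All parameter values are assumed.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(42)
T, lo, hi = 200, 0.0, 5.0            # trend bounded to [0%, 5%]
sig_tau, sig_eps = 0.15, 0.5

tau = np.empty(T); tau[0] = 2.0      # trend inflation
rho = np.empty(T); rho[0] = 0.7      # time-varying persistence
c = np.zeros(T)                      # transitory component

for t in range(1, T):
    a, b = (lo - tau[t-1]) / sig_tau, (hi - tau[t-1]) / sig_tau
    tau[t] = tau[t-1] + sig_tau * truncnorm.rvs(a, b, random_state=rng)
    rho[t] = np.clip(rho[t-1] + rng.normal(0, 0.02), 0.0, 0.95)
    c[t] = rho[t] * c[t-1] + rng.normal(0, sig_eps)

inflation = tau + c
print("trend stays in bounds:", tau.min() >= lo and tau.max() <= hi)
```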
30.
A comparison between the two-sample t test and Satterthwaite's approximate F test is made, assuming the choice between these two tests is based on a preliminary test on the variances. Exact formulas for the sizes and powers of the tests are derived. Sizes and powers are then calculated and compared for several situations.
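The two-stage procedure is explicit enough to simulate. A Monte Carlo sketch of its size under unequal variances, assuming arbitrary sample sizes and a variance ratio of 4; the article itself derives the sizes and powers exactly rather than by simulation.

```python
# Sketch: Monte Carlo size of the two-stage procedure, where a preliminary
# F test on the variances chooses between the pooled t test and the
# Welch-Satterthwaite test.  Sample sizes and variances are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, alpha = 10, 15, 0.05
reps, reject = 10_000, 0
for _ in range(reps):
    x = rng.normal(0, 1, n1)
    y = rng.normal(0, 2, n2)           # unequal variances, equal means (H0)
    f = x.var(ddof=1) / y.var(ddof=1)
    p_var = 2 * min(stats.f.cdf(f, n1 - 1, n2 - 1),
                    stats.f.sf(f, n1 - 1, n2 - 1))
    equal_var = p_var > alpha          # variances "not different" -> pool
    p = stats.ttest_ind(x, y, equal_var=equal_var).pvalue
    reject += p < alpha
print("estimated size of the two-stage test:", reject / reps)
```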