Full-text access type (number of articles)
Subscription full text | 959 |
Free | 22 |
Subject classification (number of articles)
Management | 171 |
Ethnology | 4 |
Talent studies | 1 |
Demography | 53 |
Book series and collected works | 10 |
Theory and methodology | 153 |
General | 4 |
Sociology | 482 |
Statistics | 103 |
Publication year (number of articles)
2023 | 6 |
2021 | 4 |
2020 | 19 |
2019 | 10 |
2018 | 13 |
2017 | 29 |
2016 | 27 |
2015 | 19 |
2014 | 21 |
2013 | 151 |
2012 | 40 |
2011 | 30 |
2010 | 33 |
2009 | 22 |
2008 | 31 |
2007 | 26 |
2006 | 43 |
2005 | 27 |
2004 | 35 |
2003 | 37 |
2002 | 26 |
2001 | 14 |
2000 | 19 |
1999 | 19 |
1998 | 21 |
1997 | 15 |
1996 | 18 |
1995 | 12 |
1994 | 20 |
1993 | 21 |
1992 | 17 |
1991 | 6 |
1990 | 14 |
1989 | 13 |
1988 | 12 |
1987 | 11 |
1986 | 6 |
1985 | 12 |
1984 | 11 |
1983 | 12 |
1982 | 3 |
1981 | 12 |
1980 | 4 |
1979 | 8 |
1978 | 3 |
1977 | 6 |
1976 | 6 |
1975 | 3 |
1973 | 5 |
1971 | 3 |
A total of 981 results were found.
21.
Gary Hoppenstand, Journal of Popular Culture, 2011, 44(5): 909-910
22.
Gary Hoppenstand, Journal of Popular Culture, 2011, 44(4): 679-680
23.
Effective production scheduling requires consideration of the dynamics and unpredictability of the manufacturing environment. An automated learning scheme, utilizing genetic search, is proposed for adaptive control in typical decentralized factory-floor decision making. A high-level knowledge representation for modeling production environments is developed, with facilities for genetic learning within this scheme. A multiagent framework is used, with individual agents responsible for dispatch decision making at different workstations. Learning is with respect to stated objectives, and given the diversity of scheduling goals, the efficacy of the designed learning scheme is judged through its response under different objectives. The behavior of the genetic learning scheme is analyzed, and simulation studies compare how learning under different objectives affects aggregate measures of system performance.
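To make the idea concrete, here is a minimal, hedged sketch of genetic search over per-workstation dispatch rules; the rule set, the tardiness-based fitness proxy, and all parameters are illustrative assumptions rather than the authors' system.

```python
# Minimal sketch (not the authors' system): genetic search over per-workstation
# dispatch rules, scored by a crude tardiness proxy. All names and parameters
# below are illustrative assumptions.
import random

RULES = ["SPT", "EDD", "FIFO"]          # candidate dispatch rules per workstation
N_STATIONS = 3                          # chromosome length: one rule per station

def fitness(chromosome, jobs):
    """Very rough proxy for schedule quality: total tardiness when each
    station processes jobs in the order given by its rule. A real study
    would use a discrete-event simulation of the shop."""
    tardiness = 0.0
    for station, rule in enumerate(chromosome):
        if rule == "SPT":
            order = sorted(jobs, key=lambda j: j["proc"][station])
        elif rule == "EDD":
            order = sorted(jobs, key=lambda j: j["due"])
        else:  # FIFO
            order = jobs
        clock = 0.0
        for job in order:
            clock += job["proc"][station]
            tardiness += max(0.0, clock - job["due"])
    return tardiness

def evolve(jobs, pop_size=20, generations=30, mut_rate=0.1):
    pop = [[random.choice(RULES) for _ in range(N_STATIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, jobs))           # lower tardiness is fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STATIONS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:                 # mutate one station's rule
                child[random.randrange(N_STATIONS)] = random.choice(RULES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, jobs))

if __name__ == "__main__":
    random.seed(0)
    jobs = [{"proc": [random.uniform(1, 5) for _ in range(N_STATIONS)],
             "due": random.uniform(5, 20)} for _ in range(8)]
    print("best rule set:", evolve(jobs))
```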
24.
Sebastian Kurtek, Wei Wu, Gary E. Christensen, Anuj Srivastava, Journal of Applied Statistics, 2013, 40(6): 1270-1288
We present a novel methodology for a comprehensive statistical analysis of approximately periodic biosignal data. There are two main challenges in such analysis: (1) the automatic extraction (segmentation) of cycles from long, cyclostationary biosignals and (2) the subsequent statistical analysis, which in many cases involves the separation of temporal and amplitude variabilities. The proposed framework provides a principled approach for statistical analysis of such signals, which in turn allows for an efficient cycle segmentation algorithm. This is achieved using a convenient representation of functions called the square-root velocity function (SRVF). The segmented cycles, represented by SRVFs, are temporally aligned using the notion of the Karcher mean, which allows for more efficient statistical summaries of signals. We show the strengths of this method through various disease classification experiments. In the case of myocardial infarction detection and localization, our method compares favorably to methods described in the current literature.
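As an illustration of the representation involved, the following is a small sketch of computing the square-root velocity function for a sampled signal; the finite-difference gradient and the toy sine-wave "cycle" are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the square-root velocity function (SRVF) representation:
# q(t) = f'(t) / sqrt(|f'(t)|). The uniform grid and numerical gradient are
# simplifying assumptions.
import numpy as np

def srvf(f, t):
    """Map a sampled function f(t) to its SRVF q(t)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)
    f = np.sin(t)                      # one "cycle" of a toy periodic biosignal
    q = srvf(f, t)
    # Under the SRVF representation, distances between signals behave well under
    # time warping, which is what makes alignment via the Karcher mean tractable.
    print(q[:5])
```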
25.
Gary Tietjen, The American Statistician, 2013, 67(3): 263-265
This article gives a method for obtaining accurate (to 5 decimal places) estimates of nine common cumulative distributions. Starting with a positive series expansion, we use the ratio of each term to the preceding term and proceed as with a geometric series (the ratio may involve the term number). This avoids calculating terms in the numerator or denominator that can be large enough to overflow, or small enough to underflow, the machine. The method is fast because it eliminates the necessity of calculating each term of the series in its entirety.
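The term-ratio idea can be illustrated with the Poisson CDF, whose consecutive series terms differ by the factor lambda/i; the sketch below is a hedged illustration of that principle, not the article's exact algorithm for the nine distributions.

```python
# Hedged illustration of the term-ratio idea: the Poisson CDF is summed by
# updating each term from its ratio to the preceding term (lambda / i), so no
# factorial or large power is ever formed explicitly.
import math

def poisson_cdf(k, lam):
    term = math.exp(-lam)          # first term: P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i            # ratio of consecutive series terms
        total += term
    return total

if __name__ == "__main__":
    print(poisson_cdf(10, 4.0))    # approximately 0.997
```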
26.
When incomplete repeated failure times are collected from a large number of independent individuals, interest is focused primarily on the consistent and efficient estimation of the effects of the associated covariates on the failure times. Since repeated failure times are likely to be correlated, it is important to exploit the correlation structure of the failure data in order to obtain such consistent and efficient estimates. However, it may be difficult to specify an appropriate correlation structure for a real-life data set. We propose a robust correlation structure that can be used irrespective of the true correlation structure. This structure is used in constructing an estimating equation for the hazard ratio parameter, under the assumption that the number of repeated failure times for an individual is random. The consistency and efficiency of the estimates are examined through a simulation study, where we consider failure times that marginally follow an exponential distribution, with a Poisson distribution assumed for the random number of repeated failure times. We conclude by using the proposed method to analyze a bladder cancer dataset.
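A minimal data-generation sketch of the simulation setup described above (a Poisson number of repeated failure times per individual) might look as follows; the shared gamma frailty used here to induce within-subject correlation is an illustrative assumption and makes the marginals only approximately exponential, so this is not the article's simulation design.

```python
# Toy data generator: each individual contributes a Poisson-distributed number
# of failure times; a shared gamma frailty induces within-subject correlation
# (an illustrative assumption, not the article's exact setup).
import numpy as np

rng = np.random.default_rng(0)

def simulate_individual(base_rate=1.0, mean_repeats=3.0):
    m = rng.poisson(mean_repeats)                 # random number of repeated failures
    frailty = rng.gamma(shape=2.0, scale=0.5)     # shared frailty for this individual
    return rng.exponential(1.0 / (base_rate * frailty), size=m)

if __name__ == "__main__":
    data = [simulate_individual() for _ in range(5)]
    for i, times in enumerate(data):
        print(f"individual {i}: {np.round(times, 2)}")
```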
27.
The interpretation of Cpk, a common measure of process capability, and of confidence limits for it is based on the assumption that the process is normally distributed. The non-parametric but computer-intensive method called the bootstrap is introduced, and three bootstrap confidence interval estimates for Cpk are defined. An initial simulation of two processes (one normal and the other highly skewed) is presented and discussed.
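A hedged sketch of one such interval, a simple percentile bootstrap for Cpk, is shown below; the sample, specification limits, and number of resamples are illustrative assumptions, and the article itself defines three bootstrap interval variants.

```python
# Minimal sketch of a percentile bootstrap confidence interval for Cpk.
# Specification limits, sample size, and resample count are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def cpk(x, lsl, usl):
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def bootstrap_ci(x, lsl, usl, n_boot=2000, alpha=0.05):
    stats = np.array([cpk(rng.choice(x, size=x.size, replace=True), lsl, usl)
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

if __name__ == "__main__":
    sample = rng.normal(loc=10.0, scale=1.0, size=50)   # in-control, normal process
    lo, hi = bootstrap_ci(sample, lsl=7.0, usl=13.0)
    print(f"Cpk = {cpk(sample, 7.0, 13.0):.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")
```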
28.
Recently, several new applications of control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed the use of control chart statistics obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive false alarm rate or insensitivity to the detection of moderate shifts in the process. To correct for these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend the use of an alternative control chart procedure based upon a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short run control chart procedure based upon the second-order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic that can be used to provide running estimates of process capability. The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second-order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
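The sketch below illustrates the general idea of monitoring standardized one-step forecast errors from a second-order (local linear trend) dynamic linear model updated by a Kalman filter; the noise variances, diffuse prior, and 3-sigma limit are illustrative assumptions, not the article's Bayesian formulation.

```python
# Hedged sketch: second-order (local linear trend) DLM updated by a Kalman
# filter, with standardized one-step forecast errors as the monitored statistic.
import numpy as np

def dlm_trend_monitor(y, obs_var=1.0, level_var=0.1, slope_var=0.01, limit=3.0):
    F = np.array([1.0, 0.0])                       # observation vector
    G = np.array([[1.0, 1.0], [0.0, 1.0]])         # local linear trend transition
    W = np.diag([level_var, slope_var])            # state evolution covariance
    m = np.zeros(2)                                # posterior mean (level, slope)
    C = np.eye(2) * 1e4                            # diffuse prior covariance
    signals = []
    for t, obs in enumerate(y):
        a = G @ m                                  # prior state mean
        R = G @ C @ G.T + W                        # prior state covariance
        f = F @ a                                  # one-step forecast
        Q = F @ R @ F + obs_var                    # forecast variance
        e = (obs - f) / np.sqrt(Q)                 # standardized forecast error
        if abs(e) > limit:
            signals.append(t)
        K = R @ F / Q                              # Kalman gain
        m = a + K * (obs - f)
        C = R - np.outer(K, F) @ R
    return signals

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    y = rng.normal(0, 1, 30)
    y[20:] += 4.0                                  # sustained shift at t = 20
    print("out-of-control signals at:", dlm_trend_monitor(y))
```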
29.
A complete two-way cross-classification design is not practical in many settings. For example, in a toxicological study where 30 male rats are mated with 30 female rats and each mating outcome (successful or unsuccessful) is observed, time and resource considerations can make the use of the complete design prohibitively costly. Partially structured variations of this design are therefore of interest (e.g., the balanced disjoint rectangle design, the fully diagonal design, and the "S"-design). Methodology for analyzing binary data from such incomplete designs is illustrated with an example. This methodology, which is based on infinite population sampling arguments, allows the estimation of the mean response, the among-row correlation coefficient, the among-column correlation coefficient, and the within-cell correlation coefficient, as well as their standard errors.
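A toy sketch of the design idea only: binary mating outcomes generated with male (row) and female (female) random effects, observed under a block-diagonal "balanced disjoint rectangle" pattern; effect sizes and the crude summary below are assumptions for illustration and do not reproduce the article's correlation estimators.

```python
# Toy illustration of a balanced disjoint rectangle design: only block-diagonal
# cells of the full 30 x 30 male x female cross are actually observed.
import numpy as np

rng = np.random.default_rng(3)
n, block = 30, 5                                  # 30 males x 30 females, 5 x 5 blocks

row_eff = rng.normal(0, 0.5, n)                   # male (row) random effects
col_eff = rng.normal(0, 0.5, n)                   # female (column) random effects
prob = 1 / (1 + np.exp(-(0.3 + row_eff[:, None] + col_eff[None, :])))
y = rng.binomial(1, prob)                         # full (mostly unobserved) outcomes

mask = np.zeros((n, n), dtype=bool)               # balanced disjoint rectangle design
for k in range(0, n, block):
    mask[k:k + block, k:k + block] = True

observed = np.where(mask, y, np.nan)
p_hat = np.nanmean(observed)                      # estimated mean response
print(f"observed cells: {mask.sum()} of {n * n}, mean response: {p_hat:.3f}")
```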
30.