Full-text access type
Paid full text | 6,231 articles |
Free | 205 articles |
Subject classification
Management | 886 articles |
Ethnology | 36 articles |
Talent studies | 2 articles |
Demography | 610 articles |
Collected works | 48 articles |
Theory and methodology | 767 articles |
General | 54 articles |
Sociology | 3,300 articles |
Statistics | 733 articles |
Publication year
2023 | 31 articles |
2021 | 26 articles |
2020 | 111 articles |
2019 | 131 articles |
2018 | 121 articles |
2017 | 178 articles |
2016 | 179 articles |
2015 | 124 articles |
2014 | 172 articles |
2013 | 1,054 articles |
2012 | 188 articles |
2011 | 184 articles |
2010 | 151 articles |
2009 | 158 articles |
2008 | 188 articles |
2007 | 184 articles |
2006 | 166 articles |
2005 | 211 articles |
2004 | 233 articles |
2003 | 185 articles |
2002 | 207 articles |
2001 | 155 articles |
2000 | 120 articles |
1999 | 134 articles |
1998 | 118 articles |
1997 | 102 articles |
1996 | 95 articles |
1995 | 99 articles |
1994 | 95 articles |
1993 | 104 articles |
1992 | 79 articles |
1991 | 89 articles |
1990 | 61 articles |
1989 | 65 articles |
1988 | 71 articles |
1987 | 58 articles |
1986 | 51 articles |
1985 | 62 articles |
1984 | 59 articles |
1983 | 58 articles |
1982 | 61 articles |
1981 | 49 articles |
1980 | 48 articles |
1979 | 47 articles |
1978 | 44 articles |
1977 | 45 articles |
1976 | 46 articles |
1975 | 44 articles |
1974 | 33 articles |
1973 | 29 articles |
Sorted by: 6,436 results found (search time: 7 ms)
901.
John R. Koza, Statistics and Computing, 1994, 4(2): 87-112
Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In genetic programming, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. Genetic programming is illustrated via an example of machine learning of the Boolean 11-multiplexer function and symbolic regression of the econometric exchange equation from noisy empirical data. Hierarchical automatic function definition enables genetic programming to define potentially useful functions automatically and dynamically during a run, much as a human programmer writing a complex computer program creates subroutines (procedures, functions) to perform groups of steps which must be performed with different instantiations of the dummy variables (formal parameters) in more than one place in the main program. Hierarchical automatic function definition is illustrated via the machine learning of the Boolean 11-parity function.
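The two core ideas of the abstract above, programs represented as expression trees and a crossover operator that mates them, can be sketched minimally. This is an illustrative toy, not Koza's full system: the function set, the tuple encoding, and the helper names are all assumptions made for the example.

```python
import random

# Toy genetic-programming representation: a program is either the
# terminal "x", a numeric constant, or a tuple (op, child1, child2).
FUNCS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def evaluate(tree, x):
    """Run a program tree on input x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def subtrees(tree, path=()):
    """Yield (path, subtree) for every node in the tree."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at path swapped for new."""
    if not path:
        return new
    lst = list(tree)
    lst[path[0]] = replace(lst[path[0]], path[1:], new)
    return tuple(lst)

def crossover(p1, p2, rng):
    """Graft a random subtree of p2 onto a random point of p1."""
    path1, _ = rng.choice(list(subtrees(p1)))
    _, donor = rng.choice(list(subtrees(p2)))
    return replace(p1, path1, donor)

rng = random.Random(0)
parent1 = ("add", ("mul", "x", "x"), 1)   # x*x + 1
parent2 = ("mul", ("add", "x", 2), "x")   # (x + 2)*x
child = crossover(parent1, parent2, rng)
```

A full run would embed this crossover in a fitness-driven selection loop; the offspring here is always another valid, evaluable program.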
902.
The contract manufacturing industry has grown rapidly in recent years as firms have increasingly outsourced production to reduce costs. This growth has created powerful contract manufacturers (CMs) in several industries. Achieving a competitive cost position is often a primary motive for outsourcing. Outsourcing influences both the original equipment manufacturer's (OEM) and the CM's production levels, and, therefore, through learning‐by‐doing renders future costs dependent on past outsourcing decisions. As such, outsourcing should not be viewed as a static decision that, once made, is not revisited. We address these considerations by analyzing a two‐period game between an OEM and a powerful CM wherein both firms can reduce their production costs through learning‐by‐doing. We find that partial outsourcing, wherein the OEM simultaneously outsources and produces in‐house, can be an optimal strategy. Also, we find that the OEM's outsourcing strategy may be dynamic—i.e., change from period to period. In addition, we find both that the OEM may engage in production for leverage (i.e., produce internally when at a cost disadvantage) and that the CM may engage in low balling. These and other findings in this paper demonstrate the importance of considering learning, the power of the CM, and future periods when making outsourcing decisions.
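The learning-by-doing mechanism the abstract relies on can be made concrete with a standard Wright-style learning curve. This is a hypothetical illustration, not the paper's model: the cost function, the 80% learning rate, and the 40/60 volume split are all assumptions for the example.

```python
import math

def learning_curve_cost(first_unit_cost, cumulative_units, learning_rate=0.8):
    """Wright-style learning curve: unit cost is multiplied by
    `learning_rate` each time cumulative production doubles."""
    if cumulative_units < 1:
        return first_unit_cost
    b = math.log(learning_rate, 2)   # b < 0, so cost falls with volume
    return first_unit_cost * cumulative_units ** b

# Partial outsourcing in period 1: the OEM keeps 40 of 100 units
# in-house and outsources 60 to the CM. Each party's period-2 unit
# cost then depends on its own period-1 cumulative volume.
oem_cost_p2 = learning_curve_cost(10.0, 40)
cm_cost_p2 = learning_curve_cost(10.0, 60)
```

Because the CM accumulates more volume, its period-2 cost is lower, which is exactly why today's outsourcing split feeds back into tomorrow's cost positions.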
903.
Wilks’ ratio statistic can be defined in terms of the ratio of the sample generalized variances of two non-independent estimators of the same covariance matrix. Recently this statistic has been proposed as a control statistic for monitoring changes in the covariance matrix of a multivariate normal process in a Phase II situation, particularly when the dimension is larger than the sample size. In this article we derive a technique for decomposing Wilks’ ratio statistic into the product of independent factors that can be associated with the components of the covariance matrix. With these results, we demonstrate that, when a signal is detected in a control procedure for the Phase II monitoring of process variability using the ratio statistic, the signaling value can be decomposed and the process variables contributing to the signal can be specifically identified.
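A minimal numeric sketch of the generalized-variance ratio idea, not the article's exact Phase II statistic: the generalized variance of a sample is the determinant of its covariance matrix, and a Wilks-type ratio compares the determinants of two related covariance estimators (here, one from historical data alone and one that also folds in new observations; the data values are made up for illustration).

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov2(xs, ys):
    """Sample covariance matrix (2x2) of paired observations."""
    mx, my = mean(xs), mean(ys)
    n = len(xs) - 1
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return [[sxx, sxy], [sxy, syy]]

def det2(m):
    """Determinant of a 2x2 matrix = generalized variance for p = 2."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Historical (Phase I) sample vs. historical + new (Phase II) sample.
hist_x, hist_y = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]
new_x, new_y = [2.5, 3.5], [2.4, 3.6]

s_hist = cov2(hist_x, hist_y)
s_all = cov2(hist_x + new_x, hist_y + new_y)
wilks_ratio = det2(s_all) / det2(s_hist)
```

The article's contribution is to factor such a ratio into independent components so that a signaling value can be traced back to specific variables.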
904.
An internal pilot with interim analysis (IPIA) design combines interim power analysis (an internal pilot) with interim data analysis (two-stage group sequential). We provide IPIA methods for single-df hypotheses within the Gaussian general linear model, including one- and two-group t tests. The design allows early stopping for efficacy and futility while also re-estimating sample size based on an interim variance estimate. Study planning in small samples requires the exact and computable forms reported here. The formulation gives fast and accurate calculations of power, Type I error rate, and expected sample size.
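The "internal pilot" step, re-estimating sample size from an interim variance estimate, can be sketched with the familiar normal-approximation formula for a two-group comparison. This is only an illustrative sketch; the article's exact small-sample computations are more involved, and the numeric inputs below are assumed values.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma2, delta, alpha=0.05, power=0.9):
    """Normal-approximation sample size per group for detecting a mean
    difference `delta` with variance `sigma2` in a two-group t test."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * sigma2 * (za + zb) ** 2 / delta ** 2)

# Planning stage uses a guessed variance; the internal pilot replaces
# it with the interim estimate and recomputes the required n.
planned = n_per_group(sigma2=4.0, delta=1.0)       # design-stage guess
reestimated = n_per_group(sigma2=6.5, delta=1.0)   # interim variance estimate
```

If the interim variance estimate exceeds the planning value, the re-estimated sample size grows accordingly; the group sequential component then adds the early-stopping boundaries on top of this.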
905.
A control procedure is presented in this article that is based on jointly using two separate control statistics in the detection and interpretation of signals in a multivariate normal process. The procedure detects the following three situations: (i) a mean vector shift without a shift in the covariance matrix; (ii) a shift in process variation (covariance matrix) without a mean vector shift; and (iii) both a simultaneous shift in the mean vector and covariance matrix as the result of a change in the parameters of some key process variables. It is shown that, following the occurrence of a signal on either of the separate control charts, the values from both of the corresponding signaling statistics can be decomposed into interpretable elements. Viewing the two decompositions together helps one to specifically identify the individual components and associated variables that are being affected. These components may include individual means or variances of the process variables as well as the correlations between or among variables. An industrial data set is used to illustrate the procedure.
906.
Statements that are inherently multiplicative have historically been justified using ratios of random variables. Although recent work on ratios has extended the classical theory to produce confidence bounds conditioned on a positive denominator, this current article offers a novel perspective that eliminates the need for such a condition. Although seemingly trivial, this new perspective leads to improved lower confidence bounds to support multiplicative statements. This perspective is also more satisfying as it allows comparisons that are inherently multiplicative in nature to be properly analyzed as such.
907.
John Tuhao Chen, Communications in Statistics - Theory and Methods, 2013, 42(11): 3397-3409
Holm's step-down testing procedure starts with the smallest p-value and sequentially screens larger p-values without any information on confidence intervals. This article changes the conventional step-down testing framework by presenting a nonparametric procedure that starts with the largest p-value and sequentially screens smaller p-values in a step-by-step manner to construct a set of simultaneous confidence sets. We use a partitioning approach to prove that the new procedure controls the simultaneous confidence level (thus strongly controlling the familywise error rate). Discernible features of the new stepwise procedure include consistency with individual inference, coherence, and confidence estimations for follow-up investigations. In a simple simulation study, the proposed procedure (treated as a testing procedure) is more powerful than Holm's procedure when the correlation coefficient is large, and vice versa when it is small. In the data analysis of a medical study, the new procedure is able to detect the efficacy of Aspirin as a cardiovascular prophylaxis in a nonparametric setting.
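The classical Holm step-down procedure that the article takes as its point of departure can be sketched directly: sort the p-values ascending, compare the i-th smallest against alpha / (m - i + 1), and stop at the first failure. The function name and example p-values below are assumptions for illustration.

```python
def holm_rejections(pvalues, alpha=0.05):
    """Holm step-down: returns a reject/retain flag per hypothesis,
    controlling the familywise error rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = [False] * m
    for rank, idx in enumerate(order):
        if pvalues[idx] <= alpha / (m - rank):   # alpha/m, alpha/(m-1), ...
            rejected[idx] = True
        else:
            break   # step-down: first failure retains all larger p-values
    return rejected

pvals = [0.001, 0.04, 0.012, 0.3]
flags = holm_rejections(pvals)
```

With these p-values, 0.001 and 0.012 clear their thresholds (0.05/4 and 0.05/3), while 0.04 fails against 0.05/2, so it and 0.3 are retained. The article's procedure reverses the traversal direction, starting from the largest p-value, in order to yield simultaneous confidence sets as well.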
908.
John P. Small, Communications in Statistics - Theory and Methods, 2013, 42(9): 3907-3916
This paper considers the point optimal tests for AR(1) errors in the linear regression model. It is shown that these tests have the same limiting power characteristics as the Durbin-Watson test. The limiting power is zero or one when the regression has no intercept, but lies strictly between these values when an intercept is included.
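The Durbin-Watson statistic referenced above is computed directly from regression residuals as d = Σ(e_t − e_{t−1})² / Σe_t², with values near 2 indicating no first-order autocorrelation. The residual sequences below are made up to illustrate the two regimes.

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: near 2 means no AR(1) autocorrelation,
    near 0 positive autocorrelation, near 4 negative autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]   # negatively autocorrelated
persistent = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]     # positively autocorrelated
```

The alternating residuals push d above 2 (toward 4), while the persistent ones push it well below 2 (toward 0), which is the pattern the point optimal tests and the Durbin-Watson test both exploit.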
909.
A discrete distribution in which the probabilities are expressible as Laguerre polynomials is formulated in terms of a probability generating function involving three parameters. The skewness and kurtosis are given for members of the family corresponding to various parameter values. Several estimators of the parameters are proposed, including some based on minimum chi-square. All the estimators are compared on the basis of asymptotic relative efficiency.
910.
D.S. St John, S.P. Bailey, W.H. Fellner, J.M. Minor, R.D. Snee, E.I. du Pont de, Communications in Statistics - Theory and Methods, 2013, 42(12): 1293-1333
Time series analyses of monthly average total ozone measured at 37 stations throughout the world were used to estimate the extent to which the average ozone trend correlates with the depletion curve hypothesized as due to chlorofluorocarbons (CFCs). Statistical characteristics of stations in the ensemble were used to help define appropriate model and station selection criteria. The maximum likelihood procedure developed herein estimates the weighted average trend, its variance, and the intra- and inter-station variance components of the trend. Correlations among trends at different stations are also taken into account. The models were subjected to much checking and criticism. Variations in statistical methodology are used to show that the results are insensitive to details of the model selection criteria. The method does not discriminate well between the hypothesized CFC trend and a linear trend. The trend estimates represent the sum of all long-term global effects. The variance includes all effects that differ from station to station. The estimated trend and 2σ limits for 14 stations with 20-year records (1958-79) is an ozone increase through 1979 of (1.5 ± 1.0) percent. At the 23 stations with shorter records, the trend is (1.0 ± 1.7) percent. It is concluded that no significant depletion in stratospheric ozone has occurred from any cause through the end of 1979.
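One ingredient of the analysis above, combining per-station trend estimates into a weighted average with its variance, can be sketched with standard inverse-variance (precision) weighting. This is a hedged simplification: the article's maximum likelihood procedure additionally models inter-station correlation, and the trend values below are invented for illustration.

```python
def weighted_average_trend(trends, variances):
    """Inverse-variance weighted average of station trend estimates,
    returning the combined estimate and its variance (which assumes
    independent stations, unlike the article's full procedure)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    avg = sum(w * t for w, t in zip(weights, trends)) / total
    return avg, 1.0 / total

trends = [1.2, 1.8, 0.9, 1.6]       # percent change per station (illustrative)
variances = [0.4, 0.9, 0.3, 0.6]    # estimation variance per station
avg, var = weighted_average_trend(trends, variances)
```

Precision weighting pulls the combined estimate toward the better-measured stations and always yields a smaller variance than any single station's estimate.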