131.
George Tzavelas. Australian & New Zealand Journal of Statistics, 1999, 41(4): 431-438
This paper characterizes the family of Normal distributions, within the class of exponential families of distributions, via the structure of the bias of the maximum likelihood estimator Θ̂_n of the canonical parameter Θ. More specifically, when E_Θ(Θ̂_n) - Θ = (1/n)Q(Θ) + o(1/n), the equality Q(Θ) = 0 proves to be a property of the Normal distribution only. The same conclusion is obtained for the one-dimensional case by assuming that Q(Θ) is a polynomial in Θ.
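A small simulation (not from the paper; parameter values assumed for illustration) shows the contrast: for the Normal mean with known variance the MLE is exactly unbiased, so Q(Θ) = 0, while for the Exponential rate the O(1/n) bias term is nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20000

# Normal with known variance: the MLE of the mean is the sample mean,
# which is exactly unbiased, so Q(theta) = 0.
mu = 2.0
bias_normal = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1).mean() - mu

# Exponential with rate lam: the MLE is 1/xbar with E[1/xbar] = n*lam/(n-1),
# so the O(1/n) bias term is roughly lam/n, i.e. Q(lam) = lam != 0.
lam = 1.0
bias_exp = (1.0 / rng.exponential(1.0 / lam, size=(reps, n)).mean(axis=1)).mean() - lam

print(round(bias_normal, 4), round(bias_exp, 4))
```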
132.
The estimated effect of any factor can be highly dependent on both the model and the data used for the analyses. This article illustrates the point by estimating the effect of a single factor, track placement on achievement, in two different data sets under three different forms of the standard linear model. Some relative advantages and disadvantages of each model are considered. The analyses demonstrate that, given collinearity among the predictor variables, a model with a poorer statistical fit may be useful for some interpretive purposes.
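The sensitivity to model form can be sketched numerically; the variables below (ability, track, achievement) and their relationships are simulated for illustration and are not the article's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
ability = rng.normal(size=n)
track = ability + 0.3 * rng.normal(size=n)                 # predictor collinear with ability
achievement = ability + 0.2 * track + rng.normal(size=n)   # true track effect = 0.2

# Model 1: achievement ~ track alone; track absorbs the omitted ability effect.
X1 = np.column_stack([np.ones(n), track])
b1 = np.linalg.lstsq(X1, achievement, rcond=None)[0]

# Model 2: achievement ~ track + ability.
X2 = np.column_stack([np.ones(n), track, ability])
b2 = np.linalg.lstsq(X2, achievement, rcond=None)[0]

print(round(b1[1], 2), round(b2[1], 2))   # track coefficient under each model
```

The track coefficient changes dramatically between the two specifications, even though both are "standard linear models" fit to the same data.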
133.
T. N. Goh. Journal of Applied Statistics, 2001, 28(3): 391-398
The importance of statistically designed experiments in industry has been well recognized. However, the use of 'design of experiments' is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken in training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to facilitate the experimenters' formulation of plans for collection, analysis and interpretation of empirical information. A review is made of the expanding role of design of experiments in the past several decades, with comparisons made of the various formats and contexts of experimental design applications, such as Taguchi methods and Six Sigma. The trend of development shows that, from the realm of scientific research to business improvement, the competitive advantage offered by design of experiments is being increasingly felt.
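As a minimal illustration of a statistically designed experiment, the sketch below builds a 2^3 full factorial in coded (-1/+1) units and estimates main effects by contrasts; the factor names and the response function are hypothetical.

```python
import itertools
import numpy as np

factors = ["temperature", "pressure", "catalyst"]
# 2^3 full factorial design in coded units.
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def run_experiment(x):
    # Hypothetical process: temperature and catalyst matter, pressure does not.
    return 10 + 3 * x[0] + 0 * x[1] + 1.5 * x[2]

y = np.array([run_experiment(row) for row in design])

# Each main effect is the mean response at +1 minus the mean response at -1.
effects = {f: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}
print(effects)
```

Because the design is balanced and orthogonal, each contrast isolates one factor's effect even though all three are varied simultaneously.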
134.
David G. T. Denison. Statistics and Computing, 2001, 11(2): 171-178
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights this sample dependent on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fit into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Further, we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier with only a minimal increase in the computational complexity of the algorithm.
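The underlying boosting loop with decision stumps can be sketched as below, in the standard discrete AdaBoost form rather than the Bayesian variant developed in the paper; the toy data set is illustrative.

```python
import numpy as np

def fit_stump(x, y, w):
    """Exhaustive search for the threshold/sign stump with minimal weighted 0-1 error."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(x):
        for sign in (1, -1):
            pred = np.where(x < t, -sign, sign)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(x, y, rounds):
    w = np.full(len(x), 1.0 / len(x))
    stumps = []
    for _ in range(rounds):
        err, t, sign = fit_stump(x, y, w)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = np.where(x < t, -sign, sign)
        w *= np.exp(-alpha * y * pred)   # upweight the misclassified points
        w /= w.sum()
        stumps.append((alpha, t, sign))
    return stumps

def predict(stumps, x):
    return np.sign(sum(a * np.where(x < t, -s, s) for a, t, s in stumps))

x = np.arange(10, dtype=float)
y = np.where((x < 3) | (x >= 7), -1, 1)   # interval pattern, not separable by one stump
stumps = adaboost(x, y, rounds=3)         # three rounds suffice on this toy set
acc = (predict(stumps, x) == y).mean()
print(acc)
```

No single stump can classify this interval pattern, but the weighted vote of three stumps does, which is the reweighting effect the paper builds on.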
135.
We discuss in the present paper the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood since the resulting estimators of the parameters in the regression function are more robust to misspecification of the variance function. Despite its popularity, however, existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results in estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to look in detail at variance minimization in off-line quality control, yielding inference techniques for the optimized design parameter. In applications of some existing approaches to off-line quality control, such as the dual response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An example of off-line quality control is presented to discuss the practical aspects involved in the application of the results obtained and to address issues such as data transformation, model building and the optimization of design parameters. The analysis shows very encouraging results, and is seen to be able to unveil some important information not found in previous analyses.
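The alternating scheme behind pseudo-likelihood estimation can be sketched as follows, assuming (for illustration only) a variance function proportional to the squared mean and simulated data: weighted least squares for the mean parameters alternates with re-estimation of the variance parameter from squared residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(1.0, 5.0, n)
mean = 1.0 + 2.0 * x                               # true regression function
y = mean + rng.normal(scale=0.5 * mean, size=n)    # sd proportional to the mean

X = np.column_stack([np.ones(n), x])
w = np.ones(n)
for _ in range(10):
    # Weighted least squares for the mean parameters, given current weights.
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    mu = X @ beta
    # Re-estimate the variance-function parameter in sigma = theta * mu.
    theta = np.sqrt(np.mean((y - mu) ** 2 / mu ** 2))
    w = 1.0 / (theta * mu) ** 2

print(np.round(beta, 2), round(theta, 2))
```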
136.
Kernel-based density estimation algorithms are inefficient in the presence of discontinuities at support endpoints. This is substantially due to the fact that classic kernel density estimators lead to positive estimates beyond the endpoints. If a nonparametric estimate of a density functional is required in determining the bandwidth, then the problem also affects the bandwidth selection procedure. In this paper, algorithms for bandwidth selection and kernel density estimation are proposed for non-negative random variables. Furthermore, the methods we propose are compared with some of the principal solutions in the literature through a simulation study.
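The endpoint problem is easy to reproduce. The sketch below compares a classic Gaussian kernel estimator at the support endpoint 0 with a simple reflection correction; reflection is a generic textbook fix used here for illustration, not the paper's proposal.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(1.0, size=2000)       # true density exp(-t) on [0, inf), f(0) = 1

def kde(t, data, h):
    """Gaussian kernel density estimate at the points t."""
    u = (t[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

h = 0.1
t0 = np.array([0.0])
plain_at_0 = kde(t0, x, h)[0]                      # classic estimator: about f(0)/2
refl_at_0 = kde(t0, x, h)[0] + kde(t0, -x, h)[0]   # reflected estimator: about f(0)

print(round(plain_at_0, 2), round(refl_at_0, 2))
```

The plain estimator loses roughly half its mass beyond the endpoint, so it underestimates f(0); the reflected version recovers most of it.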
137.
Doyo G. Enki, Nickolay T. Trendafilov, Ian T. Jolliffe. Journal of Applied Statistics, 2013, 40(3): 583-599
A new method for constructing interpretable principal components is proposed. The method first clusters the variables, and then interpretable (sparse) components are constructed from the correlation matrices of the clustered variables. For the first step of the method, a new weighted-variances method for clustering variables is proposed. It reflects the nature of the problem: the interpretable components should maximize the explained variance and thus provide sparse dimension reduction. An important feature of the new clustering procedure is that the optimal number of clusters (and components) can be determined in a non-subjective manner. The new method is illustrated using well-known simulated and real data sets. It clearly outperforms many existing methods for sparse principal component analysis in terms of both explained variance and sparseness.
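The two-step structure (cluster the variables, then build one sparse component per cluster) can be sketched as below; the greedy correlation-threshold clustering is a simple stand-in for the paper's weighted-variances method, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
f1, f2 = rng.normal(size=(2, n))          # two latent factors
X = np.column_stack([f1 + 0.1 * rng.normal(size=n) for _ in range(3)] +
                    [f2 + 0.1 * rng.normal(size=n) for _ in range(3)])
R = np.corrcoef(X, rowvar=False)

# Step 1: greedy clustering of variables by absolute correlation with a seed variable.
clusters, seeds = [], []
for j in range(R.shape[0]):
    for cluster, s in zip(clusters, seeds):
        if abs(R[j, s]) > 0.7:
            cluster.append(j)
            break
    else:
        clusters.append([j])
        seeds.append(j)

# Step 2: one sparse component per cluster, with nonzero loadings only inside it.
components = []
for cluster in clusters:
    vals, vecs = np.linalg.eigh(R[np.ix_(cluster, cluster)])
    loadings = np.zeros(R.shape[0])
    loadings[cluster] = vecs[:, -1]       # leading eigenvector of the correlation block
    components.append(loadings)

print(clusters)
```

Each component is sparse by construction, since its loadings vanish outside its own cluster of variables.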
138.
We consider Markov-dependent binary sequences and study various types of success runs (overlapping, non-overlapping, exact, etc.) by examining additive functionals based on state visits and transitions in an appropriate Markov chain. We establish a multivariate Central Limit Theorem for the number of these types of runs and obtain its covariance matrix by means of the recurrent potential matrix of the Markov chain. Explicit expressions for the covariance matrix are given in the Bernoulli and a simple Markov-dependent case by expressing the recurrent potential matrix in terms of the stationary distribution and the mean transition times in the chain. We also obtain a multivariate Central Limit Theorem for the joint number of non-overlapping runs of various sizes and give its covariance matrix in explicit form for Markov dependent trials.
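The distinction between the overlapping and non-overlapping counting schemes studied above can be checked on a fixed binary sequence:

```python
# 1 = success, 0 = failure; count success runs of length k = 2.
seq = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
k = 2

# Overlapping: every window of k consecutive successes counts.
overlapping = sum(all(seq[i:i + k]) for i in range(len(seq) - k + 1))

# Non-overlapping: the count restarts after each completed run.
non_overlapping, streak = 0, 0
for s in seq:
    streak = streak + 1 if s else 0
    if streak == k:
        non_overlapping += 1
        streak = 0

print(overlapping, non_overlapping)
```

The first block of four successes contributes three overlapping runs but only two non-overlapping ones, which is why the two statistics have different distributions.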
139.
Nickolay T. Trendafilov, Steffen Unkel, Wojtek Krzanowski. Statistics and Computing, 2013, 23(2): 209-220
Exploratory Factor Analysis (EFA) and Principal Component Analysis (PCA) are popular techniques for simplifying the presentation of, and investigating the structure of, an (n×p) data matrix. However, these fundamentally different techniques are frequently confused, and the differences between them are obscured, because they give similar results in some practical cases. We therefore investigate conditions under which they are expected to be close to each other, by considering EFA as a matrix decomposition so that it can be directly compared with the data matrix decomposition underlying PCA. Correspondingly, we propose an extended version of PCA, called the EFA-like PCA, which mimics the EFA matrix decomposition in the sense that they contain the same unknowns. We provide iterative algorithms for estimating the EFA-like PCA parameters, and derive conditions that have to be satisfied for the two techniques to give similar results. Throughout, we consider separately the cases n>p and p≥n. All derived algorithms and matrix conditions are illustrated on two data sets, one for each of these two cases.
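PCA viewed as a matrix decomposition X ≈ FL', the form in which the paper compares it with EFA, can be sketched via the singular value decomposition (n > p case; low-rank simulated data, with the score/loading scaling an illustrative convention):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, k = 100, 6, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p)) + 0.05 * rng.normal(size=(n, p))
X -= X.mean(axis=0)                       # column-center, as in both techniques

U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = U[:, :k] * np.sqrt(n)                 # component scores
L = (Vt[:k].T * s[:k]) / np.sqrt(n)       # loadings
rel_err = np.linalg.norm(X - F @ L.T) / np.linalg.norm(X)

print(round(rel_err, 3))
```

Because the simulated data are nearly rank two, the truncated decomposition F L' reproduces X up to the small noise term; EFA adds a separate uniqueness (error) structure to this decomposition, which is where the two techniques diverge.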