951.
The use of covariates in block designs is necessary when the covariates cannot be controlled in the way the blocking factor is controlled in the experiment. In this paper, we consider the situation where there is some flexibility in selecting the values of the covariates. The choice of covariate values that attains minimum variance for the estimate of each parameter in a given block design has attracted attention in recent times. Optimum covariate designs have already been considered for simple set-ups such as the completely randomised design (CRD), the randomised block design (RBD) and some series of balanced incomplete block designs (BIBD). In this paper, optimum covariate designs are considered for the more complex set-ups of various partially balanced incomplete block (PBIB) designs, which are popular among practitioners. The optimum covariate designs depend heavily on the method of construction of the underlying PBIB designs. Combinatorial arrangements and tools such as orthogonal arrays, Hadamard matrices and matrix products, notably the Khatri–Rao and Kronecker products, are used to construct optimum covariate designs with as many covariates as possible.
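As a concrete illustration of the kind of combinatorial tool this abstract mentions, the sketch below builds a Sylvester-type Hadamard matrix by repeated Kronecker products and verifies its defining orthogonality property. It is a minimal, self-contained illustration of one ingredient, not the paper's construction of covariate designs:

    import numpy as np

    def sylvester_hadamard(k):
        """Hadamard matrix of order 2**k via repeated Kronecker products."""
        H = np.array([[1]])
        block = np.array([[1, 1], [1, -1]])
        for _ in range(k):
            H = np.kron(H, block)
        return H

    H = sylvester_hadamard(3)  # order-8 Hadamard matrix
    n = H.shape[0]
    # Defining property of a Hadamard matrix: H H' = n I
    assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))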
952.
In quantitative trait linkage studies using experimental crosses, the conventional normal location-shift model or other parameterizations may be unnecessarily restrictive. We generalize the mapping problem to a genuine nonparametric setup and provide a robust estimation procedure for the situation where the underlying phenotype distributions are completely unspecified. Classical Wilcoxon–Mann–Whitney statistics are employed for point and interval estimation of QTL positions and effects.
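For intuition, here is a minimal sketch (not the paper's procedure) of the Wilcoxon–Mann–Whitney statistic applied to phenotype values split by genotype at a putative QTL; the simulated phenotypes, group sizes and shift effect are invented for illustration:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    # Hypothetical phenotypes for two genotype groups at a marker,
    # differing by a shift but otherwise arbitrary in distribution.
    pheno_aa = rng.gumbel(loc=0.0, scale=1.0, size=120)
    pheno_ab = rng.gumbel(loc=0.6, scale=1.0, size=130)

    # Rank-based test: no normality assumption on the phenotypes.
    stat, pval = mannwhitneyu(pheno_ab, pheno_aa, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {pval:.4g}")

    # Hodges-Lehmann point estimate of the shift effect:
    diffs = pheno_ab[:, None] - pheno_aa[None, :]
    print("HL shift estimate:", np.median(diffs))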
953.
A supersaturated design is a design whose run size is too small to estimate all the main effects. It is commonly used in screening experiments, where the goal is to identify a sparse set of dominant active factors at low cost. In this paper, we study a variable selection method based on the Dantzig selector, proposed by Candes and Tao [2007. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics 35, 2313–2351], to screen important effects. A graphical procedure and an automated procedure are suggested to accompany the method. Simulations show that the method performs well compared with existing methods in the literature and is more efficient at estimating the model size.
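The Dantzig selector itself solves the linear program min ||b||_1 subject to ||X'(y - Xb)||_inf <= lam. The sketch below, with an invented toy design and a hand-picked lam, shows one standard way to cast it for a generic LP solver by splitting b into positive and negative parts; it illustrates the estimator, not the paper's screening procedures:

    import numpy as np
    from scipy.optimize import linprog

    def dantzig_selector(X, y, lam):
        """min ||b||_1  s.t.  ||X'(y - Xb)||_inf <= lam, via b = u - v, u,v >= 0."""
        n, p = X.shape
        G, Xy = X.T @ X, X.T @ y
        c = np.ones(2 * p)                           # objective: sum(u) + sum(v)
        A = np.vstack([np.hstack([G, -G]),           #  X'X b - X'y <= lam
                       np.hstack([-G, G])])          # -X'X b + X'y <= lam
        b = np.concatenate([lam + Xy, lam - Xy])
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
        u, v = res.x[:p], res.x[p:]
        return u - v

    rng = np.random.default_rng(1)
    X = rng.choice([-1.0, 1.0], size=(12, 20))       # toy supersaturated design
    beta = np.zeros(20); beta[[2, 7]] = [3.0, -2.0]  # two active factors
    y = X @ beta + rng.normal(scale=0.5, size=12)
    print(np.round(dantzig_selector(X, y, lam=8.0), 2))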
954.
Aristidis K. Nikoloulopoulos & Dimitris Karlis, Journal of Statistical Planning and Inference, 2009, 139(11): 203
A new family of copulas is introduced that provides a flexible dependence structure while remaining tractable and simple to use for multivariate discrete data modeling. The construction exploits finite mixtures of uncorrelated normal distributions, so that within each mixture component the cumulative distribution function is simply a product of univariate normal distribution functions. At the same time, however, the mixing operation introduces association. The properties of the new family of copulas are examined, and a concrete application is used to show its applicability.
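To make the construction concrete, the sketch below (with invented mixture weights and parameters) evaluates a bivariate CDF of the form F(x, y) = sum_k w_k Phi((x - mu_k1)/sd_k1) Phi((y - mu_k2)/sd_k2): each component factorizes because its normals are uncorrelated, while the mixing induces dependence. This is a schematic of the idea, not the paper's copula family:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical 2-component mixture: weights, means, sds per coordinate.
    w  = np.array([0.4, 0.6])
    mu = np.array([[0.0, 0.0], [2.0, 3.0]])
    sd = np.array([[1.0, 1.0], [0.5, 1.5]])

    def mixture_cdf(x, y):
        """Joint CDF: each component factorizes (uncorrelated normals)."""
        return sum(w[k] * norm.cdf(x, mu[k, 0], sd[k, 0])
                        * norm.cdf(y, mu[k, 1], sd[k, 1]) for k in range(2))

    def marginal_cdf(x, j):
        return sum(w[k] * norm.cdf(x, mu[k, j], sd[k, j]) for k in range(2))

    # Mixing induces association: joint CDF differs from product of marginals.
    x, y = 1.0, 1.0
    print(mixture_cdf(x, y), marginal_cdf(x, 0) * marginal_cdf(y, 1))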
955.
Most studies of quality improvement deal with ordered categorical data from industrial experiments. Accounting for the ordering of such data plays an important role in effectively determining the optimal factor-level combination. This paper uses correspondence analysis to develop a procedure, based on Taguchi's statistic, for improving an ordered categorical response in a multifactor system. The procedure should appeal to practitioners because it rests on a simple and popular statistical tool for graphically identifying the truly important factors and the levels that improve process quality. A case study on optimizing the polysilicon deposition process in a very large-scale integrated circuit demonstrates the effectiveness of the proposed procedure.
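As background on the underlying tool, the sketch below performs a basic correspondence analysis of a small invented factor-by-response-category contingency table via the SVD of standardized residuals. The paper's Taguchi-statistic-based procedure is more specific, so treat this only as a reminder of the generic technique:

    import numpy as np

    # Hypothetical counts: rows = factor settings, cols = ordered quality grades.
    N = np.array([[20.0, 30.0, 10.0],
                  [10.0, 25.0, 35.0],
                  [ 5.0, 15.0, 40.0]])

    P = N / N.sum()                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)   # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)

    row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal row coordinates
    print("inertia per axis:", np.round(sv**2, 4))
    print("row coordinates:\n", np.round(row_coords, 3))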
956.
Using relative utility curves to evaluate risk prediction
Stuart G. Baker, Nancy R. Cook, Andrew Vickers & Barnett S. Kramer, Journal of the Royal Statistical Society: Series A (Statistics in Society), 2009, 172(4): 729-748
Because many medical decisions are based on risk prediction models constructed from medical history and test results, the evaluation of such models is important. This paper makes five contributions to that evaluation: (i) the relative utility curve, which gauges the potential for better prediction in terms of utilities, needs no reference level for one utility, and provides a sensitivity analysis for misspecification of utilities; (ii) the relevant region, the set of values of prediction performance consistent with the recommended treatment status in the absence of prediction; (iii) the test threshold, the minimum number of tests that would be traded for a true positive prediction in order for the expected utility to be non-negative; (iv) the evaluation of two-stage predictions that reduce test costs; and (v) connections between various measures of prediction performance. An application involving the risk of cardiovascular disease is discussed.
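Relative utility curves are built from the same ingredients as decision-analytic net benefit. The sketch below computes, for an invented set of predicted risks and outcomes, the net benefit NB(t) = TP/n - (t/(1 - t)) * FP/n across risk thresholds t, one standard utility-based summary of a risk model; it is background for the idea, not the authors' relative utility curve itself:

    import numpy as np

    def net_benefit(risk, outcome, thresholds):
        """Net benefit of treating patients whose predicted risk exceeds t."""
        n = len(outcome)
        nb = []
        for t in thresholds:
            treat = risk >= t
            tp = np.sum(treat & (outcome == 1))   # true positives
            fp = np.sum(treat & (outcome == 0))   # false positives
            nb.append(tp / n - (t / (1 - t)) * fp / n)
        return np.array(nb)

    rng = np.random.default_rng(2)
    risk = rng.beta(2, 5, size=500)         # hypothetical predicted risks
    outcome = rng.binomial(1, risk)         # outcomes consistent with the risks
    ts = np.array([0.05, 0.10, 0.20, 0.30])
    for t, nb in zip(ts, net_benefit(risk, outcome, ts)):
        print(f"t = {t:.2f}: net benefit = {nb:.4f}")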
957.
Plotting log−log survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with this technique is that the covariates need to be categorical, or made categorical through appropriate grouping of continuous covariates. Other limitations include the subjectivity of decisions based on eye-judgement of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log−log survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with those of similar tests through simulation experiments. The simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. The test is illustrated on the widely used gastric cancer data.
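For reference, the graphical check the paper starts from can be sketched as follows: compute a Kaplan–Meier survival estimate per covariate category and compare log(−log S(t)) curves, which should be roughly parallel under proportional hazards. The groups, sample sizes and censoring scheme below are invented, and this shows only the classical plot-based check, not the paper's numerical test:

    import numpy as np

    def kaplan_meier(time, event):
        """Kaplan-Meier survival estimate at each distinct event time."""
        order = np.argsort(time)
        time, event = time[order], event[order]
        s, surv, times = 1.0, [], []
        for t in np.unique(time[event == 1]):
            at_risk = np.sum(time >= t)
            deaths = np.sum((time == t) & (event == 1))
            s *= 1.0 - deaths / at_risk
            times.append(t); surv.append(s)
        return np.array(times), np.array(surv)

    rng = np.random.default_rng(3)
    for label, scale in [("group A", 10.0), ("group B", 6.0)]:
        t = rng.exponential(scale, size=80)        # hypothetical survival times
        e = rng.random(80) > 0.2                   # ~20% randomly censored
        times, surv = kaplan_meier(t, e)
        cloglog = np.log(-np.log(surv[surv > 0]))  # log(-log S(t))
        print(label, "first cloglog values:", np.round(cloglog[:3], 3))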
958.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ₁’ or ‘θ < θ₁’, where θ₁ is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ₁ is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ₁. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ₁, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ₁, strictly exceed θ₁ except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ₁, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
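A minimal sketch of the idea, with invented data and a plain sample mean standing in for θ̂ (the paper's proposal is more refined): instead of the hard-thresholded estimate max(θ̂, θ₁), average the constrained estimate over bootstrap resamples, which responds smoothly to the data and typically stays strictly above θ₁:

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(loc=0.15, scale=1.0, size=50)   # hypothetical sample
    theta1 = 0.0                                   # lower bound, e.g. positivity

    def estimator(sample):
        return sample.mean()                       # stand-in for theta-hat

    # Conventional hard constraint: non-smooth in the data.
    hard = max(estimator(x), theta1)

    # Bagged version: average the constrained estimator over resamples.
    B = 2000
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(B)])
    bagged = np.mean(np.maximum(boot, theta1))

    print(f"hard-constrained: {hard:.4f}, bagged: {bagged:.4f}")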
959.
Christopher R. Heathcote, Borek D. Puza & Steven P. Roberts, Australian & New Zealand Journal of Statistics, 2009, 51(4): 481-497
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
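The mixture mechanism can be explored numerically. For Gompertz sub-populations with hazard h_i(t) = a_i exp(b_i t) and survivor function S_i(t) = exp((a_i/b_i)(1 - exp(b_i t))), the population hazard is h(t) = sum_i p_i h_i(t) S_i(t) / sum_i p_i S_i(t). The sketch below, with invented weights and parameters for three groups, evaluates this hazard over a grid of ages so one can look for a decelerate-then-accelerate pattern of the kind the paper describes:

    import numpy as np

    # Hypothetical three-group Gompertz mixture: weights, baselines a, slopes b.
    p = np.array([0.90, 0.09, 0.01])
    a = np.array([1e-4, 1e-3, 5e-3])
    b = np.array([0.11, 0.08, 0.05])

    def pop_hazard(t):
        """Population hazard of the Gompertz mixture at age(s) t."""
        t = np.atleast_1d(t)[:, None]
        h = a * np.exp(b * t)                        # sub-population hazards
        S = np.exp((a / b) * (1.0 - np.exp(b * t)))  # sub-population survivors
        return (p * h * S).sum(axis=1) / (p * S).sum(axis=1)

    ages = np.arange(60, 111, 5)
    for age, hz in zip(ages, pop_hazard(ages)):
        print(f"age {age}: hazard {hz:.4f}")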
960.
Several models for studies of the tensile strength of materials have been proposed in the literature in which the size or length component is taken to be an important factor in studying specimens' failure behaviour. An important model, developed on the basis of a cumulative damage approach, is the three-parameter extension of the Birnbaum–Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and outperforms traditional models that do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent alternatives, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
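For orientation, the standard two-parameter Birnbaum–Saunders model has CDF F(t) = Phi((1/alpha)(sqrt(t/beta) - sqrt(beta/t))) for t > 0, with shape alpha and scale beta; the paper's models extend this with a size/length variable. The sketch below, with invented parameters and simulated data, evaluates the log-likelihood of the basic model, the kind of building block an MCMC-based Bayesian analysis would use:

    import numpy as np
    from scipy.stats import norm

    def bs_logpdf(t, alpha, beta):
        """Log-density of the two-parameter Birnbaum-Saunders distribution."""
        xi = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha    # argument of Phi
        dxi = (np.sqrt(t / beta) + np.sqrt(beta / t)) / (2 * alpha * t)
        return norm.logpdf(xi) + np.log(dxi)

    rng = np.random.default_rng(5)
    # Simulate BS(alpha, beta) via its normal representation:
    alpha, beta = 0.5, 100.0
    z = rng.normal(size=200)
    t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2

    print("log-likelihood at true params:", bs_logpdf(t, alpha, beta).sum())
    print("log-likelihood at wrong beta :", bs_logpdf(t, alpha, 80.0).sum())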