111.
The importance of statistically designed experiments in industry has been well recognized. However, the use of 'design of experiments' is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken in training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to facilitate the experimenters' formulation of plans for the collection, analysis and interpretation of empirical information. The expanding role of design of experiments over the past several decades is reviewed, with comparisons of the various formats and contexts in which experimental design has been applied, such as Taguchi methods and Six Sigma. The trend of development shows that, from scientific research to business improvement, the competitive advantage offered by design of experiments is increasingly being felt.
112.
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample and then reweights this sample depending on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fitted into a Bayesian set-up at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Further, we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior-weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier, with only a minimal increase in the computational complexity of the algorithm.
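The Bayesian reformulation is specific to the paper, but the reweighting loop it builds on is standard. Below is a minimal sketch of classic AdaBoost with decision stumps (NumPy only); the function names, toy data and number of rounds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: pick the (feature, threshold, polarity) with lowest weighted error."""
    _, d = X.shape
    best = (np.inf, 0, 0.0, 1)                       # (error, feature, threshold, polarity)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= thr, pol, -pol)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost_stumps(X, y, n_rounds=10):
    """Classic AdaBoost: fit a stump to the weighted sample, then reweight
    the sample according to how successful the classification was."""
    n = len(y)
    w = np.full(n, 1.0 / n)                          # start from uniform weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        err, j, thr, pol = fit_stump(X, y, w)
        err = max(err, 1e-12)                        # guard against a perfect stump
        alpha = 0.5 * np.log((1.0 - err) / err)      # vote of this stump
        pred = np.where(X[:, j] <= thr, pol, -pol)
        w = w * np.exp(-alpha * y * pred)            # upweight misclassified points
        w = w / w.sum()
        stumps.append((j, thr, pol))
        alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    score = np.zeros(len(X))
    for (j, thr, pol), a in zip(stumps, alphas):
        score += a * np.where(X[:, j] <= thr, pol, -pol)
    return np.sign(score)

# Toy usage with labels in {-1, +1}
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost_stumps(X, y, n_rounds=10)
print("training accuracy:", np.mean(predict(X, stumps, alphas) == y))
```

Each round fits a stump to the current weights and then upweights the points it misclassified, which is the reweight-and-refit step the abstract describes; the paper's contribution is to recast that step in Bayesian terms.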
113.
We discuss in the present paper the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood, since the resulting estimators of the parameters in the regression function are more robust to misspecification of the variance function. Despite its popularity, however, existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results on estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties, and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to look in detail at variance minimization in off-line quality control, yielding inference techniques for the optimized design parameter. In the application of some existing approaches to off-line quality control, such as the dual-response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An example of off-line quality control is presented to discuss the practical aspects involved in applying the results obtained and to address issues such as data transformation, model building and the optimization of design parameters. The analysis shows very encouraging results and is seen to be able to unveil some important information not found in previous analyses.
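As an illustration of the kind of iterative scheme involved, the sketch below alternates weighted least squares for the mean parameters with Fisher-scoring updates of a log-linear variance function under a Gaussian working likelihood. The log-linear variance form, the names and the toy data are assumptions made for the example; the paper's own algorithm and asymptotic results are not reproduced here.

```python
import numpy as np

def pseudo_likelihood_fit(X, Z, y, n_outer=20, n_inner=5):
    """
    Iterative pseudo-likelihood fit of a heteroscedastic linear model,
        y = X @ beta + e,   Var(e_i) = exp(Z_i @ theta)   (log-linear variance, an assumed form).
    Alternates weighted least squares for beta with Fisher-scoring updates of theta
    based on the squared residuals.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary least squares start
    theta = np.zeros(Z.shape[1])
    for _ in range(n_outer):
        # 1) weighted least squares for the mean parameters, weights = 1 / variance
        var = np.exp(Z @ theta)
        XtW = X.T / var
        beta = np.linalg.solve(XtW @ X, XtW @ y)
        # 2) pseudo-likelihood step for the variance parameters
        r2 = (y - X @ beta) ** 2
        for _ in range(n_inner):
            var = np.exp(Z @ theta)
            score = Z.T @ (r2 / var - 1.0) / 2.0       # derivative of the working log-likelihood
            info = Z.T @ Z / 2.0                       # expected information
            theta = theta + np.linalg.solve(info, score)
    return beta, theta

# Toy usage with a known answer
rng = np.random.default_rng(1)
n = 500
u = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), u])
Z = X.copy()                                           # variance driven by the same covariate
beta_true, theta_true = np.array([1.0, 2.0]), np.array([-2.0, 3.0])
y = X @ beta_true + rng.normal(scale=np.exp(0.5 * (Z @ theta_true)), size=n)
beta_hat, theta_hat = pseudo_likelihood_fit(X, Z, y)
print("beta estimate:", beta_hat, " theta estimate:", theta_hat)
```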
114.
Kernel-based density estimation algorithms are inefficient in the presence of discontinuities at support endpoints. This is largely because classic kernel density estimators lead to positive estimates beyond the endpoints. If a nonparametric estimate of a density functional is required in determining the bandwidth, then the problem also affects the bandwidth selection procedure. In this paper, algorithms for bandwidth selection and kernel density estimation are proposed for non-negative random variables. Furthermore, the proposed methods are compared with some of the principal solutions in the literature through a simulation study.
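The boundary problem the abstract refers to is easy to demonstrate: a plain Gaussian kernel estimator leaks probability mass below zero and roughly halves the estimate at the origin. The sketch below contrasts it with a simple reflection correction under Silverman's rule-of-thumb bandwidth; this is a textbook fix chosen purely for illustration, not the estimator proposed in the paper.

```python
import numpy as np

def kde_gaussian(grid, data, h):
    """Plain Gaussian kernel density estimate (places mass below 0 as well)."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def kde_reflection(grid, data, h):
    """Reflection estimator for densities supported on [0, inf):
    the sample is reflected about 0 so no probability mass leaks onto the negative axis."""
    return kde_gaussian(grid, data, h) + kde_gaussian(grid, -data, h)

def silverman_bandwidth(data):
    """Silverman's rule of thumb, used here only as a simple reference bandwidth."""
    n = len(data)
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    sigma = min(np.std(data, ddof=1), iqr / 1.349)
    return 0.9 * sigma * n ** (-0.2)

# Toy usage: exponential data, whose density is discontinuous at the endpoint 0
rng = np.random.default_rng(2)
data = rng.exponential(scale=1.0, size=500)
h = silverman_bandwidth(data)
grid = np.linspace(0.0, 4.0, 200)
plain = kde_gaussian(grid, data, h)
refl = kde_reflection(grid, data, h)
print("estimate at x = 0 (true density is 1.0): plain = %.3f, reflected = %.3f" % (plain[0], refl[0]))
```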
115.
Kontkanen, P., Myllymäki, P., Silander, T., Tirri, H. & Grünwald, P. (2000). Statistics and Computing, 10(1), 39-54.
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with an increasing amount of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, which is used in the experiments for computational reasons. Experimentation with several public-domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
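The paper instantiates these predictive distributions for Bayesian network models; a single multinomial variable is already enough to show the contrast between a plug-in predictive and the evidence predictive under the Jeffreys (Dirichlet(1/2)) prior. The sketch below uses hypothetical counts, and the plug-in is taken under a uniform prior so that it reduces to relative frequencies; it therefore assigns zero probability to unseen categories, which is one reason the evidence approach can dominate in log-score.

```python
import numpy as np

def plugin_predictive(counts):
    """Plug-in predictive from point-estimated parameters.
    With a uniform prior the MAP parameters are the relative frequencies,
    so categories unseen in the sample receive probability exactly zero."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def evidence_predictive_jeffreys(counts):
    """Evidence (marginal-likelihood) predictive for a single multinomial under the
    Jeffreys prior Dirichlet(1/2, ..., 1/2):  p(k | data) = (n_k + 1/2) / (n + K/2)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 0.5) / (counts.sum() + 0.5 * len(counts))

def log_score(pred, test_symbols):
    """Sum of log predictive probabilities of held-out symbols (larger is better)."""
    with np.errstate(divide="ignore"):              # log(0) -> -inf without a warning
        return float(np.sum(np.log(pred[test_symbols])))

# Hypothetical counts over 3 categories; category 2 is unseen in training but occurs in the test data
train_counts = np.array([7, 3, 0])
test = np.array([0, 0, 1, 2])
p_plug = plugin_predictive(train_counts)
p_ev = evidence_predictive_jeffreys(train_counts)
print("plug-in :", p_plug, "log-score:", log_score(p_plug, test))   # -inf: zero mass on category 2
print("evidence:", p_ev, "log-score:", log_score(p_ev, test))
```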
116.
A problem arising from the study of the spread of a viral infection among potato plants by aphids appears to involve a mixture of two linear regressions on a single predictor variable. The plant scientists studying the problem were particularly interested in obtaining a 95% confidence upper bound for the infection rate. We briefly discuss the procedure for fitting mixtures of regression models by maximum likelihood, effected via the EM algorithm. We give general expressions for the implementation of the M-step and then address the issue of conducting statistical inference in this context. A technique due to T. A. Louis may be used to estimate the covariance matrix of the parameter estimates by calculating the observed Fisher information matrix, and we develop general expressions for the entries of this information matrix. Having the complete covariance matrix permits the calculation of confidence and prediction bands for the fitted model. We also investigate the testing of hypotheses concerning the number of components in the mixture via parametric and 'semiparametric' bootstrapping. Finally, we develop a method of producing diagnostic plots of the residuals from a mixture of linear regressions.
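A bare-bones version of the EM fit described here, for two component regression lines with component-specific variances, is sketched below. The initialization, the toy data and the fixed iteration count are assumptions made for the example; Louis's information-matrix calculation, the confidence bands and the bootstrap tests discussed in the abstract are not reproduced.

```python
import numpy as np

def normal_pdf(y, mean, sd):
    return np.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def em_mixture_of_regressions(x, y, n_iter=100):
    """
    EM for a two-component mixture of simple linear regressions:
        y | component j ~ Normal(a_j + b_j * x, sigma_j^2),  P(component 1) = pi.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    # initialize responsibilities by splitting on the sign of the OLS residuals
    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    tau = (y - X @ ols > 0).astype(float)
    a, b, sigma, pi = np.zeros(2), np.zeros(2), np.ones(2), 0.5
    for _ in range(n_iter):
        # M-step: weighted least squares within each component
        for j, w in enumerate([tau, 1.0 - tau]):
            XtW = X.T * w
            coef = np.linalg.solve(XtW @ X, XtW @ y)
            a[j], b[j] = coef
            resid = y - X @ coef
            sigma[j] = np.sqrt(np.sum(w * resid ** 2) / np.sum(w))
        pi = tau.mean()
        # E-step: posterior probability that each point came from component 1
        d1 = pi * normal_pdf(y, a[0] + b[0] * x, sigma[0])
        d2 = (1.0 - pi) * normal_pdf(y, a[1] + b[1] * x, sigma[1])
        tau = d1 / (d1 + d2)
    return {"intercepts": a, "slopes": b, "sigmas": sigma, "mixing_prop": pi}

# Toy usage: two regression lines with different slopes, mixed roughly 60/40
rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 10, n)
in_first = rng.random(n) < 0.6
y = np.where(in_first, 1.0 + 2.0 * x, 8.0 - 0.5 * x) + rng.normal(scale=1.0, size=n)
print(em_mixture_of_regressions(x, y))
```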
117.
Factors influencing Soay sheep survival   (Total citations: 4; self-citations: 0; citations by others: 4)
We present a survival analysis of Soay sheep mark-recapture and recovery data. Unlike previous conditional analyses, it is not necessary to assume equality of recovery and recapture probabilities; instead these are estimated by maximum likelihood. Male and female sheep are treated separately, with the higher numbers and survival probabilities of the females resulting in a more complex model than that used for the males. In both cases, however, age and time aspects need to be included, and there is a strong indication of a reduction in survival for sheep aged 7 years or more. Time variation in survival is related to the size of the population and selected weather variables by using logistic regression. The size of the population significantly affects the survival probabilities of male and female lambs, and of female sheep aged 7 or more years. March rainfall and a measure of the North Atlantic oscillation are found to influence survival significantly for all age groups considered, for both males and females; either of these weather variables can be used in a model. Several phenotypic and genotypic individual covariates are also fitted. The only covariate found to influence survival significantly is the horn type of first-year female sheep. There is substantial variation in the recovery probabilities over time, reflecting in part the increased effort when a population crash was expected. The goodness of fit of the model is checked using graphical procedures.
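The full mark-recapture-recovery likelihood is beyond a short example, but the logistic-regression component relating an annual survival probability to population size and a weather index can be sketched on its own. The data below are simulated and purely hypothetical; covariate names and coefficient values are illustrative and are not the study's estimates.

```python
import numpy as np

def expit(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def fit_logistic_survival(Z, survived, at_risk, n_iter=50):
    """
    Binomial logistic regression of annual survival on covariates,
        survived_t ~ Binomial(at_risk_t, expit(Z_t @ beta)),
    fitted by Fisher scoring (iteratively reweighted least squares).
    """
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = expit(Z @ beta)
        score = Z.T @ (survived - at_risk * p)      # gradient of the binomial log-likelihood
        W = at_risk * p * (1.0 - p)                 # working weights
        info = (Z.T * W) @ Z                        # expected information matrix
        beta = beta + np.linalg.solve(info, score)
    return beta

# Hypothetical yearly data: survival declines with population size and with the weather index
rng = np.random.default_rng(4)
T = 25
pop = rng.uniform(600, 2000, T)                     # population size (illustrative values)
nao = rng.normal(0.0, 1.0, T)                       # North Atlantic oscillation index (standardized)
Z = np.column_stack([np.ones(T), (pop - pop.mean()) / pop.std(), nao])
beta_true = np.array([1.5, -0.8, -0.5])
at_risk = rng.integers(80, 200, T)
survived = rng.binomial(at_risk, expit(Z @ beta_true))
print("estimated (intercept, population, NAO) effects:", fit_logistic_survival(Z, survived, at_risk))
```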
118.
The problem of estimating the integral of a squared regression function, and of its squared derivatives, has been addressed in a number of papers. For the case of a heteroscedastic model where the smoothness of the underlying regression function, the design density and the variance of the errors are known, the asymptotically sharp minimax lower bound and a sharp estimator were found in Pastuchova & Khasminski (1989). However, there are apparently no results on either rate-optimal or sharp-optimal adaptive (data-driven) estimation when neither the degree of smoothness of the regression function nor the design density, scale function and distribution of the errors are known. After a brief review of the main developments in non-parametric estimation of non-linear functionals, we suggest a simple adaptive estimator for the integral of a squared regression function and its derivatives, and prove that it is sharp-optimal whenever the estimated derivative is sufficiently smooth.
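The paper's sharp-optimal adaptive estimator is not reproduced here, but the underlying idea of estimating the integral of a squared regression function from noisy data can be illustrated with a simple orthogonal-series construction: estimate cosine-basis coefficients, correct each squared coefficient for noise, and sum over a finite number of terms. The basis, the cut-off rule and the difference-based variance estimate below are assumptions made for the sketch.

```python
import numpy as np

def estimate_integral_f_squared(x, y, J=None):
    """
    Orthogonal-series estimate of  int_0^1 f(t)^2 dt  from noisy data
    y_i = f(x_i) + e_i with a roughly uniform design on [0, 1]:
      1. estimate cosine-basis coefficients theta_j = int f(t) phi_j(t) dt,
      2. subtract the noise contribution to each squared coefficient,
      3. sum over the first J + 1 coefficients.
    """
    n = len(y)
    if J is None:
        J = max(2, int(round(n ** (1.0 / 3.0))))       # crude default cut-off
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    # difference-based (Rice) estimate of the error variance
    sigma2 = np.sum(np.diff(ys) ** 2) / (2.0 * (n - 1))
    total = 0.0
    for j in range(J + 1):
        phi = np.ones(n) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * xs)
        theta_hat = np.mean(ys * phi)                  # sample analogue of int f * phi_j
        noise = np.mean(phi ** 2) * sigma2 / n         # noise contribution to E[theta_hat^2]
        total += theta_hat ** 2 - noise                # bias-corrected squared coefficient
    return total, sigma2

# Toy usage: f(t) = sin(2*pi*t), so the true integral of f^2 is 0.5
rng = np.random.default_rng(5)
n = 2000
x = rng.uniform(0, 1, n)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=n)
est, s2 = estimate_integral_f_squared(x, y)
print("estimated integral of f^2 = %.3f (true 0.5), noise variance = %.3f (true 0.09)" % (est, s2))
```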
119.
In the estimators t3, t4 and t5 of Mukerjee, Rao & Vijayan (1987), byx and byz are partial regression coefficients of y on x and z, respectively, based on the smaller sample. With the above interpretation of byx and byz in t3, t4 and t5, all the calculations in Mukerjee et al. (1987) are correct. In this connection, we also wish to make it explicit that bxz in t5 is an ordinary, not a partial, regression coefficient. The 'corrected' MSEs of t3, t4 and t5, as given in Ahmed (1998, Section 3), are computed assuming that our byx and byz are ordinary and not partial regression coefficients. Indeed, we had no intention of giving estimators using the corresponding ordinary regression coefficients, which would lead to estimators inferior to those given by Kiregyera (1984). We accept responsibility for any notational confusion created by us and express regret to readers who have been confused by our notation. Finally, in consideration of the above, it may be noted that Tripathi & Ahmed's (1995) estimator t0, quoted also in Ahmed (1998), is no better than t5 of Mukerjee et al. (1987).
120.
The estimated effect of any factor can be highly dependent on both the model and the data used for the analyses. This article examines the estimated effect of one factor in two different data sets under three different forms of the standard linear model, using the effect of track placement on achievement as an example. Some relative advantages and disadvantages of each model are considered. The analyses demonstrate that, given collinearity among the predictor variables, a model with a poorer statistical fit may be useful for some interpretive purposes.
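A small simulated example makes the point about collinearity concrete: when the factor of interest is strongly correlated with a covariate, adding the covariate improves fit but changes the estimated effect and inflates its standard error. The data and variable names below are hypothetical and are not the article's.

```python
import numpy as np

def ols_summary(X, y):
    """Ordinary least squares fit returning coefficients, standard errors and R^2."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    cov = s2 * np.linalg.inv(X.T @ X)
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return beta, np.sqrt(np.diag(cov)), r2

# Hypothetical data: "track" placement is strongly related to prior achievement
rng = np.random.default_rng(6)
n = 400
prior = rng.normal(size=n)                                           # prior achievement score
track = (prior + rng.normal(scale=0.5, size=n) > 0).astype(float)    # placement driven by prior score
achieve = 0.3 * track + 0.8 * prior + rng.normal(scale=1.0, size=n)  # outcome

ones = np.ones(n)
models = [("track only", np.column_stack([ones, track])),
          ("track + prior", np.column_stack([ones, track, prior]))]
for name, X in models:
    beta, se, r2 = ols_summary(X, achieve)
    print("%-13s track coefficient = %5.2f (se %.2f), R^2 = %.2f" % (name, beta[1], se[1], r2))
```

In the smaller model the track coefficient absorbs the prior-achievement effect; in the larger model the fit is better but the coefficient of interest changes and its standard error grows, mirroring the trade-off described above.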