761.
This article is addressed to those interested in how Bayesian approaches can be brought to bear on research and development planning and management issues. It provides a conceptual framework for estimating the value of information to environmental policy decisions. The methodology is applied to assess the expected value of research concerning the effects of acidic deposition on forests. Calculating the expected value of research requires modeling the possible actions of policymakers under conditions of uncertainty. Information is potentially valuable only if it leads to actions that differ from those that would be taken without the information. The relevant issue is how research on forest effects would change choices of emissions controls from those that would be made in the absence of such research. The approach taken is to model information with a likelihood function embedded in a decision tree describing possible policy options. The value of information is then calculated as a function of information accuracy. The results illustrate how accurate the information must be to have an impact on the choice of policy options; they also illustrate situations in which additional research can have a negative value.
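A minimal numeric sketch of this calculation, for a hypothetical two-state, two-action emissions decision: a symmetric likelihood with accuracy p plays the role of the research signal, and the value of information is the expected gain from acting on the signal rather than on the prior alone. All payoffs and prior probabilities below are invented for illustration.

```python
import numpy as np

# Hypothetical payoffs (rows: actions, columns: states); values are
# illustrative, not taken from the article.
payoff = np.array([[ -4.0, -1.0],   # strict emissions controls
                   [-10.0,  0.0]])  # status quo
prior = np.array([0.3, 0.7])        # P(forest damage), P(no damage)

def value_without_info():
    # Best single action under the prior alone.
    return max(payoff @ prior)

def value_with_info(accuracy):
    # Research emits a binary signal with P(signal = true state) = accuracy:
    # a symmetric likelihood function embedded in the decision tree.
    like = np.array([[accuracy, 1 - accuracy],
                     [1 - accuracy, accuracy]])   # P(signal s | state)
    value = 0.0
    for s in range(2):
        p_s = like[s] @ prior                     # marginal P(signal = s)
        posterior = like[s] * prior / p_s         # Bayes update
        value += p_s * max(payoff @ posterior)    # act optimally per signal
    return value

for acc in (0.5, 0.7, 0.9, 0.99):
    gross = value_with_info(acc) - value_without_info()
    print(f"accuracy = {acc:.2f}: gross value of research = {gross:+.3f}")
```

At accuracy 0.5 the signal never changes the chosen action, so its gross value is zero; the net value of research is negative whenever its cost exceeds the gross value, which is one way the negative values described above can arise.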
762.
Time series sometimes consist of count data in which the number of events occurring in a given time interval is recorded. Such data are necessarily nonnegative integers, and an assumption of a Poisson or negative binomial distribution is often appropriate. This article sets up a model in which the level of the process generating the observations changes over time. A recursion analogous to the Kalman filter is used to construct the likelihood function and to make predictions of future observations. Qualitative variables, based on a binomial or multinomial distribution, may be handled in a similar way. The model for count data may be extended to include explanatory variables. This enables nonstochastic slope and seasonal components to be included in the model, as well as permitting intervention analysis. The techniques are illustrated with a number of applications, and an attempt is made to develop a model-selection strategy along the lines of that used for Gaussian structural time series models. The applications include an analysis of the results of international football matches played between England and Scotland and an assessment of the effect of the British seat-belt law on the drivers of light-goods vehicles.
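One concrete filter in this spirit is the conjugate gamma-Poisson local-level recursion, in which the gamma posterior for the level is discounted each period by a hyperparameter ω and the one-step-ahead predictive is negative binomial. The sketch below illustrates that style of recursion, not necessarily the article's exact formulation; the toy count series and the values of ω are invented.

```python
import numpy as np
from scipy.special import gammaln

def gamma_poisson_filter(y, omega, a0=1.0, b0=1.0):
    """Local-level filter for Poisson counts with discount factor omega.

    The gamma posterior (a, b) for the level is discounted each step, so
    the one-step-ahead predictive is negative binomial.  Returns the
    predictive log-likelihood (used to estimate omega) and the filtered
    level at the end of the series.
    """
    a, b = a0, b0
    loglik = 0.0
    for yt in y:
        a, b = omega * a, omega * b                    # discount the posterior
        p = b / (1.0 + b)                              # NB success probability
        loglik += (gammaln(a + yt) - gammaln(a) - gammaln(yt + 1)
                   + a * np.log(p) + yt * np.log(1 - p))
        a, b = a + yt, b + 1.0                         # conjugate update
    return loglik, a / b

y = np.array([2, 0, 1, 3, 2, 4, 1, 0, 2, 3])           # invented count series
for omega in (0.7, 0.9, 0.99):
    ll, level = gamma_poisson_filter(y, omega)
    print(f"omega={omega:.2f}  predictive loglik={ll:.3f}  level={level:.3f}")
```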
763.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser series are best construed as trend stationary with a change in the trend function occurring in 1929.
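The Laplace step itself is easy to illustrate numerically. The sketch below is a generic version, using a BFGS inverse-Hessian estimate as a stand-in for the analytic curvature derived in the paper; the closing check integrates a standard Gaussian kernel, whose exact log-integral is log √(2π) ≈ 0.9189.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(neg_log_post, theta0):
    """Laplace approximation to the log of the integral of
    exp(-neg_log_post(theta)) over theta.

    The integrand is replaced by a Gaussian centred at the mode; the BFGS
    inverse-Hessian estimate stands in for the analytic curvature, so this
    is only a rough numerical counterpart of the paper's derivations.
    """
    res = minimize(neg_log_post, theta0, method="BFGS")
    k = res.x.size
    _, logdet_hinv = np.linalg.slogdet(res.hess_inv)
    return -res.fun + 0.5 * k * np.log(2 * np.pi) + 0.5 * logdet_hinv

# Check on a Gaussian kernel: both lines should print about 0.9189.
print(laplace_log_integral(lambda th: 0.5 * th[0] ** 2, np.array([1.0])))
print(0.5 * np.log(2 * np.pi))
```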
764.
In this paper, we study the effect of estimating the mean vector and the variance–covariance matrix on the performance of two of the most widely used multivariate cumulative sum (CUSUM) control charts: the MCUSUM chart proposed by Crosier [Multivariate generalizations of cumulative sum quality-control schemes, Technometrics 30 (1988), pp. 291–303] and the MC1 chart proposed by Pignatiello and Runger [Comparisons of multivariate CUSUM charts, J. Qual. Technol. 22 (1990), pp. 173–186]. Using simulation, we investigate and compare the in-control and out-of-control performance of the competing charts in terms of the average run length (ARL). Both in-control and out-of-control performance deteriorate significantly if the estimated parameters are used with control limits intended for known parameters, especially when only a few Phase I samples are used to estimate the parameters. We recommend the use of the MC1 chart over the MCUSUM chart if the parameters are estimated from a small number of Phase I samples.
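The simulation comparison can be sketched as follows for Crosier's MCUSUM: the cumulative-sum vector is shrunk toward zero by the reference value k and the chart signals when its norm exceeds h, and the in-control ARL is estimated by Monte Carlo once with the true mean and once with a mean estimated from a small Phase I sample. The design constants k and h, the dimension, and the sample sizes below are illustrative, not the calibrated values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def crosier_mcusum_rl(mu0, sigma, data_mean, k=0.5, h=5.5, max_n=100_000):
    """Run length of Crosier's MCUSUM on one simulated path.

    mu0 is the in-control mean the chart is built around (possibly an
    estimate); data_mean is the mean actually generating the data.
    """
    sig_inv = np.linalg.inv(sigma)
    s = np.zeros(len(mu0))
    for n in range(1, max_n + 1):
        x = rng.multivariate_normal(data_mean, sigma)
        d = s + x - mu0                                # tentative cumulative sum
        c = np.sqrt(d @ sig_inv @ d)
        s = np.zeros(len(mu0)) if c <= k else d * (1 - k / c)  # shrink toward 0
        if np.sqrt(s @ sig_inv @ s) > h:               # statistic exceeds limit
            return n
    return max_n

mu, sigma = np.zeros(2), np.eye(2)
reps = 200
arl_known = np.mean([crosier_mcusum_rl(mu, sigma, mu) for _ in range(reps)])
# Mean estimated from m = 25 Phase I samples, a fresh estimate per replication.
arl_est = np.mean([crosier_mcusum_rl(rng.multivariate_normal(mu, sigma / 25),
                                     sigma, mu) for _ in range(reps)])
print(f"in-control ARL, known mean: {arl_known:.0f}; estimated mean: {arl_est:.0f}")
```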
765.
We propose statistical tools for diagnosing the class of generalized Weibull linear regression models [A.A. Prudente and G.M. Cordeiro, Generalized Weibull linear models, Comm. Statist. Theory Methods 39 (2010), pp. 3739–3755]. This class of models is an alternative means of analysing positive, continuous and skewed data and, owing to its statistical properties, is very competitive with gamma regression models. First, we show that the Weibull model induces maximum likelihood estimators asymptotically more efficient than those of the gamma model. Standardized residuals are defined, and their statistical properties are examined empirically. Some measures are derived based on the case-deletion model, including the generalized Cook's distance and measures for identifying observations that are influential on partial F-tests. The results of a simulation study conducted to assess the behaviour of the global influence approach are also presented. Further, we perform a local influence analysis under the case-weights, response and explanatory-variable perturbation schemes. The Weibull, gamma and other Weibull-type regression models are fitted to three data sets to illustrate the proposed diagnostic tools. Statistical analyses indicate that the Weibull model yields better fits to these data than other common alternative models.
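A sketch of the case-deletion (global influence) computation for a Weibull regression follows, assuming a common log-link parameterization of the scale (not necessarily the exact Prudente-Cordeiro form) and an invented data set; the Cook-type distance measures the shift in the estimates when one case is deleted, scaled by the full-fit curvature.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def nll(params, X, y):
    """Negative log-likelihood of a Weibull regression with a log link on
    the scale parameter (a common form, used here for illustration)."""
    beta, k = params[:-1], np.exp(params[-1])   # k is the Weibull shape
    lam = np.exp(X @ beta)                      # per-observation scale
    z = y / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

# Invented positive, skewed data with one covariate.
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.weibull(2.0, size=n) * np.exp(X @ np.array([0.5, 1.0]))

full = minimize(nll, np.zeros(3), args=(X, y), method="BFGS")
info = np.linalg.inv(full.hess_inv)             # approximate observed information

# Case deletion: refit without case i and measure the shift in the
# estimates against the full-fit curvature.
cook = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    fit_i = minimize(nll, full.x, args=(X[keep], y[keep]), method="BFGS")
    d = fit_i.x - full.x
    cook[i] = d @ info @ d
print("largest Cook-type distances at cases:", np.argsort(cook)[-3:][::-1])
```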
766.
767.
In robust parameter design, variance effects and mean effects in a factorial experiment are modelled simultaneously. If variance effects are present in a model, correlations are induced among the naive estimators of the mean effects. A simple normal quantile plot of the mean effects may therefore be misleading, because the mean effects are no longer i.i.d. under the null hypothesis that they are zero. Adjusted quantiles are computed for the case in which one variance effect is significant, and examples of 8-run and 16-run fractional factorial designs are examined in detail. We find that the usual normal quantiles are similar to the adjusted quantiles for all but the largest and smallest ordered effects, for which they are conservative. Graphically, the qualitative difference between the two sets of quantiles is negligible (even in the presence of large variance effects), and we conclude that normal probability plots are robust in the presence of variance effects.
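Adjusted quantiles of this kind can be reproduced by simulation. The sketch below builds the seven effect estimates of an 8-run design under the null, injects one variance effect on factor A (an illustrative variance ratio of 4), and takes the median of each ordered, standardized estimate; comparing these with the homoscedastic case shows close agreement except at the extremes.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# 2^3 full factorial: 8 runs, 7 orthogonal contrast columns (A, B, C and
# their interactions).
runs = np.array(list(product([-1, 1], repeat=3)))
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
M = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])

def ordered_effect_quantiles(sd_per_run, nsim=20_000):
    """Median of each ordered, standardized effect estimate under the null."""
    noise = sd_per_run * rng.normal(size=(nsim, 8))
    effects = (noise @ M) / 8                      # nsim x 7 null effects
    # Every contrast has the same variance (columns of M are +/-1), so one
    # common standardization applies.
    scale = np.sqrt(np.sum(sd_per_run ** 2)) / 8
    return np.median(np.sort(effects / scale, axis=1), axis=0)

# One active variance effect on factor A (variance ratio 4, illustrative).
adjusted = ordered_effect_quantiles(np.where(A > 0, 2.0, 1.0))
usual = ordered_effect_quantiles(np.ones(8))       # iid case: usual quantiles
print("usual:   ", np.round(usual, 3))
print("adjusted:", np.round(adjusted, 3))
```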
768.
The one-sample Wilcoxon signed rank test was originally designed to test for a specified median, under the assumption that the distribution is symmetric, but it can also serve as a test for symmetry if the median is known. In this article we derive the Wilcoxon statistic as the first component of Pearson's X² statistic for independence in a particularly constructed contingency table. The second and third components are new test statistics for symmetry. In the second part of the article, the Wilcoxon test is extended so that symmetry around the median and symmetry in the tails can be examined separately. A trimming proportion is used to split the observations in the tails from those around the median. We further extend the method so that no arbitrary choice of the trimming proportion has to be made. Finally, the new tests are compared to other tests for symmetry in a simulation study. It is concluded that our tests often have substantially greater power than most other tests.
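The trimming idea can be sketched directly: split the deviations from the known median at the (1 − trim) quantile of their absolute values and apply the signed-rank test separately to the central part and to the tails. This illustrates the idea only; the paper's actual component decomposition of Pearson's X² differs, and the trimming proportion here is fixed rather than chosen adaptively.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)

def split_symmetry_tests(x, median, trim=0.2):
    """Signed-rank tests for symmetry about a KNOWN median, applied
    separately to the centre and to the trimmed-off tails."""
    d = x - median
    cutoff = np.quantile(np.abs(d), 1 - trim)   # split off the extreme `trim`
    centre = d[np.abs(d) <= cutoff]
    tails = d[np.abs(d) > cutoff]
    p_centre = wilcoxon(centre).pvalue
    p_tails = wilcoxon(tails).pvalue if len(tails) >= 6 else np.nan
    return p_centre, p_tails

x_skew = rng.exponential(size=200)              # skewed; true median is log 2
print("exponential:", split_symmetry_tests(x_skew, np.log(2)))
x_sym = rng.normal(size=200)                    # symmetric about 0
print("normal:     ", split_symmetry_tests(x_sym, 0.0))
```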
769.
In this paper, methods are proposed for finding the robust design in both Taguchi and standard setups when a signal factor is present. The robust design is a set of level combinations of the control factors at which the effect of the controllable noise factors on the response is minimal. Both univariate and multivariate methods are used to identify the influential noise factors when determining robust designs.
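A toy crossed-array version of the idea, with one signal factor, two control factors and one noise factor; the response function and all level settings are invented stand-ins for real experimental data. The most robust setting is the one whose response varies least over the noise levels, averaged across signal levels.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

def response(control, noise, signal):
    """Invented stand-in for experimental data: c1 scales the signal
    sensitivity, c2 damps the noise transmission."""
    c1, c2 = control
    return (signal * (1.5 + 0.8 * c1)
            + noise * (0.6 - 0.5 * c2)
            + rng.normal(0.0, 0.1))

controls = list(product([-1, 1], repeat=2))     # inner (control) array
noises, signals = [-1, 1], [1, 2, 3]            # outer array and signal levels

def noise_sensitivity(control):
    # Spread of the response over the noise levels, averaged across signals.
    y = np.array([[response(control, nz, s) for nz in noises] for s in signals])
    return y.std(axis=1).mean()

best = min(controls, key=noise_sensitivity)
print("most robust control setting (c1, c2):", best)
```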
770.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawback of Franklin's estimator, i.e., the occurrence of negative estimates when the population proportion is small. A simulation study is conducted to assess the performance of the proposed estimator as well as the corresponding credible interval.
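A sketch of the contrast between a moment-type estimator and a Bayesian one, assuming a simplified Franklin-type device in which a respondent reports a draw from one of two known normal distributions according to whether they bear the sensitive trait; the grid-based posterior under a uniform prior is confined to [0, 1], so negative estimates cannot occur. All parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Simplified Franklin-type device: a respondent with the sensitive trait
# reports a draw from N(mu1, 1), otherwise from N(mu0, 1); mu0, mu1 known.
mu0, mu1, pi_true, n = 0.0, 2.0, 0.05, 500
trait = rng.random(n) < pi_true
z = rng.normal(np.where(trait, mu1, mu0), 1.0)

# Moment-type estimator: can fall below zero when pi is small.
pi_mm = (z.mean() - mu0) / (mu1 - mu0)

# Uniform prior; posterior evaluated on a grid is proportional to the
# two-component mixture likelihood and never leaves [0, 1].
grid = np.linspace(1e-4, 1 - 1e-4, 2000)
f1, f0 = norm.pdf(z, mu1, 1.0), norm.pdf(z, mu0, 1.0)
loglik = np.log(grid[:, None] * f1 + (1 - grid[:, None]) * f0).sum(axis=1)
post = np.exp(loglik - loglik.max())
dx = grid[1] - grid[0]
post /= post.sum() * dx                         # normalize the density
pi_bayes = (grid * post).sum() * dx             # posterior mean
cdf = np.cumsum(post) * dx
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"moment: {pi_mm:.3f}  posterior mean: {pi_bayes:.3f}  "
      f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```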