Sort order: 8,004 results in total; search time 15 ms.
141.
Recently, many standard families of distributions have been generalized by exponentiating their cumulative distribution function (CDF). In this paper, test statistics are constructed from CDF-transformed observations and their moments of arbitrary positive order. Simulation results for generalized exponential distributions show that the proposed test compares well with standard methods based on the empirical distribution function.
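As a hedged illustration of the underlying idea (not the paper's exact statistic): when the null parameters are known, the probability integral transform makes U = F(X) uniform on (0,1), so the a-th sample moment of the transformed observations should be close to 1/(a+1). A hypothetical moment-based check for the generalized exponential CDF F(x) = (1 − e^(−λx))^α might look like this; the parameter values and the sqrt(n) scaling are illustrative choices:

```python
import numpy as np

def ge_cdf(x, alpha, lam):
    """CDF of the generalized (exponentiated) exponential distribution."""
    return (1.0 - np.exp(-lam * x)) ** alpha

def moment_statistic(x, alpha, lam, a=2.0):
    """sqrt(n)-scaled deviation of the a-th moment of U = F(X) from 1/(a+1).

    Under H0 with known parameters, U is Uniform(0,1), so E[U**a] = 1/(a+1);
    a large deviation casts doubt on the hypothesized distribution.
    """
    u = ge_cdf(x, alpha, lam)
    n = len(x)
    return np.sqrt(n) * (np.mean(u ** a) - 1.0 / (a + 1.0))

rng = np.random.default_rng(1)
# Sample from GE(alpha, lam) by inverse CDF: X = -ln(1 - U**(1/alpha)) / lam
alpha, lam = 2.0, 1.5
u = rng.uniform(size=100_000)
x = -np.log(1.0 - u ** (1.0 / alpha)) / lam
t = moment_statistic(x, alpha, lam, a=2.0)   # small under H0
```

In practice the parameters would be estimated rather than known, which changes the null distribution of the statistic; the paper addresses that case, while this sketch only shows the known-parameter mechanism.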
142.
Following the release of the American Statistical Association's official statement on statistical significance and p-values, p-values have again drawn wide attention from researchers in China and abroad. Building on the basic content and steps of hypothesis testing as presented in Chinese statistics textbooks, this paper uses "coin tossing" and "recognizing a person from behind" as examples to give intuitive explanations of p-values, statistical significance, statistical power, and related concepts, and draws on a classic survey case from psychological statistics to analyze why p-values are misread. Based on the ASA statement, recommendations for the correct use of p-values are given.
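To make the coin-tossing example concrete, here is a minimal sketch (standard library only) of an exact two-sided binomial p-value for testing whether a coin is fair; the numbers (60 heads in 100 tosses) are illustrative, not taken from the article:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_pvalue(k, n, p=0.5):
    """Exact two-sided p-value: total probability of all outcomes
    no more likely than the observed count k under the null."""
    observed = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= observed + 1e-12)

pval = two_sided_pvalue(60, 100)   # 60 heads in 100 tosses of a fair coin
# pval is about 0.057: not significant at the conventional 0.05 level.
```

Note what the p-value does and does not say here: pval ≈ 0.057 is the probability of data at least this extreme *given* a fair coin, not the probability that the coin is fair, which is exactly the misreading the ASA statement warns against.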
143.
The Best Worst Method (BWM) is a multi-criteria decision-making method that uses two vectors of pairwise comparisons to determine the weights of criteria. First, the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria are identified by the decision-maker, after which the best criterion is compared to the other criteria, and the other criteria to the worst criterion. A non-linear minmax model is then used to identify the weights such that the maximum absolute difference between the weight ratios and their corresponding comparisons is minimized. The minmax model may result in multiple optimal solutions. In some cases decision-makers prefer to have multiple optimal solutions, but in other cases they prefer a unique solution. The aim of this paper is twofold: firstly, we propose using interval analysis for the case of multiple optimal solutions, showing how the criteria can be weighted and ranked. Secondly, we propose a linear model for BWM, which is based on the same philosophy but yields a unique solution.
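A sketch of the linear BWM model as a linear program: minimize ξ subject to |w_B − a_Bj·w_j| ≤ ξ and |w_j − a_jW·w_W| ≤ ξ for every criterion j, with the weights non-negative and summing to one. The comparison vectors below are made-up illustrative data (a fully consistent case, for which the optimum has ξ* = 0 and is unique):

```python
import numpy as np
from scipy.optimize import linprog

def linear_bwm(a_b, a_w, best, worst):
    """Solve the linear BWM model; returns (weights, xi_star).

    a_b[j]: best-to-criterion-j comparison; a_w[j]: criterion-j-to-worst.
    Decision variables are w_1..w_n and xi (the last variable).
    """
    n = len(a_b)
    A_ub, b_ub = [], []

    def add_abs_constraint(expr):
        # |expr . w| <= xi  ->  expr.w - xi <= 0  and  -expr.w - xi <= 0
        A_ub.append(np.append(expr, -1.0)); b_ub.append(0.0)
        A_ub.append(np.append(-expr, -1.0)); b_ub.append(0.0)

    for j in range(n):
        e = np.zeros(n); e[best] += 1.0; e[j] -= a_b[j]
        add_abs_constraint(e)                  # |w_B - a_Bj * w_j| <= xi
        e = np.zeros(n); e[j] += 1.0; e[worst] -= a_w[j]
        add_abs_constraint(e)                  # |w_j - a_jW * w_W| <= xi
    A_eq = [np.append(np.ones(n), 0.0)]        # weights sum to one
    c = np.append(np.zeros(n), 1.0)            # objective: minimize xi
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[n]

# Four criteria; criterion 0 is best, criterion 3 is worst.
weights, xi = linear_bwm(a_b=[1, 2, 4, 8], a_w=[8, 4, 2, 1],
                         best=0, worst=3)
# Consistency forces weights proportional to (8, 4, 2, 1) and xi = 0.
```

For inconsistent comparison vectors ξ* is positive and serves as a consistency indicator, but the linear program still returns a single optimal weight vector.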
144.
The Theil, Pietra, and Éltető and Frigyes measures of income inequality associated with the Pareto distribution are expressed in terms of the parameters defining the Pareto distribution. Inference procedures based on the generalized variable method, the large-sample method, and the Bayesian method for testing, and constructing confidence intervals for, these measures are discussed. The results of a Monte Carlo study are used to compare the performance of the suggested inference procedures for a population characterized by a Pareto distribution.
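For concreteness, the Theil index of a Pareto Type I distribution with shape α > 1 has the closed form T = 1/(α − 1) − ln(α/(α − 1)), independent of the scale parameter (Theil is scale-invariant). A quick Monte Carlo sketch confirming this (my own check, not the paper's inference procedures):

```python
import numpy as np

def theil_pareto(alpha):
    """Closed-form Theil index of a Pareto Type I distribution, alpha > 1.
    The scale parameter x_m cancels: the Theil index is scale-invariant."""
    return 1.0 / (alpha - 1.0) - np.log(alpha / (alpha - 1.0))

def theil_empirical(x):
    """Empirical Theil index T = mean((x/xbar) * ln(x/xbar))."""
    r = x / x.mean()
    return np.mean(r * np.log(r))

rng = np.random.default_rng(0)
alpha, xm = 3.0, 2.0
# Pareto Type I via inverse CDF: X = xm * U**(-1/alpha), U ~ Uniform(0,1)
x = xm * rng.uniform(size=200_000) ** (-1.0 / alpha)
# theil_empirical(x) should be close to theil_pareto(3.0) = 0.5 - ln(1.5)
```

Note the sketch uses α = 3 so that the relevant moments exist and the empirical estimate converges quickly; near α = 1 convergence is much slower.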
145.
This paper presents a method for using end-to-end available bandwidth measurements to estimate available bandwidth on individual internal links. The basic approach is to apply a power transform to the observed end-to-end measurements, model the result as a mixture of spatially correlated exponential random variables, carry out estimation by moment methods, and then transform back to the original variables to obtain estimates and confidence intervals for the expected available bandwidth on each link. Because spatial dependence leads to certain parameter confounding, only upper bounds can be found reliably. Simulations with ns2 show that the method can work well and that the assumptions are approximately valid in the examples.
146.
Interval-valued variables have become very common in data analysis. Until now, symbolic regression has mostly approached this type of data from an optimization point of view, considering neither the probabilistic aspects of the models nor the nonlinear relationships between the interval response and the interval predictors. In this article, we formulate interval-valued variables as bivariate random vectors and introduce a bivariate symbolic regression model based on generalized linear models theory, which provides much-needed flexibility in practice. Important inferential aspects are investigated. Applications to synthetic and real data illustrate the usefulness of the proposed approach.
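A toy sketch of the bivariate idea, not the paper's model: represent each interval by a bivariate vector (midpoint, half-range), fit the midpoint by ordinary least squares, and fit the positive half-range on the log scale, mimicking a GLM with a log link. All variable names and the data-generating relationship below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Predictor intervals, encoded as (midpoint, half-range).
x_mid = rng.normal(10.0, 2.0, size=n)
x_rng = rng.gamma(2.0, 0.5, size=n)
# Response intervals generated from a known relationship plus noise.
y_mid = 1.5 + 0.8 * x_mid + rng.normal(0.0, 0.3, size=n)
y_rng = np.exp(-0.5 + 0.6 * np.log(x_rng)) * rng.lognormal(0.0, 0.1, size=n)

# Midpoint component: ordinary least squares.
A = np.column_stack([np.ones(n), x_mid])
beta_mid, *_ = np.linalg.lstsq(A, y_mid, rcond=None)
# Half-range component: least squares on the log scale, which keeps
# fitted ranges positive (analogous to a log-link GLM).
B = np.column_stack([np.ones(n), np.log(x_rng)])
beta_rng, *_ = np.linalg.lstsq(B, np.log(y_rng), rcond=None)
```

The paper's model treats the two components jointly as a bivariate random vector with proper distributional assumptions; this sketch fits them separately only to show how the interval encoding makes standard regression machinery applicable.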
147.
This article analyses diffusion-type processes from a new point of view. Consider two statistical hypotheses about a diffusion process. We do not use a classical Neyman–Pearson test to reject or accept one hypothesis, nor do we adopt a Bayesian approach. As an alternative, we propose using a likelihood paradigm to characterize the statistical evidence in support of these hypotheses. The method is based on evidential inference as introduced and described by Royall [Royall R. Statistical evidence: a likelihood paradigm. London: Chapman and Hall; 1997]. In this paper, we extend Royall's theory to the case where the data are observations from a diffusion-type process rather than iid observations. The empirical distribution of the likelihood ratio is used to formulate the probabilities of strong, misleading, and weak evidence. Since the strength of evidence can be affected by the sampling characteristics, we present a simulation study that demonstrates these effects, and we show how misleading evidence can be controlled and reduced by adjusting those characteristics. As an illustration, we apply the method to Microsoft stock prices.
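Royall's framework can be illustrated with iid data (a deliberate simplification; the paper's contribution is extending it to diffusion-type processes). When H1 is true, the probability that the likelihood ratio misleadingly favours H2 by a factor of k or more is universally bounded by 1/k. A simulation sketch with Royall's benchmark k = 8, using a normal-mean example of my own choosing:

```python
import numpy as np

def log_lr(x, mu1, mu2, sigma=1.0):
    """Log likelihood ratio log[L(mu2)/L(mu1)] for iid N(mu, sigma^2) data."""
    return np.sum((x - mu1) ** 2 - (x - mu2) ** 2) / (2.0 * sigma ** 2)

rng = np.random.default_rng(42)
mu1, mu2, n, k, reps = 0.0, 1.0, 10, 8.0, 20_000
misleading = 0
for _ in range(reps):
    x = rng.normal(mu1, 1.0, size=n)        # the truth is H1
    if log_lr(x, mu1, mu2) >= np.log(k):    # evidence favours H2 by >= k
        misleading += 1
prob_misleading = misleading / reps
# The universal bound guarantees prob_misleading <= 1/k = 0.125;
# in this example the actual probability is far smaller.
```

The simulation also shows why sampling characteristics matter: increasing n or the separation between the hypotheses drives the probability of misleading evidence well below the 1/k bound.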
148.
The folded normal distribution arises as the distribution of the modulus (absolute value) of a normal random variable. In the present article, we express the cumulative distribution function (cdf) of a folded normal distribution in terms of the standard normal cdf and the parameters of the mother normal distribution. Although cdf values of the folded normal distribution were tabulated earlier in the literature, we show that those values are valid only in very particular situations. We also provide a simple approach for obtaining the parameters of the mother normal distribution from those of the folded normal distribution. These results find ample application in practice, for example in obtaining the so-called upper and lower α-points of the folded normal distribution, which in turn are useful in testing hypotheses relating to the folded normal distribution and in designing control charts for some process capability indices. A thorough study has been made to compare the performance of the newly developed theory with existing approaches. Simulated as well as real-life examples are discussed to supplement the theory developed in this article, and code (written in R) for the developed methods is also presented for ease of application.
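The closed form in question is standard: for Y = |X| with X ~ N(μ, σ²), the cdf is F_Y(y) = Φ((y − μ)/σ) + Φ((y + μ)/σ) − 1 for y ≥ 0, where Φ is the standard normal cdf. A sketch using only the Python standard library (the article itself presents R code):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def folded_normal_cdf(y, mu, sigma):
    """CDF of Y = |X| for X ~ N(mu, sigma^2), expressed through the
    standard normal CDF and the mother-distribution parameters."""
    if y < 0:
        return 0.0
    return (std_normal_cdf((y - mu) / sigma)
            + std_normal_cdf((y + mu) / sigma) - 1.0)

# Example: for mu = 1, sigma = 1,
# F(1) = Phi(0) + Phi(2) - 1 = 0.5 + 0.97725 - 1, about 0.47725
```

Two sanity properties follow directly from the formula: the cdf depends on μ only through |μ| (folding erases the sign), and at μ = 0 it reduces to the half-normal cdf 2Φ(y/σ) − 1.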
149.
Recently, Beh and Farver investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. Their study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates from Newton's unidimensional algorithm. Here we show how two of these techniques are related via the Box–Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
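For reference, the Box–Cox family used to link the two estimators is y(λ) = (y^λ − 1)/λ for λ ≠ 0, with ln y as its λ → 0 limit. A minimal sketch of the transform and its inverse, independent of the specific estimators in the paper:

```python
from math import log, exp

def box_cox(y, lam):
    """Box-Cox transform; the lam = 0 branch is the limit of (y**lam - 1)/lam."""
    if y <= 0:
        raise ValueError("Box-Cox requires y > 0")
    return log(y) if lam == 0 else (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    """Inverse of the Box-Cox transform."""
    return exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

# Continuity at lam -> 0: box_cox(2.0, 1e-8) is very close to log(2.0)
```

The family's appeal in this setting is that different λ values recover different functional forms of the same quantity, which is how two superficially different non-iterative estimators can turn out to be members of one transformation family.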
150.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which can simultaneously capture two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; we therefore examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of maximum likelihood estimation (MLE) for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, where the features of interest manifest as volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH model is highly appropriate for describing the S&P 500 daily returns.
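A minimal simulation sketch, assuming a common BL-GARCH(1,1) parameterization h_t = ω + α·ε²_{t−1} + β·h_{t−1} + c·ε_{t−1}·√h_{t−1}, where the bilinear interaction term produces the leverage effect (check the paper for the exact specification studied there). Parameter values are illustrative, innovations are Gaussian rather than Student-t for simplicity, and h_t is floored at a small positive value to guard against the bilinear term driving the variance negative:

```python
import numpy as np

def simulate_bl_garch(n, omega=0.05, alpha=0.10, beta=0.85, c=-0.08,
                      seed=0):
    """Simulate returns from a BL-GARCH(1,1) with Gaussian innovations.

    h[t] = omega + alpha*eps[t-1]**2 + beta*h[t-1]
                 + c*eps[t-1]*sqrt(h[t-1])
    A negative c gives the leverage effect: a negative shock raises
    next-period variance more than a positive shock of equal size.
    """
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    h = np.zeros(n)
    h[0] = omega / (1.0 - alpha - beta)      # start at unconditional variance
    eps[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = (omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
                + c * eps[t - 1] * np.sqrt(h[t - 1]))
        h[t] = max(h[t], 1e-8)               # positivity guard
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    return eps, h

returns, variances = simulate_bl_garch(2_000)
# The simulated series typically shows volatility clustering and
# heavier-than-normal tails in the marginal distribution of returns.
```

Replacing the Gaussian innovations with standardized Student-t draws (e.g. `rng.standard_t(df)` rescaled to unit variance) reproduces the heavy-tailed variant the paper favours for the S&P 500 data.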
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号