61.
The most popular approach in extreme value statistics is the modelling of threshold exceedances using the asymptotically motivated generalised Pareto distribution. This approach involves the selection of a high threshold above which the model fits the data well. In some applications only a few observations of a measurement process are recorded, so selecting a high sample quantile as the threshold leaves almost no exceedances. In this paper we propose extensions of the generalised Pareto distribution that incorporate an additional shape parameter while keeping the tail behaviour unaffected. The inclusion of this parameter offers additional structure for the main body of the distribution, improves the stability of the modified scale, tail-index and return-level estimates with respect to threshold choice, and allows a lower threshold to be selected. We illustrate the benefits of the proposed models with a simulation study and two case studies.
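A minimal sketch of the standard peaks-over-threshold workflow the paper extends, assuming a scipy-based fit; the simulated data, the 95% threshold and the return-level horizon are illustrative choices, not the authors' extended model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=5000)        # heavy-tailed sample (illustrative)

u = np.quantile(x, 0.95)                   # high threshold (a modelling choice)
exceedances = x[x > u] - u                 # peaks over threshold

# Fit the generalised Pareto distribution to the exceedances,
# keeping the location fixed at zero as in the POT framework.
xi, loc, sigma = stats.genpareto.fit(exceedances, floc=0)

# Return level: the value exceeded once every m observations on average.
m = 10_000
p_u = np.mean(x > u)                       # exceedance probability of the threshold
return_level = u + (sigma / xi) * ((m * p_u) ** xi - 1)
print(f"shape={xi:.3f}, scale={sigma:.3f}, {m}-obs return level={return_level:.2f}")
```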
62.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors, and achieves the specified power with a saving of sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.
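The abstract does not give the loss structure or the predicted-power formula, so the sketch below uses a generic group sequential rule instead: stop for efficacy on a fixed z-boundary, and for futility when conditional power under the current trend falls too low. The boundary value and cut-off are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def conditional_power(z_k, t_k, alpha=0.025):
    """Conditional power under the current trend (a generic monitoring quantity,
    standing in for the paper's loss-based predicted power)."""
    b = z_k * np.sqrt(t_k)                 # Brownian-motion value at information fraction t_k
    drift = b / t_k                        # drift estimated from the data so far
    z_alpha = stats.norm.ppf(1 - alpha)
    return 1 - stats.norm.cdf((z_alpha - b - drift * (1 - t_k)) / np.sqrt(1 - t_k))

def monitor(z_stats, fractions, eff_bound=2.8, fut_power=0.10):
    """Stop early for efficacy if z crosses a conservative boundary,
    or for futility if the conditional power drops too low."""
    for z_k, t_k in zip(z_stats, fractions):
        if z_k >= eff_bound:
            return "stop: efficacy", t_k
        if conditional_power(z_k, t_k) < fut_power:
            return "stop: futility", t_k
    return "continue to final analysis", fractions[-1]

# Example: interim z-statistics at 25%, 50%, 75% of the planned information.
print(monitor([0.8, 1.1, 1.0], [0.25, 0.50, 0.75]))
```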
63.
In this article statistical inference is viewed as information processing involving input information and output information. After introducing information measures for the input and output information, an information criterion functional is formulated and optimized to obtain an optimal information processing rule (IPR). For the particular information measures and criterion functional adopted, it is shown that Bayes's theorem is the optimal IPR. This optimal IPR is shown to be 100% efficient in the sense that its use leads to the output information being exactly equal to the given input information. Also, the analysis links Bayes's theorem to maximum-entropy considerations.
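A small numerical check of the identity the abstract describes, under one common formulation of the input and output information measures (that formulation is my assumption, not a statement of the paper's exact functional): with the posterior produced by Bayes's theorem, the two quantities coincide exactly.

```python
import numpy as np

# Discrete parameter with three values; prior and likelihood are illustrative.
prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.10, 0.40, 0.70])      # f(y | theta) at the observed y

marginal = np.sum(prior * likelihood)          # h(y)
posterior = prior * likelihood / marginal      # Bayes's theorem as the processing rule

# One common formulation of the measures (an assumption here):
#   input  = E_post[log prior] + E_post[log likelihood]
#   output = E_post[log posterior] + log marginal
input_info = np.sum(posterior * (np.log(prior) + np.log(likelihood)))
output_info = np.sum(posterior * np.log(posterior)) + np.log(marginal)

print(np.isclose(input_info, output_info))     # True: output equals input exactly
```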
64.
A. Ferreira, L. de Haan & L. Peng 《Statistics》2013,47(5):401-434
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method for doing this is to estimate a quantile well inside the sample but near the boundary and then to shift it somehow to the right place. The choice of this “anchor quantile” plays a major role in the accuracy of the method. We present a bootstrap method to achieve the optimal choice of sample fraction in either high quantile or endpoint estimation, which extends earlier results by Hall and Weissman (1997) for high quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems like this one is given in a paper by Drees and Kaufmann (1998).
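For context, a sketch of the kind of estimator whose sample fraction k the bootstrap is meant to select: a Weissman-type high-quantile estimator built on the Hill estimator (heavy-tailed case). The Pareto sample and the candidate values of k are illustrative; this is not the paper's bootstrap procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.pareto(a=2.0, size=2000) + 1.0)   # Pareto(2) sample, illustrative

def weissman_quantile(x_sorted, k, p):
    """Weissman-type estimator of the (1 - p) quantile using the top k order
    statistics and the Hill estimate of the tail index (heavy-tailed case only)."""
    n = len(x_sorted)
    top = x_sorted[n - k:]
    hill = np.mean(np.log(top)) - np.log(x_sorted[n - k - 1])   # Hill estimator
    return x_sorted[n - k - 1] * (k / (n * p)) ** hill

# The estimate depends strongly on the chosen sample fraction k -- the quantity
# that a data-driven (e.g. bootstrap) procedure is designed to select.
for k in (25, 50, 100, 200):
    print(k, weissman_quantile(x, k, p=0.001))
```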
65.
Egmar Rödel 《Statistics》2013,47(4):573-585
Normed bivariate density functions were introduced by Hoeffding (1940/41). In the present paper, estimators for normed bivariate density functions are presented; they are based on normed bivariate ranks and on a Fourier series expansion in Legendre polynomials. The estimation of normed bivariate density functions under positive dependence is also described.
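A sketch of the general series idea, assuming orthonormal shifted Legendre polynomials on [0, 1] and normalised ranks; the truncation degree and the simulated positively dependent data are illustrative, and this is not Rödel's exact estimator.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import rankdata

rng = np.random.default_rng(3)

# Positively dependent pair (illustrative): y = x + noise.
x = rng.standard_normal(1000)
y = x + 0.8 * rng.standard_normal(1000)

# Normalised ranks -> approximately uniform margins on (0, 1).
u = rankdata(x) / (len(x) + 1)
v = rankdata(y) / (len(y) + 1)

def phi(j, t):
    """Orthonormal shifted Legendre polynomial on [0, 1]."""
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * t - 1)

# Fourier coefficients of the normed (copula-type) density, truncated at degree J.
J = 4
theta = np.array([[np.mean(phi(j, u) * phi(k, v)) for k in range(J + 1)]
                  for j in range(J + 1)])

def density_hat(a, b):
    """Truncated series estimate of the normed bivariate density at (a, b)."""
    return sum(theta[j, k] * phi(j, a) * phi(k, b)
               for j in range(J + 1) for k in range(J + 1))

print(density_hat(0.9, 0.9), density_hat(0.9, 0.1))   # larger where the ranks agree
```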
66.
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications which fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed by an odds ratio analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach to detect aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
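A hedged sketch of a risk-adjusted CUSUM on Bernoulli (finite Poisson) trials in this spirit: item success probabilities come from a simple Rasch model standing in for the fitted IRT model, and the odds ratio and signalling threshold are illustrative values, not the paper's Markov-chain-calibrated ones.

```python
import numpy as np

def rasch_prob(theta, b):
    """Probability of a correct response under a simple Rasch IRT model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def aberrance_cusum(responses, p, odds_ratio=3.0, h=4.0):
    """Risk-adjusted CUSUM on Bernoulli trials: each increment is the
    log-likelihood ratio of an 'aberrant' model that multiplies the odds of a
    correct answer by `odds_ratio` against the fitted model."""
    s, path = 0.0, []
    for y, p_i in zip(responses, p):
        denom = 1 - p_i + odds_ratio * p_i
        w = np.log(odds_ratio / denom) if y == 1 else np.log(1.0 / denom)
        s = max(0.0, s + w)
        path.append(s)
    return np.array(path), np.any(np.array(path) >= h)

# Illustrative examinee: low estimated ability, yet a run of correct answers
# on hard items near the end of the test.
theta_hat = -0.5
b = np.linspace(-2, 2, 30)                    # item difficulties, easy -> hard
p = rasch_prob(theta_hat, b)
responses = (np.arange(30) >= 22).astype(int) | (p > 0.7).astype(int)
path, signal = aberrance_cusum(responses, p)
print("signal:", signal, "max CUSUM:", path.max().round(2))
```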
67.
The cost and duration of many industrial experiments can be reduced by using supersaturated designs, which screen the important factors out of a large set of potentially active variables. A supersaturated design is a design for which there are fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied widely, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure called symmetrical uncertainty. This measure combines quantities from information theory and underlies variable selection algorithms developed in data mining. Here, symmetrical uncertainty is used from another viewpoint, to identify the important factors more directly. The method enables us to use supersaturated designs for analyzing data of generalized linear models with a Bernoulli response. We evaluate our method on some existing supersaturated designs, obtained according to the methods proposed by Tang and Wu (1997) and by Koukouvinos et al. (2008). The comparison is performed through simulation experiments, and the Type I and Type II error rates are calculated. Additionally, receiver operating characteristic (ROC) curve methodology is applied as an additional statistical tool for performance evaluation.
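Symmetrical uncertainty has a standard definition, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)); the sketch below computes it for two-level factors against a Bernoulli response. The toy design and factor names are hypothetical, and the screening rule shown (rank factors by SU) is only the simplest use of the measure, not the full procedure of the paper.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (natural log) of a discrete sample."""
    n = len(values)
    return -sum((c / n) * np.log(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]; 0 means independence."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))
    mi = hx + hy - hxy                        # mutual information
    return 0.0 if hx + hy == 0 else 2 * mi / (hx + hy)

# Illustrative two-level factors and a Bernoulli response driven mainly by the
# first factor (the design and names are hypothetical).
rng = np.random.default_rng(4)
f_active = rng.choice([-1, 1], size=40)
f_inert = rng.choice([-1, 1], size=40)
response = (rng.random(40) < np.where(f_active == 1, 0.9, 0.2)).astype(int)

print("active factor SU:", round(symmetrical_uncertainty(f_active, response), 3))
print("inert  factor SU:", round(symmetrical_uncertainty(f_inert, response), 3))
```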
68.
We demonstrate a multidimensional approach for combining several indicators of well-being, including the traditional money-income indicators. This methodology avoids the difficult and much criticized task of computing imputed incomes for such indicators as net worth and schooling. Inequality in the proposed composite measures is computed using relative inequality indexes that permit simple analysis of both the contribution of each welfare indicator (and its factor components) and within and between components of total inequality when the population is grouped by income levels, age, gender, or any other criteria. The analysis is performed on U.S. data using the Michigan Survey of Income Dynamics.
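As an illustration of the within/between decomposition described, here is a sketch using the Theil T index, one common relative inequality index; the choice of Theil, the composite scores and the grouping are assumptions for illustration, not the paper's data or exact index.

```python
import numpy as np

def theil(y):
    """Theil T index, a relative (scale-invariant) inequality measure."""
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    return np.mean((y / mu) * np.log(y / mu))

def theil_decomposition(y, groups):
    """Split total inequality into within- and between-group components."""
    y, groups = np.asarray(y, dtype=float), np.asarray(groups)
    mu, n = y.mean(), len(y)
    within = between = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        share = (len(yg) / n) * (yg.mean() / mu)   # group's share of the total
        within += share * theil(yg)
        between += share * np.log(yg.mean() / mu)
    return within, between

# Illustrative composite well-being scores (made-up values) grouped by, say, age bracket.
rng = np.random.default_rng(5)
score = np.concatenate([rng.lognormal(3.0, 0.4, 300), rng.lognormal(3.4, 0.6, 300)])
group = np.array([0] * 300 + [1] * 300)
w, b = theil_decomposition(score, group)
print(f"total={theil(score):.4f}  within={w:.4f}  between={b:.4f}")
```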
69.
This article presents some results showing how rectangular probabilities can be studied using copula theory. These results lead us to develop new lower and upper bounds for rectangular probabilities which can be computed efficiently. The new bounds are compared with the ones obtained from the generalized Fréchet–Hoeffding bounds and Bonferroni-type inequalities.
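A sketch of the baseline the paper improves on: elementary Fréchet-type (Bonferroni-style) bounds on a rectangular probability from the marginal probabilities alone, compared with the exact value under an assumed Gaussian copula. The correlation, margins and rectangle are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def frechet_bounds(marginal_probs):
    """Elementary Fréchet-type bounds on P(X1 in A1, ..., Xd in Ad) given only
    the marginal probabilities P(Xi in Ai)."""
    p = np.asarray(marginal_probs, dtype=float)
    lower = max(0.0, p.sum() - (len(p) - 1))
    upper = p.min()
    return lower, upper

# Exact rectangular probability under an assumed Gaussian copula (rho = 0.6) with
# standard normal margins, for the rectangle (0, 1.5] x (-0.5, 1.0].
rho = 0.6
mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
a, b = np.array([0.0, -0.5]), np.array([1.5, 1.0])
exact = (mvn.cdf([b[0], b[1]]) - mvn.cdf([a[0], b[1]])
         - mvn.cdf([b[0], a[1]]) + mvn.cdf([a[0], a[1]]))

marg = [norm.cdf(b[i]) - norm.cdf(a[i]) for i in range(2)]
print("exact:", round(exact, 4), "bounds:", tuple(round(v, 4) for v in frechet_bounds(marg)))
```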
70.
Harter (1979) summarized applications of order statistics to multivariate analysis up through 1949. The present paper covers the period 1950–1959. References in the two papers were selected from the first and second volumes, respectively, of the author's chronological annotated bibliography on order statistics [Harter (1978, 1983)]. Tintner (1950a) established formal relations between four special types of multivariate analysis: (1) canonical correlation, (2) principal components, (3) weighted regression, and (4) discriminant analysis, all of which depend on ordered roots of determinantal equations. During the decade 1950–1959, numerous authors contributed to distribution theory and/or computational methods for ordered roots and their applications to multivariate analysis. Test criteria for (i) multivariate analysis of variance, (ii) comparison of variance–covariance matrices, and (iii) multiple independence of groups of variates when the parent population is multivariate normal were usually derived from the likelihood ratio principle until S. N. Roy (1953) formulated the union–intersection principle on which Roy & Bose (1953) based their simultaneous test and confidence procedure. Roy & Bargmann (1958) used an alternative procedure, called the step-down procedure, in deriving a test for problem (iii), and J. Roy (1958) applied the step-down procedure to problems (i) and (ii). Various authors developed and applied distribution theory for several multivariate distributions. Advances were also made on multivariate tolerance regions [Fraser & Wormleighton (1951), Fraser (1951, 1953), Fraser & Guttman (1956), Kemperman (1956), and Somerville (1958)], a criterion for rejection of multivariate outliers [Kudô (1957)], and linear estimators, from censored samples, of parameters of multivariate normal populations [Watterson (1958, 1959)]. Textbooks on multivariate analysis were published by Kendall (1957) and Anderson (1958), as well as a monograph by Roy (1957) and a book of tables by Pillai (1957).