201 search results found (search time: 15 ms)
101.
David J. Ball, John Watt. Risk Analysis, 2013, 33(11): 2068-2078
Risk matrices are commonly encountered devices for rating hazards in numerous areas of risk management. Part of their popularity is predicated on their apparent simplicity and transparency. Recent research, however, has identified serious mathematical defects and inconsistencies. This article further examines the reliability and utility of risk matrices for ranking hazards, specifically in the context of public leisure activities including travel. We find that (1) different risk assessors may assign vastly different ratings to the same hazard, (2) even following lengthy reflection and learning, scatter remains high, and (3) the underlying drivers of disparate ratings relate to fundamentally different worldviews, beliefs, and a panoply of psychosocial factors that are seldom explicitly acknowledged. It appears that risk matrices, when used in this context, may be creating no more than an artificial and even untrustworthy picture of the relative importance of hazards, which may be of little or no benefit to those trying to manage risk effectively and rationally.
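As a toy illustration of the device the study examines (the scoring scheme and band cut-offs below are hypothetical, not taken from the paper), a product-scored risk matrix maps likelihood and severity scores to a rating band, and two assessors scoring the same hazard differently can land in different bands:

```python
def matrix_rating(likelihood: int, severity: int) -> str:
    """Map 1-4 likelihood/severity scores to a rating band via their
    product (1..16). Cut-offs are illustrative, not from the study."""
    score = likelihood * severity
    if score <= 2:
        return "low"
    elif score <= 6:
        return "moderate"
    elif score <= 11:
        return "high"
    return "extreme"

# Two assessors rate the same hypothetical leisure-travel hazard:
assessor_a = matrix_rating(likelihood=2, severity=2)   # "moderate"
assessor_b = matrix_rating(likelihood=3, severity=4)   # "extreme"
```

The gap between `assessor_a` and `assessor_b` is exactly the rating scatter the study measures empirically.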
102.
This paper describes a permutation procedure to test for the equality of selected elements of a covariance or correlation matrix across groups. It involves either centring or standardising each variable within each group before randomly permuting observations between groups. Since the assumption of exchangeability of observations between groups does not strictly hold following such transformations, Monte Carlo simulations were used to compare expected and empirical rejection levels as a function of group size, the number of groups, and distribution type (Normal, mixtures of Normals, and Gamma with various values of the shape parameter). The Monte Carlo study showed that the estimated probability levels are close to those that would be obtained with an exact test, except at very small sample sizes (5 or 10 observations per group). The test appears robust against non-normal data, different numbers of groups or variables per group, and unequal sample sizes per group. Power increased with sample size, effect size, and the number of elements in the matrix, and decreased with increasingly unequal numbers of observations per group.
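A minimal two-group, single-element sketch of the procedure described above (the paper handles general sets of matrix elements and more groups; the function name and interface here are illustrative): centre each variable within its group, permute group labels, and compare the observed difference in the covariance element against the permutation distribution.

```python
import numpy as np

def permutation_cov_test(x, y, groups, n_perm=999, seed=0):
    """Two-group permutation test for equality of one covariance element
    cov(x, y) across groups: centre each variable within its group, then
    randomly permute observations between groups and compare the observed
    absolute difference of the group covariances to its permutation
    distribution. Returns the permutation p-value."""
    rng = np.random.default_rng(seed)
    g = np.asarray(groups)
    xc = np.asarray(x, dtype=float).copy()
    yc = np.asarray(y, dtype=float).copy()
    labs = np.unique(g)
    for lab in labs:
        m = g == lab
        xc[m] -= xc[m].mean()           # centre within group
        yc[m] -= yc[m].mean()

    def stat(gl):
        a, b = gl == labs[0], gl == labs[1]
        return abs(np.mean(xc[a] * yc[a]) - np.mean(xc[b] * yc[b]))

    observed = stat(g)
    exceed = sum(stat(rng.permutation(g)) >= observed for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)
```

Because the within-group centring breaks strict exchangeability, the p-value is approximate, which is exactly what the Monte Carlo study above assesses.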
103.
Although generalized exchange remains an emblematic model of alliance theory, characterizing matrimonial systems as pertaining to this model is tricky. The necessary condition of generalized exchange is a deliberate preference for asymmetric exchanges. Given a marriage dataset, can we determine whether the observed pattern is due to the realization of a social norm enjoining symmetric or asymmetric exchange, or is the result of random processes? Here, the relevant probabilities and indexes are established in the framework of graph theory, and are validated using a demographic individual-based model. The methods are applied to three datasets from the literature, allowing us to conclude with great confidence that the observed marriage configurations were not random.
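To make the symmetric-versus-asymmetric distinction concrete, here is a toy index (my own simplification, not one of the paper's graph-theoretic indexes): treat marriages as directed wife-giver-to-wife-taker edges between groups and measure the fraction of inter-group exchange that is not reciprocated.

```python
from collections import Counter

def asymmetry_index(marriages):
    """marriages: list of (wife_giver_group, wife_taker_group) pairs.
    Returns the fraction of inter-group marriages that are not matched
    by a marriage in the opposite direction: 1.0 for purely asymmetric
    (generalized) exchange, 0.0 for fully reciprocated (symmetric)
    exchange. A toy stand-in for the paper's graph-theoretic indexes."""
    counts = Counter((a, b) for a, b in marriages if a != b)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    asym = 0
    for (a, b), n in counts.items():
        recip = counts.get((b, a), 0)
        asym += max(n - recip, 0)     # unreciprocated surplus a -> b
    return asym / total

# Generalized-exchange cycle A -> B -> C -> A: fully asymmetric
cycle = [("A", "B"), ("B", "C"), ("C", "A")] * 3
# Restricted (symmetric) exchange A <-> B: fully reciprocated
sym = [("A", "B"), ("B", "A")] * 3
```

The paper's contribution is precisely the null distribution such an index must be compared against to rule out random processes.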
104.
105.
106.
An algorithm to compute the autocovariance functions of periodic autoregressive moving average (PARMA) models is proposed. As a result, an easily implemented algorithm for the exact likelihood of these models becomes possible.
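For the simplest periodic case, a periodic AR(1) with season-dependent coefficient and innovation variance, the autocovariance computation reduces to a cyclic fixed-point recursion. The sketch below (my own minimal instance, not the paper's general PARMA algorithm) iterates that recursion to convergence:

```python
import numpy as np

def par1_seasonal_variances(phi, sigma2, n_iter=200):
    """Periodic AR(1): X_t = phi[s] * X_{t-1} + eps_t with Var(eps_t) =
    sigma2[s], s = t mod d. Iterates the periodic variance recursion
        v[s] = phi[s]**2 * v[s-1] + sigma2[s]   (index mod d)
    to its fixed point, then reads off the lag-1 autocovariances
        gamma1[s] = phi[s] * v[s-1].
    Converges geometrically when all |phi[s]| < 1."""
    d = len(phi)
    v = np.array(sigma2, dtype=float)
    for _ in range(n_iter):
        for s in range(d):
            v[s] = phi[s] ** 2 * v[s - 1] + sigma2[s]   # v[-1] wraps cyclically
    gamma1 = [phi[s] * v[s - 1] for s in range(d)]
    return v, gamma1
```

With period d = 1 this collapses to the familiar AR(1) result v = sigma2 / (1 - phi**2), which gives a quick sanity check.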
107.
The problem considered is that of finding D-optimal designs for the estimation of covariate parameters and the treatment and block contrasts in a block design set-up in the presence of non-stochastic controllable covariates, when N ≡ 2 (mod 4), N being the total number of observations. It is clear that when N ≢ 0 (mod 4), it is not possible to find designs attaining minimum variance for the estimated covariate parameters. Conditions for D-optimal designs for the estimation of covariate parameters are established when each covariate belongs to the interval [-1, 1]. Some constructions of D-optimal designs are provided for symmetric balanced incomplete block designs (SBIBDs) with parameters b = v, r = k = v - 1, λ = v - 2 when k ≡ 2 (mod 4) and b is an odd integer.
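The mod-4 obstruction can be seen numerically in a stripped-down setting (a toy stand-in for the block-design problem, ignoring treatments and blocks): with an intercept column of ones and k covariate columns taking values ±1, the D-criterion det(Z'Z) reaches its orthogonality bound N^(k+1) only when the columns can be made mutually orthogonal, which a parity argument rules out for N ≡ 2 (mod 4).

```python
import numpy as np
from itertools import product

def max_det_pm1(N, k):
    """Brute-force the D-criterion det(Z'Z) over all N x (k+1) matrices
    whose first column is all ones and remaining k columns have +/-1
    entries (covariates at the endpoints of [-1, 1]). Returns the
    maximal determinant, rounded to the nearest integer."""
    cols = list(product([-1.0, 1.0], repeat=N))
    ones = np.ones(N)
    best = 0.0
    for c in product(cols, repeat=k):
        Z = np.column_stack([ones] + [np.array(v) for v in c])
        best = max(best, np.linalg.det(Z.T @ Z))
    return round(best)
```

For N = 4, k = 1 the orthogonality bound 4**2 = 16 is attained; for N = 6, k = 2 the search tops out at 192, strictly below 6**3 = 216, illustrating the N ≡ 2 (mod 4) loss the abstract refers to.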
108.
For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative hypothesis requires complex analytic approximations, and more importantly, these distributional approximations are feasible only for moderate dimension of the dependent variable, say p ≤ 20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations are proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null hypothesis, and simulations demonstrate that the corrected LRT has very satisfactory size and power, certainly in the large-p, large-n context, but also for moderately large data dimensions such as p = 30 or p = 50. As a byproduct, we give a reason explaining why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple sample significance test in multivariate analysis of variance which is valid for high-dimensional data.
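For reference, the classical statistic being corrected is straightforward to compute in the one-way MANOVA case. This sketch (standard textbook formulas, not the paper's high-dimensional correction) computes Wilks' Λ from the within- and between-group SSCP matrices together with Bartlett's chi-square approximation:

```python
import numpy as np

def wilks_manova(groups):
    """One-way MANOVA: Wilks' Lambda = det(W) / det(W + B), where W and B
    are the within- and between-group sums-of-squares-and-cross-products
    matrices. Also returns Bartlett's classical approximation
    -(n - 1 - (p + k)/2) * log(Lambda), referred to chi-square on
    p*(k - 1) degrees of freedom; it is this approximation that breaks
    down when p grows with n."""
    X = np.vstack(groups)
    n, p = X.shape
    k = len(groups)
    grand = X.mean(axis=0)
    W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    B = sum(len(g) * np.outer(g.mean(axis=0) - grand, g.mean(axis=0) - grand)
            for g in groups)
    lam = np.linalg.det(W) / np.linalg.det(W + B)
    chi2 = -(n - 1 - (p + k) / 2) * np.log(lam)
    return lam, chi2
```

The paper's contribution is to replace the chi-square reference distribution by a random-matrix-theory-based Gaussian limit when p is comparable to n.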
109.
In multi-parameter (multivariate) estimation, the Stein rule provides minimax and admissible estimators, generally compromising on their unbiasedness. On the other hand, the primary aim of jackknifing is to reduce the bias of an estimator (without necessarily compromising its efficacy), and at the same time jackknifing provides an estimator of the sampling variance of the estimator as well. In shrinkage estimation (where minimization of a suitably defined risk function is the basic goal), one may wonder how far the bias-reduction objective of jackknifing accommodates the dual objectives of minimaxity (or admissibility) and estimating the risk of the estimator. A critical appraisal of this basic role of jackknifing in shrinkage estimation is made here. Restricted, semi-restricted, and the usual versions of jackknifed shrinkage estimators are considered and their performance characteristics are studied. It is shown that for Pitman-type (local) alternatives, jackknifing usually fails to provide a consistent estimator of the (asymptotic) risk of the shrinkage estimator, and a degenerate asymptotic situation arises in the usual fixed-alternative case.
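The two roles of jackknifing contrasted above, bias reduction and variance (risk) estimation, are easy to exhibit for a simple statistic. In this standard delete-one sketch (a textbook construction, not the paper's shrinkage setting), jackknifing the plug-in variance estimator removes its O(1/n) bias exactly:

```python
import numpy as np

def jackknife(x, estimator):
    """Delete-one jackknife. Returns (bias-corrected estimate, jackknife
    variance estimate) for a scalar estimator of a 1-D sample:
        bias_hat = (n - 1) * (mean of leave-one-out estimates - theta_hat)
        var_hat  = (n - 1)/n * sum of squared deviations of the
                   leave-one-out estimates from their mean."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    bias = (n - 1) * (loo.mean() - theta_hat)
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return theta_hat - bias, var

# Plug-in variance (divisor n) is biased; the jackknife correction
# recovers the unbiased sample variance (divisor n - 1) exactly:
x = np.arange(1.0, 11.0)
biased_var = lambda v: np.mean((v - v.mean()) ** 2)
corrected, var_est = jackknife(x, biased_var)
```

The paper's point is that this tidy picture can break down when the estimator being jackknifed is a shrinkage estimator under local alternatives.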
110.
One of the problems in bilinear time series (BLTS) analysis is that of identification. Unlike in linear models, identification in BLTS modelling cannot always be based on the autocorrelation function (or spectrum), since it is sometimes misleading. The authors therefore derive in this note the autocorrelation function of a function of a bilinear process, which can be used for identification as well as for testing linearity.
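The misleading-ACF phenomenon is easy to reproduce. A classic bilinear model (my choice of example and coefficient, not necessarily the one the authors analyse) is X_t = b * eps_{t-1} * X_{t-2} + eps_t, whose theoretical autocorrelations vanish at every non-zero lag, so its ACF looks like white noise despite the strong nonlinear dependence:

```python
import numpy as np

def simulate_bilinear(n, b=0.4, seed=1):
    """Simulate X_t = b * eps_{t-1} * X_{t-2} + eps_t with standard normal
    innovations (stationary when b**2 < 1). Despite the dependence
    structure, all autocorrelations at non-zero lags are zero in theory."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + 2)
    x = np.zeros(n + 2)
    for t in range(2, n + 2):
        x[t] = b * eps[t - 1] * x[t - 2] + eps[t]
    return x[2:]

def acf(x, lag):
    """Sample autocorrelation at a positive lag."""
    xc = x - x.mean()
    return float(np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc))

x = simulate_bilinear(20000)
low_lags = [acf(x, k) for k in (1, 2, 3)]   # all close to zero
```

Because the raw ACF is uninformative here, identification must use the ACF of a transformation of the process, which is what the note derives.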