Sort by: relevance.  217 matching records found (search time: 15 ms)
21.
A Glimpse into Liu Xin's Theory of Metrology   Cited by: 1 (self-citations: 0, external citations: 1)
While assisting Wang Mang's reform of weights and measures, Liu Xin developed a systematic theory of metrology. The theory covers the role of numbers in measurement, the nature of the musical pitch-pipes and their relation to metrology, the criteria for selecting the standards of length, capacity and weight, and the design of standard measuring instruments. Its core ideas were widely accepted by later generations and became the guiding standard for the development of traditional metrology, and the theory itself marks the formation of traditional metrological theory in ancient China.
22.
Comparison of different estimation techniques for portfolio selection   Cited by: 1 (self-citations: 0, external citations: 1)
The main problem in applying mean-variance portfolio selection is that the first two moments of the asset returns are unknown. In practice the optimal portfolio weights have to be estimated, usually by replacing the moments with the classical unbiased sample estimators. We provide a comparison of the exact and the asymptotic distributions of the estimated portfolio weights, as well as a sensitivity analysis to shifts in the moments of the asset returns. Furthermore, we consider several types of shrinkage estimators for the moments. The corresponding estimators of the portfolio weights are compared with each other and with the portfolio weights based on the sample estimators of the moments. We show how the uncertainty about the portfolio weights can be introduced into the performance measurement of trading strategies. The methodology explains the bad out-of-sample performance of the classical Markowitz procedures.
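As a minimal illustration of the estimation problem this abstract describes, the sketch below computes global minimum-variance portfolio weights from the classical sample covariance and from a simple shrinkage-toward-identity covariance. The asset dimensions, return parameters, and the fixed shrinkage intensity are illustrative assumptions, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate returns for 5 assets over 60 periods (sizes are illustrative).
true_mean = np.array([0.05, 0.04, 0.06, 0.03, 0.05])
true_cov = 0.02 * (0.5 * np.eye(5) + 0.5)   # equicorrelated covariance
returns = rng.multivariate_normal(true_mean, true_cov, size=60)

sigma_hat = np.cov(returns, rowvar=False)   # classical unbiased sample estimator

# Shrink the sample covariance toward a scaled identity target.
# The intensity alpha = 0.3 is fixed here purely for illustration.
target = np.trace(sigma_hat) / 5 * np.eye(5)
alpha = 0.3
sigma_shrunk = (1 - alpha) * sigma_hat + alpha * target

def gmv_weights(sigma):
    """Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

w_sample = gmv_weights(sigma_hat)
w_shrunk = gmv_weights(sigma_shrunk)
print(w_sample.round(3), w_shrunk.round(3))
```

Comparing the two weight vectors over repeated samples is the kind of exercise the paper formalizes with exact and asymptotic distributions.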
23.
Canonical variate analysis often involves the construction of confidence regions round points representing group means in a 2-dimensional plot. Traditionally, circles have been constructed, but some authors have recently advocated ellipses as more appropriate. This paper describes a Monte Carlo study investigating the effect of a range of factors on the inclusion rates of true population means within both types of region for normal data. The traditional circles do not perform too badly within a restricted range, but they are nearly always under-included. The ellipses usually have higher inclusion rates, and so are often closer to the nominal rate, but are sometimes over-included.
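A stylized Monte Carlo check of the circle-versus-ellipse question can be run directly; this is not the paper's study design, just a one-group sketch in which the circle assumes unit variance in every direction while the ellipse uses the estimated covariance of the group mean. The covariance matrix, sample size, and replication count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.zeros(2)
cov = np.array([[1.0, 0.6], [0.6, 2.0]])  # deliberately anisotropic
n, reps = 30, 2000
crit = -2 * np.log(0.05)                  # 95% chi-square(2) quantile, 5.9915

circle_hits = 0
ellipse_hits = 0
for _ in range(reps):
    x = rng.multivariate_normal(mu, cov, size=n)
    xbar = x.mean(axis=0)
    s = np.cov(x, rowvar=False)
    # Circle: spherical region, ignoring the true covariance structure.
    circle_hits += (n * xbar @ xbar <= crit)
    # Ellipse: region shaped by the estimated covariance.
    d = xbar - mu
    ellipse_hits += (n * d @ np.linalg.solve(s, d) <= crit)

circle_cov = circle_hits / reps
ellipse_cov = ellipse_hits / reps
print(circle_cov, ellipse_cov)
```

With an anisotropic population covariance the circle's inclusion rate falls below the nominal 95%, while the ellipse stays closer to it, mirroring the under-inclusion the abstract reports.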
24.
The traditional judgement-matrix-based fuzzy kernel clustering method for expert weighting suffers from a normalization constraint that lets outliers distort the clustering result. To address this, an improved fuzzy kernel clustering algorithm is proposed. During clustering, the normalization constraint is relaxed so as to weaken the influence of outliers on the result. Further, to overcome the limitations of the traditional clustering criterion based on a linear coupling of information entropy and a consistency coefficient, a deviation-entropy weighting method is proposed, which determines each expert's weight from the expert's clustering contribution to his or her own class, remedying the shortcomings of the traditional method. A numerical example shows that the method is feasible and effective.
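For background on the family of methods this abstract modifies, here is a minimal sketch of the classical fuzzy c-means algorithm, whose normalized memberships are exactly the constraint the paper relaxes. The synthetic data, cluster count, fuzzifier m = 2, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two synthetic groups standing in for clusters of expert-judgement vectors.
X = np.vstack([rng.normal(0, 0.3, size=(10, 2)),
               rng.normal(2, 0.3, size=(10, 2))])

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means with the usual row-normalized memberships."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)  # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard update: u_ik proportional to d_ik^{-2/(m-1)}, rows sum to 1.
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

U, centers = fuzzy_cmeans(X)
print(U.round(2))
```

The row-sum-to-one property visible in `U` is the normalization condition that, per the abstract, lets outliers pull the cluster centers; the proposed method relaxes it.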
25.
In this paper, we consider the problem of model robust design for simultaneous parameter estimation among a class of polynomial regression models with degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion, first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and the technique of maximin principle, we derive a model robust optimal design in the sense of having highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design has remarkable performance for parameter estimation in all of the considered rival models.
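The Ψα criterion generalizes D-optimality, which maximizes the determinant of the normalized information matrix. As a small concrete instance for one member of the rival class (a quadratic on [-1, 1]), the sketch below compares an equally spaced design against the known D-optimal support {-1, 0, 1}; the designs chosen are illustrative, not the paper's maximin design.

```python
import numpy as np

def info_det(points, degree=2):
    """det of the normalized information matrix M = X'X / n for polynomial regression."""
    x = np.asarray(points, dtype=float)
    X = np.vander(x, degree + 1, increasing=True)  # columns 1, x, x^2, ...
    return np.linalg.det(X.T @ X / len(x))

equispaced = [-1, -0.5, 0, 0.5, 1]
optimal = [-1, 0, 1]  # D-optimal support for a quadratic on [-1, 1]
print(info_det(equispaced), info_det(optimal))
```

The three-point design attains det(M) = 4/27, strictly beating the equispaced design; a maximin Ψα design must balance such gains across every degree up to k simultaneously.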
26.
Staudte, R. G. & Zhang, J. Lifetime Data Analysis (1997), 3(4): 383-398
The p-value evidence for an alternative to a null hypothesis regarding the mean lifetime can be unreliable if based on asymptotic approximations when there is only a small sample of right-censored exponential data. However, a guarded weight of evidence for the alternative can always be obtained without approximation, no matter how small the sample, and has some other advantages over p-values. Weights of evidence are defined as estimators of 0 when the null hypothesis is true and 1 when the alternative is true, and they are judged on the basis of the ensuing risks, where risk is mean squared error of estimation. The evidence is guarded in that a preassigned bound is placed on the risk under the hypothesis. Practical suggestions are given for choosing the bound and for interpreting the magnitude of the weight of evidence. Acceptability profiles are obtained by inversion of a family of guarded weights of evidence for two-sided alternatives to point hypotheses, just as confidence intervals are obtained from tests; these profiles are arguably more informative than confidence intervals, and are easily determined for any level and any sample size, however small. They can help understand the effects of different amounts of censoring. They are found for several small size data sets, including a sample of size 12 for post-operative cancer patients. Both singly Type I and Type II censored examples are included. An examination of the risk functions of these guarded weights of evidence suggests that if the censoring time is of the same magnitude as the mean lifetime, or larger, then the risks in using a guarded weight of evidence based on a likelihood ratio are not much larger than they would be if the parameter were known.
27.
This paper presents a method of discriminant analysis especially suited to longitudinal data. The approach is in the spirit of canonical variate analysis (CVA) and is similarly intended to reduce the dimensionality of multivariate data while retaining information about group differences. A drawback of CVA is that it does not take advantage of special structures that may be anticipated in certain types of data. For longitudinal data, it is often appropriate to specify a growth curve structure (as given, for example, in the model of Potthoff & Roy, 1964). The present paper focuses on this growth curve structure, utilizing it in a model-based approach to discriminant analysis. For this purpose the paper presents an extension of the reduced-rank regression model, referred to as the reduced-rank growth curve (RRGC) model. It estimates discriminant functions via maximum likelihood and gives a procedure for determining dimensionality. This methodology is exploratory only, and is illustrated by a well-known dataset from Grizzle & Allen (1969).
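The RRGC model extends reduced-rank regression, whose core idea can be sketched in a few lines: fit ordinary least squares, then project the coefficient matrix onto the leading directions of the fitted values. This is the standard identity-weighted reduced-rank solution, not the RRGC estimator itself, and the dimensions and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, q, r = 100, 4, 6, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))  # true rank-2 coefficients
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
# Reduce the rank by truncating the SVD of the fitted values.
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
proj = Vt[:r].T @ Vt[:r]   # projection onto the top-r response directions
B_rr = B_ols @ proj
print(np.linalg.matrix_rank(B_rr))
```

The rank-r coefficient matrix plays the same dimension-reducing role that the discriminant functions play in the RRGC model.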
28.
For a moderate or large number of regression coefficients, shrinkage estimates towards an overall mean are obtained by Bayes and empirical Bayes methods. For a special case, the Bayes and empirical Bayes shrinking weights are shown to be asymptotically equivalent as the amount of shrinkage goes to zero. Based on comparisons between Bayes and empirical Bayes solutions, a modification of the empirical Bayes shrinking weights designed to guard against unreasonable overshrinking is suggested. A numerical example is given.
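A textbook James-Stein-type formula illustrates the kind of empirical Bayes shrinkage toward an overall mean that the abstract discusses; it is a stand-in for, not a reproduction of, the paper's estimator, and the positive-part clipping plays the same guarding-against-overshrinkage role the paper motivates. The data and known variance v are assumptions.

```python
import numpy as np

def eb_shrink(beta_hat, v):
    """Empirical Bayes shrinkage of estimates beta_hat ~ N(theta_i, v) toward their mean."""
    k = len(beta_hat)
    bbar = beta_hat.mean()
    ss = ((beta_hat - bbar) ** 2).sum()
    # Positive-part James-Stein weight: guards against overshrinking past the mean.
    w = max(0.0, 1.0 - (k - 3) * v / ss)
    return bbar + w * (beta_hat - bbar)

rng = np.random.default_rng(2)
theta = rng.normal(0.0, 0.5, size=20)               # true coefficients
beta_hat = theta + rng.normal(0.0, 1.0, size=20)    # noisy estimates, v = 1
beta_eb = eb_shrink(beta_hat, v=1.0)
print(np.mean((beta_eb - theta) ** 2), np.mean((beta_hat - theta) ** 2))
```

Shrinkage preserves the overall mean while pulling each coefficient toward it, reducing the spread of the estimates.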
29.
Summary.  Functional magnetic resonance imaging has become a standard technology in human brain mapping. Analyses of the massive spatiotemporal functional magnetic resonance imaging data sets often focus on parametric or non-parametric modelling of the temporal component, whereas spatial smoothing is based on Gaussian kernels or random fields. A weakness of Gaussian spatial smoothing is underestimation of activation peaks or blurring of high curvature transitions between activated and non-activated regions of the brain. To improve spatial adaptivity, we introduce a class of inhomogeneous Markov random fields with stochastic interaction weights in a space-varying coefficient model. For given weights, the random field is conditionally Gaussian, but marginally it is non-Gaussian. Fully Bayesian inference, including estimation of weights and variance parameters, can be carried out through efficient Markov chain Monte Carlo simulation. Although motivated by the analysis of functional magnetic resonance imaging data, the methodological development is general and can also be used for spatial smoothing and regression analysis of areal data on irregular lattices. An application to stylized artificial data and to real functional magnetic resonance imaging data from a visual stimulation experiment demonstrates the performance of our approach in comparison with Gaussian and robustified non-Gaussian Markov random-field models.  
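The blurring-versus-adaptivity point can be seen in one dimension: a homogeneous kernel smooths across a sharp activation boundary, while weights that adapt to local signal values preserve it. This toy comparison is not the paper's MRF model; the step signal, window width, and weight bandwidths are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.where(np.arange(100) < 50, 0.0, 1.0)  # sharp "activation" boundary
y = signal + 0.1 * rng.normal(size=100)

def smooth(y, weight_fn, width=5):
    """Local weighted average with window half-width `width`."""
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - width), min(len(y), i + width + 1)
        w = weight_fn(y[lo:hi], y[i])
        out[i] = np.sum(w * y[lo:hi]) / np.sum(w)
    return out

# Homogeneous weights blur the jump; value-adaptive weights mostly keep it.
homog = smooth(y, lambda nb, c: np.ones_like(nb))
adaptive = smooth(y, lambda nb, c: np.exp(-((nb - c) / 0.2) ** 2))

edge = slice(45, 55)
print(np.abs(homog[edge] - signal[edge]).mean(),
      np.abs(adaptive[edge] - signal[edge]).mean())
```

Near the boundary the adaptive smoother's error is markedly smaller, which is the behaviour the stochastic interaction weights are designed to achieve in two or three dimensions.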
30.
Abstract

In the present paper we develop bootstrap tests of hypothesis, based on simulation, for the transition probability matrix arising in the context of a multi-state model. The bootstrap test statistic is based on the paper of Tattar and Vaman (2008), which develops a statistic for testing problems concerning the transition probability matrix of a non-homogeneous Markov process.
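A parametric bootstrap test for a transition probability matrix can be sketched for a simple time-homogeneous two-state chain; the statistic (maximum absolute deviation from the hypothesized matrix), the chain, and the resampling scheme are illustrative assumptions, not the censored multi-state procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_chain(P, n, rng, start=0):
    """Simulate n steps of a Markov chain with transition matrix P."""
    states = [start]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def estimate_P(chain, k):
    """Maximum-likelihood transition matrix from observed transitions."""
    counts = np.zeros((k, k))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.full((k, k), 1.0 / k), where=rows > 0)

P0 = np.array([[0.7, 0.3], [0.4, 0.6]])   # hypothesized transition matrix
chain = simulate_chain(P0, 300, rng)      # "observed" data, generated under H0
t_obs = np.abs(estimate_P(chain, 2) - P0).max()

# Parametric bootstrap: resample chains under H0 and recompute the statistic.
B = 200
t_boot = np.array([np.abs(estimate_P(simulate_chain(P0, 300, rng), 2) - P0).max()
                   for _ in range(B)])
p_value = (t_boot >= t_obs).mean()
print(p_value)
```

The p-value is the fraction of bootstrap statistics at least as extreme as the observed one; the paper's construction replaces this toy statistic with one built for censored multi-state data.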
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号