  Paid full text   1164
  Free   27
  Free (domestic)   3
Management   89
Demography   5
Collected works   1
Theory and methodology   1
General   120
Sociology   6
Statistics   972
  2023   1
  2022   4
  2021   5
  2020   8
  2019   23
  2018   31
  2017   64
  2016   23
  2015   25
  2014   40
  2013   398
  2012   88
  2011   28
  2010   21
  2009   31
  2008   39
  2007   23
  2006   21
  2005   38
  2004   28
  2003   16
  2002   27
  2001   24
  2000   14
  1999   27
  1998   22
  1997   23
  1996   10
  1995   8
  1994   12
  1993   10
  1992   15
  1991   7
  1990   3
  1989   2
  1988   4
  1987   4
  1986   1
  1985   3
  1984   6
  1983   7
  1981   5
  1980   3
  1979   1
  1978   1
A total of 1194 results were found (search time: 906 ms).
21.
This paper provides a saddlepoint approximation to the distribution of the sample version of Kendall's τ, a measure of association between two samples. The saddlepoint approximation is compared with the Edgeworth and normal approximations and with the bootstrap resampling distribution. A numerical study shows that for small sample sizes the saddlepoint approximation outperforms both the normal and the Edgeworth approximations. The paper also gives an analytical comparison between approximate and exact cumulants of the sample Kendall's τ when the two samples are independent.
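As a point of reference for the approximations being compared, here is a minimal Python sketch (not the authors' code; the sample size, tail point, and simulation settings are assumptions) that contrasts the normal approximation to the null distribution of the sample Kendall's τ with a permutation estimate, using the standard null variance 2(2n+5)/(9n(n−1)).

```python
import numpy as np
from math import erf, sqrt

def kendall_tau(x, y):
    """Sample Kendall's tau: (concordant - discordant) / (n choose 2)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return s / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
n = 8                      # small sample, where the normal approximation is weakest
t0 = 0.5                   # tail point at which to compare P(tau >= t0)

# Permutation estimate of the exact null distribution (x and y independent)
x = np.arange(n, dtype=float)
taus = np.array([kendall_tau(x, rng.permutation(x)) for _ in range(20000)])
p_perm = np.mean(taus >= t0)

# Normal approximation: tau ~ N(0, 2(2n+5) / (9 n (n-1))) under independence
sigma = np.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
p_norm = 0.5 * (1 - erf(t0 / (sigma * sqrt(2))))

print(f"P(tau >= {t0}): permutation {p_perm:.4f}, normal approx {p_norm:.4f}")
```

At such a small n the two tail probabilities can differ noticeably, which is the regime where the paper reports the saddlepoint approximation to be most useful.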
22.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and of other parameters describing the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter in the probability of a correct response to account for the effect of guessing the correct answer in multiple-choice tests. In this article, we consider fitting such models using the Gibbs sampler. A data augmentation method for analyzing a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method; the proposed method is an order of magnitude more efficient than the existing one. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method that minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown to be one component of the second criterion; as a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
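A normal-ogive item response function with a guessing threshold is commonly written as P(correct | θ) = c + (1 − c)Φ(a(θ − b)); the short sketch below simply evaluates this curve for illustrative parameter values (the values and names are assumptions, not taken from the article, and the Gibbs sampler itself is not reproduced).

```python
import numpy as np
from scipy.stats import norm

def three_param_normal_ogive(theta, a, b, c):
    """P(correct | theta) = c + (1 - c) * Phi(a * (theta - b)).

    theta: examinee ability, a: discrimination, b: difficulty,
    c: lower asymptote (guessing) -- parameter names are illustrative.
    """
    return c + (1.0 - c) * norm.cdf(a * (theta - b))

theta = np.linspace(-3, 3, 7)          # grid of ability values
probs = three_param_normal_ogive(theta, a=1.2, b=0.5, c=0.2)
for t, p in zip(theta, probs):
    print(f"theta = {t:+.1f}  P(correct) = {p:.3f}")
```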
23.

In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n − 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n − 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive. We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
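As a hedged illustration of the kind of Monte Carlo evaluation described above (a minimal sketch with assumed design constants, not the study's actual protocol), the example below estimates the empirical Type I error of the classical OLS t-test for a zero slope when the regressor is fixed and trended and the errors are AR(1) with positive autocorrelation.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)
n, rho, n_rep, alpha = 30, 0.6, 5000, 0.05   # assumed design constants
x = np.arange(n, dtype=float)                 # fixed, trended regressor
t_crit = t_dist.ppf(1 - alpha / 2, df=n - 2)

def ar1_errors(n, rho, rng):
    """Stationary AR(1) errors with unit innovation variance."""
    e = np.empty(n)
    e[0] = rng.normal(scale=1.0 / np.sqrt(1 - rho**2))
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.normal()
    return e

rejections = 0
for _ in range(n_rep):
    y = ar1_errors(n, rho, rng)               # true slope is zero
    xc, yc = x - x.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)                 # OLS slope estimate
    resid = yc - b * xc
    s2 = (resid @ resid) / (n - 2)            # residual variance
    t_stat = b / np.sqrt(s2 / (xc @ xc))
    rejections += abs(t_stat) > t_crit

print(f"Empirical Type I error at nominal {alpha}: {rejections / n_rep:.3f}")
```

With ρ well above zero the empirical rejection rate typically exceeds the nominal 5% level by a wide margin, which is the validity problem the alternative tests are designed to address.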
24.
Several procedures have been proposed for testing the hypothesis that all off-diagonal elements of the correlation matrix of a multivariate normal distribution are equal. If the hypothesis of equal correlation can be accepted, it is then of interest to estimate, and perhaps test hypotheses about, the common correlation. In this paper, two versions of five different test statistics are compared via simulation in terms of the adequacy of the normal approximation, coverage probabilities of confidence intervals, control of Type I error, and power. The results indicate that two test statistics based on the average of the Fisher z-transforms of the sample correlations should be used in most cases. A statistic based on the sample eigenvalues also gives reasonable results for confidence intervals and lower-tailed tests.
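As a rough sketch of a Fisher z-based approach (a generic version under simplifying assumptions, not necessarily either of the paper's two statistics), the example below averages the z-transforms of several sample correlations, treating each as approximately N(atanh ρ, 1/(n − 3)) and ignoring the dependence among correlations estimated from the same matrix.

```python
import numpy as np
from scipy.stats import norm

def common_correlation_ci(r_values, n, level=0.95):
    """Average Fisher z-transforms of sample correlations (each based on
    n observations) and back-transform a normal-theory confidence interval
    for an assumed common correlation."""
    z = np.arctanh(np.asarray(r_values, dtype=float))   # Fisher z-transform
    k = len(z)
    z_bar = z.mean()
    se = np.sqrt(1.0 / ((n - 3) * k))                    # var(z_i) approx 1/(n - 3)
    half = norm.ppf(0.5 + level / 2) * se
    return np.tanh(z_bar), np.tanh(z_bar - half), np.tanh(z_bar + half)

# Hypothetical sample correlations from a matrix assumed to have equal off-diagonals
est, lo, hi = common_correlation_ci([0.42, 0.35, 0.48, 0.39], n=50)
print(f"common correlation estimate {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```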
25.
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 in order to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg, and Monte Carlo, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (when applied to the normal density they are even more accurate than the special-purpose methods designed for the normal), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorised over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely solvable.
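To make the Gauss-Legendre approach concrete (a minimal modern Python sketch, obviously not the CYBER/IBM code used in the experiments; the interval and number of nodes are illustrative), the example below integrates the standard normal density over [−a, a] and compares the result with the closed-form probability.

```python
import numpy as np
from math import erf, sqrt, pi

def gauss_legendre_normal(a, n_nodes=20):
    """Integrate the standard normal density over [-a, a] with Gauss-Legendre
    quadrature after mapping the nodes from [-1, 1] to [-a, a]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    x = a * nodes                        # linear map t -> a*t
    w = a * weights
    phi = np.exp(-0.5 * x**2) / sqrt(2 * pi)
    return np.sum(w * phi)

a = 3.0
approx = gauss_legendre_normal(a)
exact = erf(a / sqrt(2))                 # P(|Z| <= a) in closed form
print(f"Gauss-Legendre: {approx:.12f}  exact: {exact:.12f}")
```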
26.
We present a variational estimation method for the mixed logistic regression model. The method is based on a lower-bound approximation of the logistic function [Jaakkola, T.S. and Jordan, M.I., 2000, Bayesian parameter estimation via variational methods. Statistics and Computing, 10, 25–37]. Based on this approximation, an EM algorithm can be derived that considerably simplifies the maximization problem in that it does not require the numerical evaluation of integrals over the random effects. We assess the performance of the variational method for the mixed logistic regression model in a simulation study and an empirical data example, and compare it with Laplace's method. The results indicate that the variational method is a viable choice for estimating the fixed effects of the mixed logistic regression model, provided that the number of outcomes within each cluster is sufficiently high.
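The Jaakkola-Jordan result referred to above is a lower bound on the logistic function, σ(x) ≥ σ(ξ) exp{(x − ξ)/2 − λ(ξ)(x² − ξ²)} with λ(ξ) = tanh(ξ/2)/(4ξ). The sketch below only checks this bound numerically; it is not the authors' variational EM implementation, and the value of ξ is an arbitrary illustrative choice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jj_lower_bound(x, xi):
    """Jaakkola-Jordan lower bound on the logistic function sigma(x),
    tight at x = +/- xi, with lambda(xi) = tanh(xi/2) / (4*xi)."""
    lam = np.tanh(xi / 2.0) / (4.0 * xi)
    return sigmoid(xi) * np.exp((x - xi) / 2.0 - lam * (x**2 - xi**2))

x = np.linspace(-6, 6, 13)
xi = 2.5                                   # variational parameter (illustrative value)
bound = jj_lower_bound(x, xi)
assert np.all(bound <= sigmoid(x) + 1e-12), "bound should never exceed sigma(x)"
for xv, s, b in zip(x, sigmoid(x), bound):
    print(f"x = {xv:+.1f}  sigma = {s:.4f}  JJ bound = {b:.4f}")
```

Because the bound is quadratic in x on the log scale, the awkward logistic integrals over the random effects are replaced by Gaussian-like terms, which is what yields the simplification mentioned in the abstract.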
27.
It is known that the normal approximation holds for sums of nonnegative random variables, W, under the commonly employed couplings. In this work, we use Stein's method to obtain a general theorem giving a non-uniform exponential bound for the normal approximation based on monotone size-biased couplings of W. Applications of the main result that give the bound on the normal approximation for a binomial random variable, for the number of bulbs on at the terminal time in the lightbulb process, and for the number of m-runs are also provided.
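As a simple numerical companion to the binomial application (this is just the textbook continuity-corrected normal approximation, not the non-uniform Stein-method bound derived in the paper; the binomial parameters are arbitrary), the sketch below compares exact and approximate tail probabilities.

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 50, 0.3                      # illustrative binomial parameters
mu, sigma = n * p, np.sqrt(n * p * (1 - p))

for k in (10, 15, 20, 25):
    exact = binom.cdf(k, n, p)
    approx = norm.cdf((k + 0.5 - mu) / sigma)    # continuity-corrected CLT approximation
    print(f"P(W <= {k}): exact {exact:.4f}  normal approx {approx:.4f}  "
          f"abs error {abs(exact - approx):.4f}")
```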
28.
Measures of statistical divergence are used to assess mutual similarities between the distributions of multiple variables through a variety of methodologies, including Shannon entropy and Csiszár divergence. Modified measures of statistical divergence are introduced throughout the present article. These modified measures are related to the Lin–Wong (LW) divergence applied to past lifetime data. Accordingly, the relationship between Fisher information and the LW divergence measure is explored when applied to past lifetime data. Throughout this study, a number of relations are proposed between various assessment methods that implement the Jensen–Shannon, Jeffreys, and Hellinger divergence measures. Relations between the LW measure and the Kullback–Leibler (KL) measure for past lifetime data are also examined. Furthermore, the present study discusses the relationship between the proposed ordering scheme and the distance interval between the LW and KL measures under certain conditions.
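For concreteness, the sketch below computes the classical Kullback–Leibler, Jeffreys, Jensen–Shannon, and Hellinger divergences for two discrete distributions; the Lin–Wong measure for past lifetime data introduced in the article is not reproduced here, and the example distributions are arbitrary.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jeffreys(p, q):
    """Jeffreys divergence: symmetrized KL."""
    return kl(p, q) + kl(q, p)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence with the equally weighted mixture."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p, q):
    """Hellinger distance between discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])
print("KL(p||q) =", kl(p, q))
print("Jeffreys =", jeffreys(p, q))
print("Jensen-Shannon =", jensen_shannon(p, q))
print("Hellinger =", hellinger(p, q))
```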
29.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and on characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulations. The simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk and Anderson–Darling tests.
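The power comparisons mentioned above follow the usual Monte Carlo recipe. The sketch below estimates empirical size and power for a stand-in normality check (marginal Shapiro–Wilk tests with a Bonferroni correction, applied to data with independent heavy-tailed marginals); it is emphatically not one of the paper's proposed tests, and all settings are assumptions, but it shows the simulation structure.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2)
n, p, alpha, n_rep = 50, 3, 0.05, 1000    # assumed simulation settings

def reject_normality(x, alpha):
    """Stand-in omnibus check: Shapiro-Wilk on each marginal, Bonferroni-adjusted."""
    pvals = [shapiro(x[:, j])[1] for j in range(x.shape[1])]
    return min(pvals) < alpha / x.shape[1]

size = np.mean([
    reject_normality(rng.standard_normal(size=(n, p)), alpha)    # normal data: empirical size
    for _ in range(n_rep)
])
power = np.mean([
    reject_normality(rng.standard_t(df=3, size=(n, p)), alpha)   # heavy-tailed alternative
    for _ in range(n_rep)
])
print(f"empirical size {size:.3f}, power against independent t(3) marginals {power:.3f}")
```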
30.
When we are given only a transform, such as the moment-generating function of a distribution, it is rare that we can simulate random variables efficiently. Possible approaches, such as the inverse transform method with numerical inversion of the transform, are computationally very expensive. However, the saddlepoint approximation is known to be exact for the normal, gamma, and inverse Gaussian distributions and remarkably accurate for a large number of others. We explore the efficient use of the saddlepoint approximation for simulating distributions and provide three examples of the accuracy of these simulations.
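To show what a saddlepoint approximation built from a cumulant-generating function looks like in practice, the sketch below applies it to the Gamma(α, 1) CGF, K(s) = −α log(1 − s), one of the cases where the approximation is exact after renormalization. The shape parameter and grid are illustrative choices, and this is not the authors' simulation code.

```python
import numpy as np
from scipy.stats import gamma

alpha = 3.0                                  # illustrative shape parameter

# Cumulant-generating function of Gamma(alpha, 1) and its second derivative
K = lambda s: -alpha * np.log(1.0 - s)
K2 = lambda s: alpha / (1.0 - s) ** 2        # K''(s)

def saddlepoint_density(x):
    """Saddlepoint density: solve K'(s_hat) = x, then
    f_hat(x) = exp(K(s_hat) - s_hat * x) / sqrt(2 * pi * K''(s_hat))."""
    s_hat = 1.0 - alpha / x                  # closed-form root of K'(s) = alpha/(1 - s) = x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2.0 * np.pi * K2(s_hat))

x = np.linspace(0.05, 25.0, 2000)
f_hat = saddlepoint_density(x)
f_hat /= np.sum(0.5 * (f_hat[1:] + f_hat[:-1]) * np.diff(x))   # renormalize (trapezoid rule)
print("max abs error vs exact Gamma pdf:",
      np.max(np.abs(f_hat - gamma.pdf(x, a=alpha))))
```

After renormalization the approximation matches the exact gamma density up to numerical integration error, which is consistent with the exactness claim in the abstract.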