31.
To assess the influence of observations on parameter estimates, case deletion diagnostics are commonly used in linear regression models. For linear models with correlated errors, we study the influence of observations on tests of a linear hypothesis under single and multiple case deletions. The changes in the likelihood ratio test and the F test are derived theoretically, and both tests are shown to be completely determined by two proposed generalized externally studentized residuals. An illustrative example with a real data set is also reported.
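A minimal numpy sketch of the idea on simulated data (the model, hypothesis, and data below are hypothetical, and the sketch uses ordinary least squares rather than the paper's correlated-errors setting): the influence of each case on an F test for a linear hypothesis is measured by refitting with that case deleted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)

def f_stat(X, y):
    """F statistic for H0: beta[2] = 0, via full vs reduced fit."""
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r
    rss_full, rss_red = rss(X), rss(X[:, :2])
    df = X.shape[0] - X.shape[1]
    return (rss_red - rss_full) / (rss_full / df)

base = f_stat(X, y)
# Influence of each case: change in F when that case is deleted.
influence = [f_stat(np.delete(X, i, 0), np.delete(y, i)) - base
             for i in range(n)]
print(base, max(influence, key=abs))
```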
32.
In this paper, we study the identification of Bayesian regression models when an ordinal covariate is subject to unidirectional misclassification. Xia and Gustafson [Bayesian regression models adjusting for unidirectional covariate misclassification. Can J Stat. 2016;44(2):198–218] obtained model identifiability for non-binary regression models when there is a binary covariate subject to unidirectional misclassification. In the current paper, we establish the moment identifiability of regression models for misclassified ordinal covariates with more than two categories, based on the forms of observable moments. Computational studies are conducted that confirm the theoretical results. We apply the method to two datasets: one from the Medical Expenditure Panel Survey (MEPS), and the other from the Translational Research Investigating Underlying Disparities in Acute Myocardial Infarction Patients' Health Status (TRIUMPH).
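As a loose illustration of the setting only (not the paper's analytical identifiability argument), the sketch below simulates an ordinal covariate whose categories can only be misrecorded downward and compares the observable category proportions and mean with the true ones; all probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
true_x = rng.choice([0, 1, 2], size=100_000, p=[0.5, 0.3, 0.2])

# Unidirectional misclassification: category k > 0 may be recorded as k - 1.
p_down = {1: 0.15, 2: 0.25}          # hypothetical misclassification rates
obs_x = true_x.copy()
for k, p in p_down.items():
    hit = (true_x == k) & (rng.random(true_x.size) < p)
    obs_x[hit] = k - 1

for name, x in [("true", true_x), ("observed", obs_x)]:
    props = [round(np.mean(x == k), 3) for k in range(3)]
    print(name, props, round(x.mean(), 3))   # observable moments shift downward
```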
33.
This paper proposes an overlapping-based test statistic for testing the equality of two exponential distributions with different scale and location parameters. The test statistic is defined as the maximum likelihood estimate of Weitzman's overlapping coefficient, which measures the agreement of two densities. The proposed test statistic is derived in closed form. Simulated critical points are generated for the proposed statistic for various sample sizes and significance levels via Monte Carlo simulation. The statistical power of the proposed test is computed via simulation studies and compared with that of the existing log-likelihood ratio test.
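A sketch of the underlying estimator under stated assumptions (two-parameter exponential MLEs plugged into a numerical integral, rather than the paper's closed form): Weitzman's coefficient is OVL = ∫ min(f1, f2) dx, which equals 1 when the two densities coincide.

```python
import numpy as np

def fit_exp(x):
    """MLEs for the two-parameter exponential: location = min, scale = mean - min."""
    mu = x.min()
    return mu, x.mean() - mu

def ovl(sample1, sample2, grid_size=20_000):
    (m1, s1), (m2, s2) = fit_exp(sample1), fit_exp(sample2)
    lo = min(m1, m2)
    hi = lo + 20 * max(s1, s2)           # far enough into both tails
    x = np.linspace(lo, hi, grid_size)
    f1 = np.where(x >= m1, np.exp(-(x - m1) / s1) / s1, 0.0)
    f2 = np.where(x >= m2, np.exp(-(x - m2) / s2) / s2, 0.0)
    return np.minimum(f1, f2).sum() * (x[1] - x[0])   # Riemann sum

rng = np.random.default_rng(2)
a = rng.exponential(1.0, size=50) + 0.5   # location 0.5, scale 1
b = rng.exponential(2.0, size=50) + 0.5   # same location, different scale
print(ovl(a, a), ovl(a, b))               # ~1 under equality, smaller otherwise
```

Critical points would then be simulated under the null of equal distributions, rejecting for small values of the statistic.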
34.
We introduce two classes of multivariate log-skewed distributions with normal kernel: the log canonical fundamental skew-normal (log-CFUSN) and the log unified skew-normal. We also discuss some properties of the log-CFUSN family of distributions. These new classes of log-skewed distributions include the log-normal and multivariate log-skew-normal families as particular cases. We discuss some issues related to Bayesian inference in the log-CFUSN family, focusing mainly on how to model the prior uncertainty about the skewing parameter. Based on the stochastic representation of the log-CFUSN family, we propose a data augmentation strategy for sampling from the posterior distributions. The proposed family is used to analyse US national monthly precipitation data. We conclude that a high-dimensional skewing function leads to a better model fit.
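A minimal sketch of the stochastic representation in the simplest univariate case, the log-skew-normal (a particular member of the family; the parameter names below are ours): a skew-normal variate is built from a half-normal skewing component plus an independent normal, then exponentiated.

```python
import numpy as np

def rlogskewnormal(n, xi, omega, delta, rng):
    """Draws via the representation Z = delta*|W0| + sqrt(1-delta^2)*W1,
    Y = exp(xi + omega*Z); |W0| is the latent variable a data-augmentation
    sampler would impute."""
    w0 = np.abs(rng.standard_normal(n))       # half-normal skewing component
    w1 = rng.standard_normal(n)
    z = delta * w0 + np.sqrt(1 - delta**2) * w1
    return np.exp(xi + omega * z)

rng = np.random.default_rng(3)
y = rlogskewnormal(100_000, xi=0.0, omega=0.5, delta=0.8, rng=rng)
print(y.mean(), np.median(y))   # positive support, mean well above median
```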
35.
In the context of an objective Bayesian approach to the multinomial model, Dirichlet(a, …, a) priors with a < 1 have previously been shown to be inadequate in the presence of zero counts, suggesting that the uniform prior (a = 1) is the preferred candidate. In the presence of many zero counts, however, this prior may not be satisfactory either. A model selection approach is proposed, allowing for the possibility of zero parameters corresponding to zero count categories. This approach results in a posterior mixture of Dirichlet distributions and marginal mixtures of beta distributions, which seem to avoid the problems that potentially result from the various proposed Dirichlet priors, in particular in the context of extreme data with zero counts.
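A small numerical illustration with hypothetical counts: under a Dirichlet(a, …, a) prior the posterior is Dirichlet(a + n_1, …, a + n_K), so every zero-count cell still receives positive posterior mean mass (a + 0)/(Ka + n), which is what the model selection approach above avoids.

```python
import numpy as np

counts = np.array([10, 5, 0, 0, 0])          # hypothetical data with zero cells
n, K = counts.sum(), counts.size
for a in (0.5, 1.0):
    post_mean = (a + counts) / (K * a + n)   # Dirichlet posterior means
    print(f"a={a}: {np.round(post_mean, 3)}")
# Both priors assign nonzero probability to the three empty categories.
```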
36.
Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and other parameters describing the difficulty and discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter in the probability of a correct response to account for the effect of guessing in multiple-choice tests. In this article we consider fitting such models using the Gibbs sampler. A data augmentation method for analyzing a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method; the proposed method is an order of magnitude more efficient than the existing one. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method that minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown to be one component of the second criterion; as a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
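A minimal sketch of the item response function in question, a normal-ogive model with a guessing parameter c (the item parameters below are hypothetical): the response probability is c + (1 − c)Φ(aθ − b), so even very low-ability examinees answer correctly with probability at least c.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_correct(theta, a, b, c):
    """Normal-ogive item response function with guessing floor c."""
    return c + (1.0 - c) * phi(a * theta - b)

for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_correct(theta, a=1.2, b=0.5, c=0.2), 3))
```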
37.
In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n − 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n − 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive. We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
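A sketch of one of the compared procedures, a GLS t-test for the slope under AR(1) errors with ρ treated as known (estimating ρ would give the "estimated GLS" variants); the data are simulated for illustration, with a fixed, trended regressor.

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, beta = 60, 0.6, 0.3
x = np.arange(n, dtype=float)                  # fixed, trended regressor
e = np.zeros(n)
for t in range(1, n):                          # AR(1) errors
    e[t] = rho * e[t - 1] + rng.standard_normal()
y = 1.0 + beta * x + e

# AR(1) correlation matrix rho^|i-j|; whiten the data with its Cholesky factor.
S = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(S)
Xd = np.column_stack([np.ones(n), x])
Xs, ys = np.linalg.solve(L, Xd), np.linalg.solve(L, y)

bhat, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # GLS = OLS on whitened data
resid = ys - Xs @ bhat
s2 = resid @ resid / (n - 2)
cov = s2 * np.linalg.inv(Xs.T @ Xs)
t_slope = bhat[1] / np.sqrt(cov[1, 1])           # refer to t with n - 2 df
print(bhat[1], t_slope)
```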
38.
We consider the use of modern likelihood asymptotics in the construction of confidence intervals for the parameter that determines the skewness of the distribution of the maximum/minimum of an exchangeable bivariate normal random vector. Simulation studies were conducted to investigate the accuracy of the proposed methods and to compare them with available alternatives; accuracy is evaluated in terms of both coverage probability and expected interval length. We furthermore illustrate the suitability of our proposals with two data sets, consisting, respectively, of measurements taken on the brains of 10 monozygotic twins and measurements of the mineral content of bones in the dominant and non-dominant arms of 25 elderly women.
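A simple simulation sketch (not the likelihood asymptotics of the paper) of the phenomenon being parameterized: the maximum of an exchangeable bivariate normal vector is right-skewed, and its skewness is governed by the correlation ρ, vanishing as ρ → 1.

```python
import numpy as np

rng = np.random.default_rng(5)
for rho in (0.0, 0.5, 0.9):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
    m = xy.max(axis=1)                                   # componentwise maximum
    skew = np.mean((m - m.mean()) ** 3) / m.std() ** 3   # sample skewness
    print(rho, round(skew, 3))   # skewness shrinks toward 0 as rho -> 1
```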
39.
Empirical Bayes estimates of the local false discovery rate can reflect uncertainty about the estimated prior by supplementing their Bayesian posterior probabilities with confidence levels as posterior probabilities. This use of coherent fiducial inference with hierarchical models generates set estimators that propagate uncertainty to varying degrees. Some of the set estimates approach estimates from plug-in empirical Bayes methods for high numbers of comparisons and can come close to the usual confidence sets given a sufficiently low number of comparisons.
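A plug-in sketch of the point estimate being supplemented, the two-groups local false discovery rate lfdr(z) = π0 f0(z)/f(z), with the mixture density f estimated by a Gaussian kernel and π0 conservatively set to 1 (data simulated; bandwidth hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
z = np.concatenate([rng.standard_normal(9000),        # null statistics
                    rng.normal(3.0, 1.0, 1000)])      # non-null statistics

def gauss_kde(z, grid, bw=0.2):
    """Gaussian kernel density estimate of the mixture density f."""
    u = (grid[:, None] - z[None, :]) / bw
    return np.exp(-0.5 * u**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

grid = np.array([0.0, 2.0, 4.0])
f0 = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)      # theoretical null N(0,1)
lfdr = np.minimum(1.0, 1.0 * f0 / gauss_kde(z, grid)) # pi0 = 1 (conservative)
print(np.round(lfdr, 3))    # near 1 at z = 0, small at z = 4
```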
40.
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 in order to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg, and Monte Carlo, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (when applied to the normal density they are even more accurate than the special-purpose methods designed for it), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorized over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely soluble.
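A sketch of the core comparison on modern hardware (the CYBER 205 / IBM 3081 timings are of course specific to that era and not reproduced here): 16-node Gauss-Legendre quadrature versus plain Monte Carlo for integrating the standard normal density over [-2, 2], whose true value is about 0.9545.

```python
import numpy as np

def normal_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

a, b = -2.0, 2.0

# Gauss-Legendre: nodes/weights on [-1, 1], rescaled to [a, b].
nodes, weights = np.polynomial.legendre.leggauss(16)
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
gl = 0.5 * (b - a) * (weights @ normal_pdf(x))

# Plain Monte Carlo with uniform draws on [a, b].
rng = np.random.default_rng(7)
u = rng.uniform(a, b, size=100_000)
mc = (b - a) * normal_pdf(u).mean()

print(gl, mc)   # quadrature is accurate to many digits with only 16 nodes
```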