11.
Semiparametric mixed models are useful in biometric and econometric applications, especially for longitudinal data. Maximum penalized likelihood estimators (MPLEs) have been shown by Zhang and co-workers to work well for both linear coefficients and nonparametric functions. This paper considers the role of influence diagnostics in the MPLE by extending the case deletion and subject deletion analysis of linear models to accommodate a nonparametric component. We focus on influence measures for the fixed effects and provide formulae that are analogous to those for simpler models and readily computable with the MPLE algorithm. We also establish an equivalence between the case or subject deletion model and a mean shift outlier model, from which we derive tests for outliers. The proposed influence diagnostics are illustrated through a longitudinal hormone study on progesterone and a simulated example.
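As a hedged illustration of the case-deletion idea this abstract generalizes, the sketch below computes the classical case-deletion influence measure (Cook's distance) for the fixed effects of an ordinary linear model. The paper's formulae extend this building block to MPLEs with a nonparametric component; the simulated data and all names here are illustrative, not from the paper.

```python
# Minimal sketch: classical case-deletion influence (Cook's distance) for a
# linear model -- the simpler diagnostic the paper extends to MPLEs.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)                  # residual variance estimate
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages (diag of hat matrix)

# Cook's distance: scaled change in fitted values when case i is deleted,
# computed without refitting the model n times.
cooks_d = (resid**2 / (p * s2)) * (h / (1 - h) ** 2)
print("largest Cook's distance:", cooks_d.max(), "at case", cooks_d.argmax())
```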
12.
Perceived Performance, Customer Satisfaction, and Customer Loyalty: A Micro-Level Examination
Over recent decades, a deepening understanding of the relationship between customer satisfaction and firm profit has greatly advanced research on customer satisfaction, with customer loyalty regarded as the mediator through which satisfaction affects profit. However, no consistent conclusion has been reached about the micro-level process by which customer satisfaction influences loyalty. Using reputation, repurchase intention, and intention to recommend as intermediate variables, we conduct an empirical study of the causal links among perceived performance, customer satisfaction, and customer loyalty. The results indicate no direct causal link between customer satisfaction and customer loyalty; therefore, when formulating market-share strategies, firms should not confine themselves to improving perceived performance and satisfaction but should work to build customer loyalty from multiple angles.
13.
To assess the influence of observations on the parameter estimates, case deletion diagnostics are commonly used in linear regression models. For linear models with correlated errors, we study the influence of observations on tests of a linear hypothesis using single and multiple case deletions. The changes in the likelihood ratio test and the F test are derived theoretically, and these tests are shown to be completely determined by two proposed generalized externally studentized residuals. An illustrative example with a real data set is also reported.
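For orientation, the sketch below computes ordinary externally studentized residuals for an OLS model with i.i.d. errors, using the standard leave-one-out variance identity; the paper proposes generalized versions of this quantity for correlated errors. Data and settings are illustrative.

```python
# Minimal sketch: externally studentized residuals under i.i.d. errors,
# the classical special case of the generalized residuals the paper proposes.
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages
e = y - X @ XtX_inv @ X.T @ y                 # ordinary residuals
s2 = e @ e / (n - p)

# Leave-one-out variance estimate s_(i)^2 without n separate refits.
s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)
t_ext = e / np.sqrt(s2_i * (1 - h))           # externally studentized residuals
print(np.round(t_ext[:5], 3))
```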
14.
This paper proposes an overlapping-based test statistic for testing the equality of two exponential distributions with different scale and location parameters. The test statistic is defined as the maximum likelihood estimate of Weitzman's overlapping coefficient, which measures the agreement of two densities, and is derived in closed form. Simulated critical points are generated for various sample sizes and significance levels via Monte Carlo simulation. Statistical powers of the proposed test are computed via simulation studies and compared with those of the existing log-likelihood ratio test.
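To make the statistic concrete, here is a hedged sketch that plugs the usual two-parameter exponential MLEs into Weitzman's overlapping coefficient, OVL = ∫ min(f1, f2) dx. The paper derives a closed form; this sketch simply evaluates the integral numerically, and the data, seeds, and bounds are illustrative.

```python
# Minimal sketch: MLE-based estimate of Weitzman's overlapping coefficient
# for two two-parameter exponential samples (numerical integral, not the
# paper's closed form).
import numpy as np
from scipy.integrate import quad

def exp_pdf(x, mu, theta):
    return np.where(x >= mu, np.exp(-(x - mu) / theta) / theta, 0.0)

def ovl_mle(x1, x2):
    # MLEs for the two-parameter exponential: location = sample minimum,
    # scale = mean excess over the minimum.
    mu1, mu2 = x1.min(), x2.min()
    th1, th2 = x1.mean() - mu1, x2.mean() - mu2
    f = lambda t: min(exp_pdf(t, mu1, th1), exp_pdf(t, mu2, th2))
    lo = min(mu1, mu2)
    hi = lo + 20 * max(th1, th2)          # effectively the whole support
    return quad(f, lo, hi, points=[mu1, mu2], limit=200)[0]

rng = np.random.default_rng(2)
x1 = 1.0 + rng.exponential(2.0, size=100)   # mu = 1.0, theta = 2.0
x2 = 1.5 + rng.exponential(2.5, size=100)   # mu = 1.5, theta = 2.5
print("estimated OVL:", round(ovl_mle(x1, x2), 3))  # near 1 = similar densities
```

Simulating this statistic under the null (equal parameters) many times and taking the lower quantile gives the kind of critical point the paper tabulates.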
15.
This work studies the smoothness attained by the methods most frequently used to choose the smoothing parameter for splines: cross validation, generalized cross validation, and the corrected Akaike and Bayesian information criteria, implemented with penalized least squares. It is concluded that the amount of smoothness depends strongly on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant, and its effect enters only through the smoothing parameter.
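As a hedged sketch of one criterion from this comparison, the code below selects the smoothing parameter of a penalized-least-squares smoother (a Whittaker-type second-difference penalty) by generalized cross validation. The smoother, grid, and data are illustrative stand-ins, not the study's setup.

```python
# Minimal sketch: choosing the smoothing parameter by GCV for a
# penalized-least-squares smoother with a second-difference penalty.
import numpy as np

rng = np.random.default_rng(3)
n = 120
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=n)

D = np.diff(np.eye(n), n=2, axis=0)           # second-difference operator
DtD = D.T @ D

def gcv(lam):
    S = np.linalg.inv(np.eye(n) + lam * DtD)  # smoother matrix S_lambda
    rss = np.sum((y - S @ y) ** 2)
    edf = np.trace(S)                         # effective degrees of freedom
    return n * rss / (n - edf) ** 2           # GCV(lambda)

lams = 10.0 ** np.arange(-2, 6)
print("GCV-chosen lambda:", min(lams, key=gcv))
```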
16.
Item response models are essential tools for analyzing results from many educational and psychological tests. Such models quantify the probability of a correct response as a function of unobserved examinee ability and other parameters describing the difficulty and discriminatory power of the test questions. Some of these models also incorporate a threshold parameter in the probability of a correct response to account for guessing in multiple-choice tests. In this article we consider fitting such models using the Gibbs sampler. A data augmentation method for analyzing a normal-ogive model with a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampler; the proposed method is an order of magnitude more efficient than the existing one. A second objective is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with a decision-theoretic method that minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic appears as one component of the second criterion; as a consequence, the Bayesian methods proposed here are contrasted with the classical approach based on the likelihood ratio test. Several examples illustrate the methods.
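To show the flavor of the data augmentation involved, the sketch below runs the Albert-style truncated-normal augmentation step for a plain normal-ogive (probit) model, holding item parameters fixed and omitting the guessing parameter entirely; the article's sampler is substantially richer. Everything here is a simplified, hedged stand-in.

```python
# Minimal sketch: truncated-normal data augmentation for a normal-ogive IRT
# model with known item parameters and no guessing parameter (a simplified
# stand-in for the article's sampler).
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(4)
I, J = 200, 10                          # examinees, items
a = rng.uniform(0.8, 1.6, J)            # discriminations (treated as known)
b = rng.normal(0, 1, J)                 # difficulties (treated as known)
theta_true = rng.normal(0, 1, I)
y = (rng.uniform(size=(I, J)) < norm.cdf(np.outer(theta_true, a) - b)).astype(int)

theta = np.zeros(I)
for sweep in range(500):
    # 1) Augment: Z_ij ~ N(a_j*theta_i - b_j, 1), truncated to Z > 0 if y = 1
    #    and Z < 0 if y = 0 (bounds expressed in standard units around m).
    m = np.outer(theta, a) - b
    lo = np.where(y == 1, -m, -np.inf)
    hi = np.where(y == 1, np.inf, -m)
    Z = m + truncnorm.rvs(lo, hi, size=(I, J), random_state=rng)
    # 2) Draw theta_i | Z from its normal full conditional: regression of
    #    Z_ij + b_j on a_j with a N(0, 1) prior on theta_i.
    prec = 1.0 + a @ a
    mean = ((Z + b) @ a) / prec
    theta = mean + rng.normal(size=I) / np.sqrt(prec)

print("corr(theta_draw, theta_true):", round(np.corrcoef(theta, theta_true)[0, 1], 3))
```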
17.
In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the ordinary least squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n - 2 df built on the generalized least squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the maximum likelihood (ML) estimator, the F-test for fixed effects based on the restricted maximum likelihood (REML) estimator in the mixed-model approach, two t-tests with n - 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test, and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR coincides with FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive. We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
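The sketch below reproduces one tiny instance of the phenomenon this study quantifies: the empirical size of the classical OLS t-test for the slope when errors are AR(1) and the regressor is fixed and trended. Sample size, autocorrelation, and replication count are illustrative, not the article's design.

```python
# Minimal sketch: size distortion of the classical OLS t-test for the slope
# under AR(1) errors with a fixed, trended regressor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, rho, n_sim, alpha = 50, 0.6, 2000, 0.05
x = np.arange(n, dtype=float)            # fixed, trended regressor
rejections = 0
for _ in range(n_sim):
    e = np.empty(n)
    e[0] = rng.normal(scale=1 / np.sqrt(1 - rho**2))   # stationary start
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    y = 0.0 * x + e                      # true slope is zero
    rejections += stats.linregress(x, y).pvalue < alpha
print("empirical size:", rejections / n_sim, "(nominal 0.05)")
```

With positive autocorrelation the rejection rate comes out well above the nominal 0.05, which is exactly why the corrected and GLS-based tests compared in the article are needed.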
18.
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg, and Monte Carlo, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (applied to the normal density, they are even more accurate than the special-purpose methods designed for it), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorized over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely soluble.
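As a hedged, modern-hardware echo of this experiment, the sketch below applies Gauss-Legendre quadrature to the standard normal density and checks it against the closed form, the same kind of accuracy comparison the study runs across methods and machines. Interval and node counts are illustrative.

```python
# Minimal sketch: Gauss-Legendre quadrature of the standard normal density
# over [-3, 3], checked against the exact value via the error function.
import numpy as np
from math import erf, sqrt

def gauss_legendre(f, a, b, n):
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)             # map to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
exact = erf(3 / sqrt(2))                 # P(-3 < Z < 3)
for n in (4, 8, 16):
    approx = gauss_legendre(phi, -3.0, 3.0, n)
    print(f"n={n:2d}  approx={approx:.12f}  abs error={abs(approx - exact):.2e}")
```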
19.
In the estimation of a proportion p by group testing (pooled testing), retesting of units within positive groups has received little attention due to the minimal gain in precision compared with testing additional units. If acquiring additional units is impractical or too expensive, and testing is not destructive, we show that retesting can be a useful option. We propose retesting a random grouping of units from positive groups and compare it with nested halving procedures suggested by others. We develop an estimator of p for the proposed method and examine its variance properties. Using simulation, we compare retesting methods across a range of group testing situations and show that for most realistic scenarios our method is more efficient.
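For context, the sketch below simulates the standard group-testing MLE of p with no retesting, the baseline that the paper's retesting procedure refines. Prevalence, group size, and group count are illustrative.

```python
# Minimal sketch: the standard group-testing (pooled-testing) MLE of a
# prevalence p, without the retesting refinement the paper proposes.
import numpy as np

rng = np.random.default_rng(6)
p_true, k, n_groups = 0.05, 10, 200     # prevalence, group size, no. of groups

# A group tests positive iff at least one of its k units is positive.
units = rng.uniform(size=(n_groups, k)) < p_true
positives = units.any(axis=1).sum()

# Binomial likelihood for positive groups: P(positive) = 1 - (1 - p)^k,
# so the MLE is p_hat = 1 - (1 - T/n)^(1/k).
p_hat = 1 - (1 - positives / n_groups) ** (1 / k)
print(f"true p = {p_true}, estimate = {p_hat:.4f} from {positives} positive groups")
```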
20.
The Hodrick–Prescott (HP) filter is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into trend and cyclical components. Because the HP filter is a basic econometric tool, a precise understanding of its nature is necessary. This article contributes to the literature by listing several (penalized) least-squares problems related to the HP filter, three of which are newly introduced here, and by showing their properties. We also remark on their generalization.
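To fix ideas, the sketch below implements the basic penalized least-squares problem underlying the HP filter: the trend solves min ||y - tau||^2 + lam * ||D2 tau||^2, giving tau = (I + lam * D2'D2)^{-1} y in closed form. The simulated series is illustrative; lam = 1600 is the conventional choice for quarterly data.

```python
# Minimal sketch: the HP filter as a penalized least-squares problem,
# solved in closed form via its normal equations.
import numpy as np

def hp_filter(y, lam=1600.0):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)             # second-difference operator
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend                         # trend, cyclical component

rng = np.random.default_rng(7)
t = np.arange(160)
y = 0.02 * t + 0.3 * np.sin(t / 6) + rng.normal(scale=0.1, size=160)
trend, cycle = hp_filter(y)
print("cycle mean ~ 0:", round(cycle.mean(), 4))
```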