171.
This article develops nonparametric tests of independence between two stochastic processes satisfying β-mixing conditions. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, we take advantage of a generalized entropic measure so as to build a whole family of nonparametric tests of independence. We derive asymptotic normality and local power using the functional delta method for kernels. As a corollary, we also develop a class of entropy-based tests for serial independence. The latter are nuisance parameter free, and hence also qualify for dynamic misspecification analyses. We then investigate the finite-sample properties of our serial independence tests through Monte Carlo simulations. They perform quite well, entailing more power against some nonlinear AR alternatives than two popular nonparametric serial-independence tests.
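For intuition, the sketch below gauges serial dependence by the squared Hellinger distance (one member of the generalized entropy family) between a kernel estimate of the joint density of (X_t, X_{t-1}) and the product of the marginal kernel estimates. This is an illustrative sketch, not the article's statistic; the Gaussian kernel, bandwidth, and grid size are arbitrary assumptions.

```python
import numpy as np

def hellinger_dependence(x, y, bw=0.3, grid=50):
    """Squared Hellinger distance between the joint kernel density
    estimate of (x, y) and the product of the two marginal kernel
    density estimates, integrated on a rectangular grid."""
    gx = np.linspace(x.min(), x.max(), grid)
    gy = np.linspace(y.min(), y.max(), grid)
    c = 1.0 / (bw * np.sqrt(2.0 * np.pi))
    kx = c * np.exp(-0.5 * ((gx[:, None] - x[None, :]) / bw) ** 2)  # grid x n
    ky = c * np.exp(-0.5 * ((gy[:, None] - y[None, :]) / bw) ** 2)
    fx, fy = kx.mean(axis=1), ky.mean(axis=1)        # marginal KDEs
    fxy = np.einsum('in,jn->ij', kx, ky) / len(x)    # product-kernel joint KDE
    diff = (np.sqrt(fxy) - np.sqrt(np.outer(fx, fy))) ** 2
    dx, dy = gx[1] - gx[0], gy[1] - gy[0]
    return 0.5 * diff.sum() * dx * dy                # Riemann approximation
```

Under independence the statistic is close to zero; strong serial dependence, e.g. pairs (X_t, X_{t-1}) from an AR(1) process, pushes it up.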
172.
Econometric Reviews, 2013, 32(4): 325-340
Abstract

Nonnested models are sometimes tested using a simulated reference distribution for the uncentred log likelihood ratio statistic. This approach has been recommended for the specific problem of testing linear and logarithmic regression models. The general asymptotic validity of the reference distribution test under correct choice of error distributions is questioned. The asymptotic behaviour of the test under incorrect assumptions about error distributions is also examined. In order to complement these analyses, Monte Carlo results for the case of linear and logarithmic regression models are provided. The finite sample properties of several standard tests for testing these alternative functional forms are also studied, under normal and nonnormal error distributions. These regression-based variable-addition tests are implemented using asymptotic and bootstrap critical values.
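The simulated-reference-distribution idea itself is generic and can be sketched as follows; the function names and the Monte Carlo p-value convention (adding one to numerator and denominator) are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def simulated_reference_pvalue(stat_obs, simulate_null, statistic, reps=199, seed=0):
    """Monte Carlo p-value: draw `reps` datasets from the null model,
    recompute the statistic on each, and report the proportion of
    simulated statistics at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    sims = np.array([statistic(simulate_null(rng)) for _ in range(reps)])
    return (1 + np.sum(sims >= stat_obs)) / (reps + 1)
```

The article's point is that the validity of such a test hinges on the error distribution assumed inside `simulate_null` being correct.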
173.
The power-generalized Weibull probability distribution is very often used in survival analysis, mainly because different values of its parameters allow for various shapes of hazard rate: monotone increasing/decreasing, ∩-shaped, ∪-shaped, or constant. Modified chi-squared tests based on maximum likelihood estimators of parameters that are shown to be √n-consistent are proposed. Power of these tests against exponentiated Weibull, three-parameter Weibull, and generalized Weibull distributions is studied using Monte Carlo simulations. The left-tailed rejection region is recommended because the tests are biased with respect to the above alternatives when the right-tailed rejection region is used. It is also shown that the power of the McCulloch test investigated can be two or three times that of the Nikulin–Rao–Robson test with respect to the alternatives considered when expected cell frequencies are about 5.
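For reference, one common parameterization of the power-generalized Weibull (an assumption here; the article may use another) has survival function S(t) = exp{1 - (1 + (t/σ)^ν)^(1/γ)}, whose hazard rate takes the shapes listed above depending on ν and γ:

```python
import numpy as np

def pgw_hazard(t, sigma, nu, gamma):
    """Hazard rate of the power-generalized Weibull with survival
    S(t) = exp(1 - (1 + (t/sigma)**nu)**(1/gamma)), obtained by
    differentiating the cumulative hazard (1 + (t/sigma)**nu)**(1/gamma) - 1."""
    t = np.asarray(t, dtype=float)
    u = (t / sigma) ** nu
    return (nu / (gamma * sigma)) * (t / sigma) ** (nu - 1) * (1.0 + u) ** (1.0 / gamma - 1.0)
```

With ν = γ = 1 the hazard is constant (the exponential case); ν > 1 with γ = 1 gives a monotone increasing hazard.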
174.
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed by an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach to detect aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
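A minimal version of the odds-ratio CUSUM update for Bernoulli (finite Poisson) trials, with the odds multiplier R and the per-item success probabilities taken as given, and ignoring the article's Markov-chain significance computation and smoothing, might look like:

```python
import math

def cusum_path(outcomes, probs, R=2.0):
    """Risk-adjusted CUSUM for Bernoulli (finite Poisson) trials: under
    the aberrant alternative the odds of a correct response are
    multiplied by R. Each step adds the log-likelihood ratio of the
    observed outcome and the path is clipped at zero."""
    s, path = 0.0, []
    for y, p in zip(outcomes, probs):
        w = y * math.log(R) - math.log(1.0 - p + R * p)  # log-LR of outcome y
        s = max(0.0, s + w)
        path.append(s)
    return path
```

A run of correct answers on items the examinee should miss drives the path up; expected behavior keeps it pinned near zero.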
175.
176.
In this paper a specification strategy is proposed for the determination of the orders in ARMA models. The strategy is based on two newly defined concepts: the q-conditioned partial autoregressive function and the p-conditioned partial moving average function. These concepts are similar to the generalized partial autocorrelation function which has recently been suggested for order determination. The main difference is that they are defined and employed in connection with an asymptotically efficient estimation method instead of the rather inefficient generalized Yule-Walker method. The specification is performed by using sequential Wald-type tests. In contrast to traditional hypothesis testing, these tests use critical values which increase with the sample size at an appropriate rate.
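The paper's q- and p-conditioned functions are not reproduced here, but the flavor of sequential testing with sample-size-dependent critical values can be sketched for the pure AR case: test the highest-order coefficient with a Wald statistic against a critical value that grows with n (the log n rate below is an illustrative choice, not the paper's).

```python
import numpy as np

def select_ar_order(x, pmax=6):
    """Illustrative sequential order selection: fit AR(p) by OLS for
    p = pmax, pmax - 1, ..., and keep the first order whose top-lag
    Wald statistic exceeds a critical value increasing with n."""
    n = len(x)
    for p in range(pmax, 0, -1):
        Y = x[p:]
        X = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ Y
        resid = Y - X @ beta
        s2 = resid @ resid / (len(Y) - p)
        wald = beta[-1] ** 2 / (s2 * XtX_inv[-1, -1])  # top-lag Wald statistic
        if wald > np.log(n):                           # increasing critical value
            return p
    return 0
```

Because the critical value grows with n, the probability of overfitting the order shrinks as the sample size increases, unlike fixed-level sequential testing.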
177.
Two simple tests which allow for unequal sample sizes are considered for testing hypotheses about the common mean of two normal populations. The first is an exact test of size α that combines the two single-sample t-statistics, made exact through random allocation of α between the two t-tests. The test statistic of the second test is a weighted average of the two t-statistics with random weights. It is shown that the first test is more efficient than the two individual t-tests with respect to Bahadur asymptotic relative efficiency. It is also shown that the null distribution of the second test statistic, which is similar to the one based on the normalized Graybill-Deal test statistic, converges to a standard normal distribution. Finally, we compare the small-sample properties of these tests, those given in Zhou and Mathew (1993), and some tests given in Cohen and Sackrowitz (1984) in a simulation study. In this study, we find that the second test performs better than the tests given in Zhou and Mathew (1993) and is comparable to the ones given in Cohen and Sackrowitz (1984) with respect to power.
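For context, the Graybill-Deal common-mean estimator referenced above weights each sample mean by its estimated precision; a quick sketch of the standard estimator (not the article's randomized-weight test statistic):

```python
import numpy as np

def graybill_deal(x1, x2):
    """Graybill-Deal estimator of the common mean of two normal samples:
    weight each sample mean by its estimated precision n_i / s_i**2,
    using the unbiased sample variances."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    w1 = len(x1) / x1.var(ddof=1)
    w2 = len(x2) / x2.var(ddof=1)
    return (w1 * x1.mean() + w2 * x2.mean()) / (w1 + w2)
```

The estimate always lies between the two sample means and leans toward the sample with the smaller estimated variance.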
178.
Abstract

In a quantitative linear model with errors following a stationary Gaussian, first-order autoregressive or AR(1) process, Generalized Least Squares (GLS) on raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods of estimation of the slope parameters when the autocorrelation parameter of the error AR(1) process, ρ, is known. In practice, ρ is generally unknown. In the so-called two-stage estimation procedures, ρ is then estimated first before using the estimate of ρ to transform the data and estimate the slope parameters by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (i.e., three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (i.e., one noniterative, one iterative) are based on a new estimator of ρ that we propose: it is provided by the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, REstricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not been compared to the others yet. We also study the validity of the testing procedures derived from GLS and the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error AR(1) process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected value in finite samples and their asymptotic variance are derived theoretically. 
Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform the best overall. The asymptotic equivalence of two-stage estimation procedures, besides REML, is observed empirically. Differences related to the nature, fixed or random (uncorrelated or autocorrelated), of the explanatory variable are also discussed.
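The noniterative two-stage procedure built on r(1) can be sketched as follows; this is a Cochrane-Orcutt-style illustration under assumed variable names, not the article's code:

```python
import numpy as np

def two_stage_ar1(x, y):
    """Noniterative two-stage estimation for y = a + b*x + e with AR(1)
    errors: (1) OLS on the raw data; (2) estimate rho by r(1), the lag-1
    sample autocorrelation of the OLS residuals; (3) OLS on the
    rho-differenced (prewhitened) data."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta_ols
    rho = (e[1:] @ e[:-1]) / (e @ e)                     # r(1)
    ys, xs = y[1:] - rho * y[:-1], x[1:] - rho * x[:-1]  # prewhitening
    Xs = np.column_stack([np.ones_like(xs), xs])
    slope = np.linalg.lstsq(Xs, ys, rcond=None)[0][1]
    return slope, rho
```

The slope is recovered directly from the transformed regression; only the intercept is rescaled by (1 - ρ) under the transformation.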
179.
This article presents a bivariate distribution for analyzing the failure data of mechanical and electrical components in the presence of a forewarning or primer event, whose occurrence marks the inception of the failure mechanism that will cause the component failure after an additional random time. The characteristics of the proposed distribution are discussed, and several point estimators of the parameters are illustrated and compared, in the case of complete sampling, via a large Monte Carlo simulation study. Confidence intervals based on asymptotic results are derived, and procedures are given for testing the independence between the occurrence time of the forewarning event and the additional time to failure. Numerical applications based on failure data of cable insulation specimens and of two-component parallel systems are illustrated.
180.
Approximations to the noncentral F distribution yield surprisingly accurate results for power and sample size problems arising from linear hypotheses about normal random variables. The approximations are easy to use with a desk (or hand-held) calculator that computes cumulative F probabilities. These approximations are particularly advantageous for testing the hypothesis that differences among the means are small against the alternative that the differences are large.
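One classical approximation of this type is Patnaik's, used here purely as an illustration (the article's own approximations may differ): the noncentral F(df1, df2; λ) is replaced by a scaled central F, so power can be read from central-F probabilities alone.

```python
from scipy import stats

def power_exact(df1, df2, lam, alpha=0.05):
    """Power of the size-alpha F test, from the noncentral F distribution."""
    fcrit = stats.f.ppf(1.0 - alpha, df1, df2)
    return stats.ncf.sf(fcrit, df1, df2, lam)

def power_patnaik(df1, df2, lam, alpha=0.05):
    """Patnaik's approximation: F'(df1, df2; lam) is approximated by
    ((df1 + lam) / df1) * F(h, df2) with h = (df1 + lam)**2 / (df1 + 2*lam),
    so the power needs only central-F probabilities."""
    fcrit = stats.f.ppf(1.0 - alpha, df1, df2)
    h = (df1 + lam) ** 2 / (df1 + 2.0 * lam)
    return stats.f.sf(fcrit * df1 / (df1 + lam), h, df2)
```

At λ = 0 the approximation collapses to the central F, so the computed "power" equals the test size α, as it should.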